Too many free trial accounts used on this machine. Please upgrade to pro. We have this limit in place to prevent abuse. Please let us know if you believe this is a mistake
please help me
You make an interesting point about token output speed, etc. However, Cursor doesn't necessarily send our requests directly to Anthropic as-is. They might employ various techniques, such as context caching or their own caching layers, which could explain the speed. While I agree on the context-length point, I disagree with the suggestion that they are being deceptive about which models they use.

For example, instead of sending the full context to Anthropic, they might truncate it to just enough tokens before making the request. They mention the maximum context for coding in their documentation. They could also use retrieval-augmented generation (RAG) or other techniques before calling Anthropic.

Ultimately, their business model is their prerogative. As customers, we can choose a different product if this one doesn't meet our needs. It would be ideal if they were transparent about each model request to their backend. They do offer the option to use your own keys, albeit with limited functionality and features.

Anyway, I agree with you: I've noticed the model doesn't seem as clever as it used to be, but we cannot say with certainty that they are using a model different from the one they advertise.
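To illustrate the truncation idea above: a minimal sketch of trimming context to a token budget before an API call, keeping the most recent chunks. This is purely hypothetical and not Cursor's actual implementation; the ~4-characters-per-token estimate is an assumption, and a real editor would use the model's actual tokenizer.

```python
def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token (an assumption)."""
    return max(1, len(text) // 4)

def truncate_context(chunks: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent chunks that fit within the token budget.

    Walks the chunk list from newest to oldest, accumulating chunks
    until the budget is exhausted, then restores chronological order.
    """
    kept: list[str] = []
    budget = max_tokens
    for chunk in reversed(chunks):  # newest chunk is last in the list
        cost = estimate_tokens(chunk)
        if cost > budget:
            break  # older chunks are dropped once the budget runs out
        kept.append(chunk)
        budget -= cost
    return list(reversed(kept))
```

A provider could run something like this (with a real tokenizer) so that only a bounded slice of the conversation ever reaches the upstream model, which would reduce both cost and latency without switching models.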
So, what are the best alternatives? Cursor's answers are stuck on 'generating' for me.
Hey, the ‘generating’ bug should be fixed in the latest update.
If you haven’t been prompted to update, you can get the latest version for your platform at Downloads | Cursor - The AI Code Editor