I waste so much time wrangling models to make accurate code updates, but they just regurgitate old, cached code over and over. Sonnet 3.5 is the worst. At this point it's becoming so bad that the model is an active blocker.
Additionally, the context window in Cursor is for some reason wildly limited. Models cannot maintain context across three prompts in a single thread. They immediately forget data or instructions I provide.
What is going on in Cursor’s app layer that is dumbing down these models?
I totally get how frustrating this is - dealing with models regurgitating old code and losing context can really slow you down.
A few things that might help:
- For the caching issue, you can try deleting and restarting the index for your codebase in the Cursor Settings page, to ensure the context being sent reflects your latest code.
- As long as you're willing to spend the extra requests, you can enable long-context mode, which prefers sending more code when needed rather than being economical with how we pick context. If you have this disabled and a situation occurs where Cursor thinks you would benefit from it, it will be suggested to you within the UI.
If you’re still running into issues after trying these, could you share a request ID from a problematic interaction? That would help us investigate what’s going on under the hood.
Thank you for the reply. I’ll give that a try. But this also doesn’t address the core issue: you can attach files to a prompt, and Cursor makes it appear that the LLM is receiving that code. But it’s very clear when you interrogate the LLM that it’s either ignoring the attached file, truncating it until it’s completely useless, or reverting to a cached version.
As best I can tell, Cursor is aggressively caching code, which makes attaching the most recent file useless. Having to go in and re-index the codebase mid-session is not practical.
It removes the developer's agency to guide the LLM, and Cursor provides no visibility into how the LLM is making its decisions.
Given the tendency of LLMs to produce false claims - Claude repeatedly insists it has analyzed the code and found the issue - this lack of transparency from Cursor is the true productivity killer.