Cursor sadly seems to have become unusable. I managed to get amazing work done with it, including quite complex things built on a paradigm that is rarely used, so the models have little to learn from.
But the last week has been horrific.
Using Claude 4, it misses rules and mainly gets into a loop of ignoring the same things over and over again. It seems to have gone from being an amazing productivity enabler to a nasty blocker.
Any plans for a fix, or is this a deliberate performance downgrade to save expenses?
@Guydivore We do not see any widespread worsening of performance or AI output quality.
While AI model providers regularly make adjustments, and we regularly make improvements as well, specific issues may need to be evaluated in detail to find the cause or a workaround.
Could you post a full, separate bug report with more info? Create Bug Report
Well, this has gotten 10 times worse. Cursor just deleted all my staged files, basically 4 days of work (which I had not committed because Cursor kept going in circles on a fix).
Hey, this can happen for several reasons. First, your chat might be too long; starting a new chat usually solves the problem. Second, you might be providing too much context with a vague prompt. Both of these factors can degrade the model’s performance.
I recommend breaking one large task down into smaller ones and handling them in separate chats.
As my colleague mentioned earlier, nothing has changed on our side.