Degrading performance in the last few days

In the last week, it seems like Cursor agent mode has gone off the rails. After weeks of development, it suddenly started causing serious damage to the codebase we spent all that time building. Has anyone else seen this sort of behaviour? And are there any suggestions on how to correct it?

I’m assuming the issue has to do with the models being used. I don’t know whether the models have changed recently. I switched from Auto to selecting different models manually, but the responses are so incredibly slow that it’s almost unusable.


#cURSORcANNAcONSPIRACY:: whenever cursor capacity drops, edible potency inversely compensates


Sonnet 4.0: “Perfect! I can see exactly what’s happening…”

This is what I keep seeing, and then the code simply fails. Boy oh boy…


It constantly loses context and hallucinates… the only way I’ve found is to commit every time it does something good. I’m back on GPT-4.1 for some relief from the mayhem!
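The commit-every-time-it-does-something-good habit can be scripted as a tiny checkpoint routine. This is just a sketch in a throwaway repo (the file name and commit message are made up for illustration); the idea is to commit after each verified agent change so a bad run can always be rolled back:

```shell
# Work in a throwaway repo so this demo is safe to run anywhere.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Pretend the agent just produced a change you've verified.
echo "agent output" > file.txt

# Checkpoint: stage everything and commit with a timestamped message.
git add -A
git commit -q -m "checkpoint: verified agent change $(date -u +%Y-%m-%dT%H:%M:%SZ)"

# Later, the agent wrecks the file; roll back to the last checkpoint.
echo "broken by agent" > file.txt
git checkout -- file.txt            # or: git reset --hard HEAD

cat file.txt
```

After the rollback, `file.txt` contains the checkpointed content again; `git reset --hard HEAD~1` steps back one checkpoint further if needed.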

What model are you using, or are you on Auto model selection?

I honestly hate the fact that they gave every student a free year of the Pro plan. It’s very unfair to those of us who pay, and on top of that we get bad performance because of the huge number of users on Cursor.

Same here. It seems to be happening on any of the models I’ve selected and on either host platform (macOS & Ubuntu). I’m not sure if this is helpful: I’m not using the “Auto” model selection, but I am using auto-run & auto-fix errors. However, if I go into the settings and add my own Anthropic key, it goes back to the speed it was before.

I was on Auto when things went sideways. I changed to Claude-3.5, but now its responses are so slow that it’s basically unusable.

Ah, yes, bouncing between models can have that effect sometimes, though if you’re deep into the slow pool, switching can occasionally be faster. It’s somewhat of a balancing act.

Have you tried fast requests and sticking with a single model?

You mean with Auto, or sticking to a single model? Once I saw the issues happening, I stayed with Claude-3.5.


Understood. Be careful with Anthropic’s models and check their work constantly. I had just mentioned this as an issue on another post.


OH! That is very helpful!
I’ll go to GPT model and see if that helps. Thank you so very much!


Of course. Hopefully it does. The GPT models need a lot more specification, though, so be careful depending on your experience as a developer. I generally switch when Gemini thinks it knows better than I do and gets insubordinate.

Over the past few weeks, Cursor’s performance has been extremely frustrating. The situation worsened noticeably after Claude 4 was added: since then, the agent has started to completely lose context, breaking code that had been stable for weeks. This has seriously compromised the tool’s productivity and reliability.

I’ve noticed that the context length allowed by Claude is much smaller than other models’, and I frequently get an error about it. Around the same time the slowdown started, my Mac stopped automatically applying changes, and I now have to confirm everything even though it’s set to apply automatically and I’m using agent mode. So I’m not sure what’s going on there. To be clear, I’m happy with Cursor; these are all just nitpicks for me.
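One quick way to guess whether a file is about to blow past a model’s context window is the rough rule of thumb of ~4 characters per token. This is only a heuristic sketch (real tokenizers differ, and the 120,000-token window below is illustrative, not an official figure for any model):

```shell
# Make a sample file of known size for the demo.
tmp=$(mktemp -d)
printf 'x = 1\n%.0s' $(seq 1 50000) > "$tmp/big.py"   # ~300k characters

# Estimate tokens: bytes / 4 is a common rough approximation for code/English.
chars=$(wc -c < "$tmp/big.py")
tokens=$((chars / 4))
echo "estimated tokens: $tokens"

# Compare against an illustrative 120,000-token window, keeping
# 2,000 tokens of headroom for the prompt and the model's reply.
window=120000
reserve=2000
if [ $((tokens + reserve)) -le "$window" ]; then
  echo "fits"
else
  echo "too big: consider splitting the file or trimming context"
fi
```

If a file lands near the limit, splitting it or attaching only the relevant sections tends to avoid the context-length errors described above.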