Omg… Cursor has become lousy and lazy since 4-Sep-2025

I have noticed that Cursor’s performance has suddenly degraded since 4-Sep-2025.

I kept the same settings as before, but it suddenly went ‘dumb’ and incompetent.

I am on a paid Pro plan with an annual subscription, which I started around Jun 2025.

Has anyone had the same experience?

Is it due to the published ‘Missing Analytics Data due to Service Degradation’ issue?

Will it ever recover?

This isn’t because of analytics data.

  1. Which models have you noticed this behavior with?
  2. Is your Cursor up to date?

Hi @Brandon_Teoh,
Thanks for sharing your experience; sorry to hear things haven’t been working as expected for you lately!

We haven’t seen any general performance degradation or widespread issues with AI behavior since September 4th, but we definitely want to help you sort out what’s going on.

Could you tell us a bit more about the specific problem or workflow you’re experiencing?

The more details you can provide, the better we can assist. If you’re open to it, submitting a Bug Report with a full description can help us investigate further. Including a Request ID (with privacy turned off; see the Request Reporting Guide) will also let us look more closely at your case.

For context, analytics settings only affect usage statistics on our end; they don’t impact Cursor’s performance or how the models function.

Just to cover all bases, here are some factors that sometimes lead to results similar to what you described. These are just general possibilities, but you may find something that matches your situation:

  • Very long chat sessions, which can cause confusion with context accumulation.
  • Processing extremely large files in one go, instead of breaking tasks down (SOLID/DRY principles can help here).
  • Custom requirements that differ from common language or framework best practices.
  • Providing very large or complex sets of rules, files, or MCPs in the context, which may overwhelm the model.
  • Having either too many rules or very few, or including many negative statements in rules, which could skew responses (see the sketch after this list).
  • Using models that are either more basic or more advanced than needed for the task at hand.
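
To make the rule-related points concrete, here is a minimal sketch of a lean, positively phrased project rule, assuming Cursor’s `.cursor/rules` MDC format; the file name (`blazor-conventions.mdc`), the globs, and the rule text are illustrative examples for a Blazor project like the one discussed here, not an official recommendation:

```
---
description: Blazor component conventions for this project
globs: "**/*.razor,**/*.razor.cs"
alwaysApply: false
---

<!-- Illustrative only: a few short “do” rules tend to steer the model
     better than a long list of “don’t” statements. -->
- Keep components small and focused; extract shared logic into injected services.
- Follow the existing folder and naming conventions before introducing new patterns.
- When editing, change only the files relevant to the task; ask before touching others.
```

A handful of short, scoped rules like this keeps the context lean, which ties back to the point above about overwhelming the model.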

If any of these points sound familiar, adjusting your workflow in those areas might make a difference. But either way, we’d appreciate any extra info you can share, and we’ll do our best to help!

1.) Auto model mode.

Enabled Models:

claude-4-sonnet
claude-4-sonnet-1m
gpt-5
claude-3.5-sonnet
grok-code-fast-1

2.) Cursor version

Version 1.1.3

Downloaded in Jun 2025.

I will try to install the latest version 1.5.

I am not sure if it is a bug or what.

I am working on a Blazor project; it was doing great and blew my mind when I got started, until 4-Sep-2025, when it suddenly changed.

The first observation was that it couldn’t update or edit existing code.

It kept saying ‘I am stuck in the same loop’.

And it required me to do the updates manually.

However, before 4-Sep-2025, it was able to do that magically, so fast that I couldn’t keep up with the process.

Now I have to help it figure out what is going on.

Nonetheless, I have just installed the latest version, 1.5.11, and am testing it out.

Hopefully the magic will come back.

Yes, I also suggest using more powerful models for complex tasks, as a project’s complexity grows over time. Sonnet 4 is good for that. Do not use the Sonnet Thinking model or the 1M model unless you really see a benefit from them. Reasoning and a large context size may actually be counterproductive unless they are required to solve tasks or issues that other models cannot.

The situation has improved after upgrading Cursor to the latest version, 1.5.11.

The magic seems to have returned 🙂

Will monitor for a few days and keep posting.

I have been having a similar experience. It mostly coincided with the release of GPT-5, and then from last week it went seriously downhill no matter which agent I used.

I broke the tasks down into simple, basic ones, and it still managed to get it all wrong, breaking and touching modules it shouldn’t have dealt with. I managed to burn through 20 dollars’ worth of tokens in less than 5 days without moving an inch forward, mostly going backwards.

Anyhow, I noticed my version was 1.5.9. Will see if updating to the latest fixes it.

I’m experiencing the same issues. I’ve opened a ticket with support. This is disappointing.

The magic has returned after upgrading to the latest version, 1.5.11.

Cursor is able to troubleshoot and fix issues without manual intervention.

1.) Auto model mode.

2.) Enabled Models:

  • claude-4-sonnet

  • gpt-5

  • grok-code-fast-1

Hopefully it will persist.

Will continue to monitor for a few days before closing this ticket.

My two colleagues and I share the same feeling: for the last two months, Cursor has become incredibly stupid, making too many errors and assumptions. We all have Pro subscriptions. We tried switching Cursor to use Sonnet 3.7, but the improvement wasn’t significant.

@Richiee which model did you use before trying Sonnet 3.7?

Sonnet 4 is overall much better than 3.5 and 3.7.

While we do not make models perform worse, there may be some differences in how models handle prompts, based on the following:

  • Adjustments by AI model provider.
  • Adjustments by us for improved tool performance.

Please let me know what differences you observe. If you can post a Request ID with privacy disabled, it would help us see what can be improved: Cursor – Getting a Request ID

I can totally understand your feelings.

Upgrading to the latest Cursor version, 1.5.11, resolved my issue.

I hope the same for you.

Thanks for sharing your experience.

Hello All.

Just dropping in my two cents here.

I purchased Cursor Pro around June/July, and I was amazed at first by the integration, the model selection, and the general workings, and was happy to pay for the Pro plan.

Since that time, though, it seems I spend more time coaching the agent and reminding it about context, and it’s very easy for any of the models to get stuck ping-ponging between two different results.

Originally I thought the issue might be due to the fact that I’m using remote SSH to a machine where the code and real work are “done”. However, I removed that piece, and I can say that in the last month or so it has gotten to the point where I’m spending more time teaching Cursor and repeating myself than it is actually helping.

We have sessions and get a decent context built up (not even close to 80%, though), and somehow, magically, it just “drops” it all; all of a sudden I’m starting over and have to rebuild sessions.

Attempts to have it read stored contexts fail pretty badly as well, and more and more I find myself wondering: is this tool helping me, or am I in fact the guinea pig training the Cursor agents and backing models at this point?

I don’t know if it’s Cursor or the models, but whether it’s GPT-5, Sonnet, or Claude, there seem to be some hidden limits where all of a sudden, after about 1-2 hours, things just “die”.

This, compounded with the almost daily “errors” about connectivity to models, retries for API calls, and such, is making it a bit frustrating to work with this tool.

I realize this is just a “gripe”, but I bring it up to point out that either the tool is degrading or my demands are growing beyond what it can provide. In the end, I’m spending more time re-explaining, re-reading, and re-doing work that was done and working before; then the agent, since context is lost, breaks things and I have to go back to square one.

I sure hope someone is paying attention, as there are almost daily posts about this now.

Thanks for sharing the experience.

I totally understand your scenarios.

Yup. I hope the Cursor AI team looks into this and helps maintain the quality of the software.

Since the day I upgraded to the latest version, 1.5.11, Cursor AI has been able to perform quite well. Although it still seems to make common mistakes, it has been able to fix them later; this is probably a broader issue with generative AI.

As of today, I am still finding it useful as a coding agent/assistant/worker for my work.

I hope the magic lasts.

Definitely lazy. It says ‘let me check (file)’, then asks you to press run, and after that, nothing; I have to keep reminding it or nothing will happen. Tiresome.

@stefi01 @Brandon_Teoh @Daniel_Smith Could you post a full, separate Bug Report with more info (Create Bug Report) so we can debug this?

Hey @condor,

Given that I’ve provided a ton of debugging before, and that last night your service just “went out” in the middle of a session and we are back to “troubleshoot your connection” as a response, despite the fact that we are running a dual-ISP setup with full cluster setups, proxy, and ingress in our environment, and not a single system had any other connectivity issue, it shows that we aren’t really “looking into this”.

Back in July I spent a good 3-4 hours sending debug output, logs, and such, and it was “tossed away”. And yesterday, when there is an obvious issue in the client related to connectivity, where the cache gets over 60% and the next inbound prompt is “too large”, your team comes back with “check your link”?

sigh

Hi @Daniel_Smith, thank you for reporting issues; this really helps us with debugging and reproducing issues that are otherwise hard to find. While we are improving our response times and support quality, please be assured that we are not throwing reports out.

This is the first I am hearing about connectivity issues when the cache gets over 60%. Which cache specifically, or do you mean context? Even with context at 60%, I am not aware of connectivity issues, and that would be an important report to file, as I cannot reproduce it now.

Where the team is right: the vast majority of connectivity issues are local or related to the network / internet provider, which is why we ask you to check the connection first. This is also why we added the Run Diagnostic feature in Network settings.

Here is a post from you where my colleague acted professionally, asking a question I would have to ask as well, despite your claim that your internet works. Many times the internet connection may work, but not well enough for streaming AI responses.
