The current behavior has become a nightmare. Despite clear instructions in new chats, the system consistently fails to follow basic rules. For example:
After even minor code changes, it insists on restarting Docker—even when explicitly told not to (e.g., “Do not restart Docker; we’re on dev”). The very next command it suggests? A Docker restart.
Directives are routinely ignored, no matter how clearly they’re stated.
This happens with MANY tasks. If I don't tell it not to do something, it does it; and if I do tell it not to, it does it anyway.
Regression in Quality
A week ago, the model worked reliably—mistakes were rare. Now, it’s so broken that I’ve been unable to code productively for four days. Tasks that previously worked flawlessly (e.g., searching Git repos) now result in:
Unwanted code rewrites
New, unsolicited code generation
Suspicious Changes
It feels like Sonnet 3.7 was silently downgraded to a far dumber model, almost as if capability is being throttled to save costs. The inconsistency is jarring: the same prompts that worked perfectly now produce unusable output.
Urgent Fix Needed
This isn’t just a minor bug; the model has become uncontrollable. Despite:
Explicit rules
Detailed, step-by-step commands
Repeated prohibitions
…it still does the exact opposite of what’s instructed. This isn’t just frustrating—it’s a total blocker.
This is beyond frustrating, and they must know it is happening.
Cursor is not usable right now. The models are extremely stupid. They ruin everything. Cursor, this is a MAJOR bug, and fixing it should be your number one priority. If I can't work tomorrow as well because of this bug, I am done with you. When I finish my credits, I will go to Windsurf and never come back.
I'm having the same experience. I once told 3.7 to add something within the code and not to modify existing code. It then proceeded to completely destroy the codebase.
I even paid for Max; it was no help whatsoever, it just costs money to create even more problems. My codebase is safe since I use Git anyway, but the delays are now well into five days of not actually getting ahead on any feature branches.
It has been told in clear rules: don't use mock data, don't make up data, don't hallucinate. What does it do? Exactly that. It removed real data from the database and put in mock hard-coded data; it's like we have gone back a couple of versions.
Something pretty serious has changed in Cursor to make it this bad. It was definitely not putting a foot wrong a week ago, and I managed to pump out loads of amazing code that works really well.
I have even made new docs.
The "add docs" feature used to work; now it just gives an error on everything, and I had written our rules in there. One moment it's cursor.rules, now it's docs. They need to be consistent about where the damn rules go. It never reads them anyway… it never has, it just ignores them, and I had already built all my Cursor rules.
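For reference, a minimal sketch of what a project rules file looks like, assuming the plain-text `.cursorrules` format at the repo root (newer Cursor builds reportedly read rules from a `.cursor/rules` directory instead; both locations and the exact rule wording here are assumptions, not confirmed behavior):

```text
# .cursorrules — project root (assumed legacy location;
# newer versions may expect .cursor/rules/ instead)
- Do not restart Docker; we are on dev.
- Never use mock, made-up, or hard-coded data.
- Do not modify or rewrite existing code unless explicitly asked.
```

Whether the model actually honors these rules is exactly what this thread is disputing.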
Considering they are an AI firm, you would think their software would not be this buggy. If they can't even get this right, what hope do we have coding with it?