WHY is Cursor not making code changes?

Describe the Bug

It seems to randomly decide not to bother writing or editing code. It just tells me to do it … if I wanted to do that, I wouldn’t need Cursor, would I?!

Why does it sometimes make the edits and then randomly decide not to? Extremely frustrating.

Steps to Reproduce

It’s random. For example, I insert an error and, instead of making the code changes, it just tells me what to change.

Expected Behavior

Do the code/file edits.

Operating System

Linux

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.2.0
VSCode Version: 1.99.3
Commit: eb5fa4768da0747b79dc34f0b79ab20dbf582020
Date: 2025-07-01T19:54:35.986Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Linux x64 6.15.4-arch2-1

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

Hey, which model are you using? Also, have you tried this in a new chat?

Agent mode, latest version, but it’s been an issue for a long time.

I find this happens a lot if you use Auto mode.

What else can I do? If I set it to another model, I get tossed out because of token limits. And this has been a problem since long before Agent mode.

I don’t use Auto much, as I get bad results with it. Perhaps you could add a rule telling it that if it proposes code changes it should also apply them, or something like that (see the sketch below).
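
For what it’s worth, here’s a minimal sketch of what such a rule might look like, assuming you’re using Cursor’s project rules (an .mdc file under .cursor/rules/). The filename and wording are just examples, and whether the model actually honors it will vary:

```
---
description: Apply proposed code changes instead of pasting them into chat
alwaysApply: true
---

When you propose a change to a file, apply the edit directly with your
file-editing tools rather than only describing the change or pasting
code into the chat. Only fall back to describing a change if you
genuinely cannot edit the file.
```

No guarantee it fixes the laziness, but it at least makes the expectation explicit on every request.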

Different LLM"s & Different models yield different results in terms of “capabilities”. Some models will utilize more effectively giving you manually the code itself and allowing you to do the changes. You must explicitly request it to provide you the code in “insert format” specifying min. of two rules etc. It is only able to provide “blanket” fixes and not “specific” ones unless you upload to a .git repository and it has “full” contextual awareness of your files/project in general.

For me, Claude 3.7 Sonnet Thinking is great overall, and Claude 3.5 is just as good if you need something simple done.

Edit: Based on my personal experience, Auto only uses ChatGPT or something more “basic” regardless of the task, as it seems to prefer speed over reliability. It’s a double-edged sword.

As stated before, this happens on all LLMs, and I do use explicit requests. The issue is that it works for a while, then gets lazy and won’t do it.

Interesting; I can certainly see how that’s concerning. It could be a settings issue or something else contributing to these hallucinations and/or complications. Best of luck!

This happens to me with OpenAI models: GPT-4.1 and o3. GPT-4.1 is more of a joke; I couldn’t get anything real out of it, and it constantly refused to edit the code. Gemini 2.5 constantly complains that Agent prevents it from making changes, or makes changes that I didn’t ask for. Over a month of use, only Claude hasn’t let me down (after all the other models, I had to restore my projects from backup because they were unrecoverable). I use Cursor exclusively in Agent mode (alas, I’m a really bad programmer).