I’m having a really frustrating experience with Cursor AI.
Here’s the pattern I keep seeing:
Cursor helps me fix a bug (great!)
But the fix introduces a new error
When I ask it to fix the new error, it brings back the original bug again
And then this cycle just keeps repeating…
It feels like I’m stuck in a loop of “fix → break → fix the first thing all over again”.
I try to be clear and specific in my prompts, but Cursor just keeps undoing its own changes. I’m honestly not sure if I’m prompting wrong or if there’s a better way to guide it.
Could someone help me understand:
What’s the best way to prevent this kind of loop?
How do you give context so Cursor doesn’t “forget” previous fixes?
Is there a specific workflow I should follow to get better results?
Thanks in advance for any advice
(P.S. Sorry if my English isn’t perfect!)
This is the nature of LLMs in their current state, even the best ones. But there is a fix. For example, if you were to use the new ChatGPT 4o to create a detailed image with specific words and layout, let’s say it makes a mistake. You ask it to fix that one mistake: it might, it might ignore you, or it might say it fixed it while actually breaking another detail. The more you ask, the more it screws up other things while fixing the one thing. And when you coax it to fix those other things, it’s back where it started and breaks the original. I went through this again last night trying to create a somewhat complex slide in GPT-4o.
What is the answer? It’s similar with Cursor regardless of which agent you choose. When the agent makes a mistake, you have a few options, in order from most likely to work:
Check if it’s something very obvious and fix it yourself, or ask it to fix it by telling it specifically what the fix should be.
Ask it to fix the error. If it fails after one or two rounds, or ends up in that loop, ask it to just analyze the problem and propose a fix, start a brand-new agent or chat context, then show that fresh agent the error and the proposed fix and see how it does. The reason this works better than what you are facing: with all LLMs, they seem to trend back toward the mistakes already sitting in their earlier context.
There are more things I could suggest, but starting with this might be what you need. If you want some more advice: work in small chunks, keep unit tests so that when you add one thing you know it did not break something else (so you constantly know it’s working and not regressing), and commit often.
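For illustration only, here is a minimal sketch of the kind of regression test I mean, written in Python with pytest. The parse_price helper and the bug it guards against are made up, but the idea is that once the agent fixes something, a test pins that fix in place so a later “fix” can’t silently undo it.

```python
# Minimal sketch of a regression test (pytest). The function name and the
# bug it guards against are hypothetical; the point is that once the agent
# fixes something, a test like this fails loudly if a later change undoes it.

def parse_price(raw: str) -> float:
    """Hypothetical helper the agent previously fixed to handle '$' and commas."""
    return float(raw.replace("$", "").replace(",", ""))


def test_parse_price_keeps_earlier_fix():
    # Locks in the earlier fix: currency symbols and thousands separators.
    assert parse_price("$1,299.50") == 1299.50


def test_parse_price_plain_number_still_works():
    # Guards the original behavior so the "new" fix can't silently break it.
    assert parse_price("42") == 42.0
```

Run the suite after every agent change; if the old bug comes back, a test fails immediately instead of you rediscovering it three prompts later.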
Additionally, I’ve found some success having the agent document the feature. I then go through the documentation manually and fix any issues, or further clarify the specifics I want resolved. Then, in a new chat context, I tell the agent the feature is not working according to the documentation and ask it to propose a few options for me to review that will help resolve the discrepancy.