I’m not opening a bug report - instead, this is a request for your comments. Perhaps you’ve felt the same (I know some of you have), or you can add your own examples here. Ideally we’d get some clarification from Cursor, or open a bug report, if I’m not alone here.
Since v46, I’ve noticed a degradation in the agent. But at the same time, Anthropic released Claude 3.7 and also changed 3.5’s behaviour, making the whole thing a complicated blame game - it could be Anthropic’s changes, or it could be Cursor’s own internal context changes.
Anyway… I ignored it for a while and kept trusting it would get better, but it has only gotten worse. You ask the agent for something, and it turns around and does something completely different, or starts cleaning up unrelated code and comments in another part of the file.
This type of thing is so frequent now that I’m losing my mind. I’ve begun to distrust the agent, and I’m now reluctant to use it without first committing all my code and clearing the history, so that I can cleanly observe any side effects - as I spend more and more time cleaning up after Claude.
“Why does AI get to do the art, while I get to wash the dishes?”
I’ve had similar issues, but found out you can add custom rules in Cursor Settings → Rules.
I added this - it might help you. But agreed, we shouldn’t have to keep fixing the AI’s weird code:
Do not edit functions that are not related to the question. Do not try to optimize existing code unless asked. Do not implement features that weren’t specifically asked for in the prompt; only work on things directly needed to implement the work requested in the prompt.
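If you’d rather not paste that into every chat, recent Cursor versions can also pick up project rules from `.cursor/rules/*.mdc` files - a minimal sketch, assuming that layout (the filename, description, and exact frontmatter fields here are placeholders, not a definitive format):

```
---
description: Keep agent edits scoped to the request
alwaysApply: true
---

- Do not edit functions unrelated to the question.
- Do not optimize existing code unless asked.
- Do not implement features that weren't specifically requested;
  only make changes directly needed for the prompt.
```

Saved as something like `.cursor/rules/scoped-edits.mdc`, it should get attached to every agent request without you having to repeat it.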
Thanks for the answer!
I’m well aware of the rules system in Cursor, but the example I gave doesn’t have much to do with rules, I’m afraid.
If you ask the LLM to add an import in your JS, it should add the import. It should not attempt to rewrite the file, remove comments, rearrange the existing imports, change indentation, or edit the parent layout.
Although, in its defense - ask a real developer to import a JSON file and you’ll get a 50-line PR tuning the syntax to their liking.
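To make the “scoped edit” concrete: for the JS import example above, the entire expected change is a single added line, something like the diff below (module and file names are made up for illustration; the `with { type: "json" }` import-attribute syntax assumes a modern JS runtime):

```diff
 import { renderChart } from "./chart.js";
+import config from "./config.json" with { type: "json" };

 renderChart(config);
```

Anything beyond that one `+` line - reordered imports, stripped comments, reindentation - is exactly the unwanted behaviour being described.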
Hey, in situations like this where the model seems to not be following your instructions, I’d recommend the following:
- Keep your chats short, and very on topic - as soon as you are working on something else, make a new chat
- Don’t bloat the agent with context that is loosely relevant / not relevant to the prompt. It can be tempting to throw loads at it for ease, but this usually ends up worse off
- Don’t insult or criticise the AI - this probably sounds stupid, but there’s a noticeable drop in quality if the chat history contains you telling the AI it’s doing a bad job or has done something wrong
- If you are using Claude 3.7, try 3.5 for a while - 3.7 is a much more creative model and does have a tendency to colour outside the lines
We’ve not changed anything major in a while regarding context or AI response generation, so I wouldn’t think this is anything different in Cursor causing this - it’s often just a case of learning how the models like to behave, and leaning into it!
I KNOW you are correct about this, but I’ll just add that this is my personal demon - I KNOW I should do it but find myself constantly just continuing a conversation until things start falling off the rails. Sooner or later I’ll break myself of the habit.
Also spot-on. Treat the model like you would a valued co-worker, including encouraging it and telling it when it aced something. The models tend to respond like actors roleplaying as developers - you hand them lines and they respond “in character”. Treat it like it’s an expert and it will respond like one… most of the time.
Thanks for a considerate response, Dan.
Some of it (short chats, less context) is common sense, and the rest is quite interesting new info!
It is also very good to get confirmation that there was no big scary Cursor context change, as it lets me focus on the models as the source of the problem. Thank you for that.
I will do better 
Ok, that helps. However, I think Claude 3.7 should be tamer by default, without the developer first having to get frustrated and then learn how to stop Claude 3.7 from just moving on and on…
I think everybody has experienced this already, so I don’t need to explain further, but recently, for example…
Claude " There are these options to appraoch this 1) … 2) … 3) … 4) … Oh, I already moved ahead implementing Option x and have changed files A, B, C, D, …
I agree, I would hope this happens on the Cursor side. I have experienced the exact same behaviour so many times, which makes me suspect Cursor has a system prompt that teaches the agent to be an agent (listen, plan, execute, explain), but 3.7 takes it way too seriously and honours every instruction, even when it wasn’t requested.
In a way it feels like we moved from an “exclusive” behaviour, where the agent might not follow all instructions (which is irritating) but in turn would not do things I never asked it to do, to a more “inclusive” approach where the agent always does everything, at the cost of me having to be overly specific every time, because I can no longer count on it understanding the nuance in my instructions.
I’ve switched back from Auto to 3.5 pretty much permanently at this point, although it isn’t without its own flaws.
Yeah, I’ve been struggling mightily to get Cursor to stop making changes I didn’t ask for. It appears that when it searches the codebase, it finds an issue, completely forks its thought process, and tries to attack both. The problem is, they’re not always bugs - sometimes it’s just the AI’s preference, such as with comments.
I am using rules extensively now, and one of the benefits of my method is that rolling over into a new chat becomes far less painful, and you are able to retain relevant context more easily in the new session. I have described how I am doing it in this post - A Deep Dive into Cursor Rules (> 0.45) - Discussion - Cursor - Community Forum - the strategy might be helpful for your situation.
100% the case. It’s going and deleting things it’s not supposed to touch, and it’s removing critical functionality!
I am so tired of this advice. Whose bright idea was it to manage context with dotfiles, notepads, and MDC files? It’s a design mess. At $20 a month for the IDE, on top of subscriptions for the models, it is like working with a team of toddlers with PhDs and ADHD. With every update the “thinking” process becomes more and more opaque, and the UI gimmickier - the same thing in shinier packaging.
I am beginning to prefer my own intelligence. It’s slow AF, but much more reliable than the products of this experiment. At this point I’d be impressed by a hello_world script under 125 lines.