Proactive / Extra changes

I love this tool so far and it has been great. However, one annoying behaviour is that it tries to be way too proactive and often messes up other settings and configs, even when it was not asked to do so.

What is the best way to make sure it will not be proactive and make additional changes without my consent, or to just disable this behaviour?

Hey, are you referring to Tab, our auto-prediction tool?

If so, we have a new version coming out which should be better and more opinionated in giving you AI completions!
It should be available in v0.45.x, which should be out in the next few days, but you can read more on it here in the meantime:

If it’s causing you issues, you should be able to toggle it off via the status bar item in the lower right!

When using the agent and giving it a specific task, it can go off and make additional adjustments or changes on its own.

Ah, I see.

To fix this, you can add some rules in your Cursor settings to instruct the agent on how to behave. You can find out more here:

Adding a line like “Only do the changes I request, and nothing more!” may be enough to get it to follow your instructions as you want.
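As a sketch of what such a rules entry might look like (the wording below is purely illustrative, not an official template):

```text
# Illustrative example only – adjust the wording to fit your workflow
- Only make the changes I explicitly request, and nothing more.
- Do not modify settings, configs, or files unrelated to the task.
- If a change outside the requested scope seems necessary, ask me first.
```

Short, direct statements like these tend to be easier for the model to follow than long lists of conditions.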


I already have the following rules, but it doesn’t listen.
Any other recommendations?

Hmm, maybe just try instructing it yourself to follow these rules and see if that makes a difference! We have some changes to rules coming in our next update, so that may help resolve this.

As you can see, the instructions are pretty clear, but it doesn’t follow the global rules. If I repeat them in every request/message, it kinda works…

Yeah, the rules should be there for sure!
Can you grab a request ID (any will do, as long as the rules are there) and I’ll look into why the AI is ignoring them?

You can do so with this guide:

Ok, I will try to keep an eye on it and report it when it happens again, thanks!

Almost every request in Cursor adds changes and touches other parts of the code. It’s very annoying; it’s like the AI doesn’t pay attention to the prompts and just does what it wants.

I’m not a developer, so I can’t catch it that easily, but it happens constantly, and after some research online I found a lot of complaints about this topic from many people.

Is Cursor planning to address it?

Hey, you can use the “Rules for AI” section to instruct the AI on when and where it should make changes, so experimenting with that should help!

Some people want lots of autonomous changes, and others want Cursor to be very precise, so we allow you to tweak the instructions to fit your usage!
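To sketch the two ends of that spectrum (the wording here is illustrative, not an official default):

```text
# Precision-leaning rules (illustrative)
- Make only the exact change requested; do not refactor surrounding code.
- Never touch files I haven't mentioned.

# Autonomy-leaning rules (illustrative)
- Feel free to fix related issues and clean up code you touch along the way.
```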

This is really annoying; am I doing something wrong? Cursor is always, and I mean always, breaking and making changes to existing functionality even when not asked to do so.

Not only does it cost us fast requests, it also takes a lot of time to fix things and figure out what is going on.

Request ID: ff1ac71f-491d-4abc-8c87-8e8f9d9f2c75

v0.46 is totally broken. It seems they’ve reduced the amount of context they provide to the model to such a level that it fails at the simplest of tasks. Rules are consistently ignored. Instructions are consistently ignored. It forgets what it’s doing after 1-2 edits then goes and makes random incorrect unrelated edits. It refuses to read files when specifically instructed to do so, even to the point where it will lie and claim that it did so.

I’m guessing that the $20/mo price point for Cursor is too low to cover the cost of the models, so the Cursor team have aggressively cut down on the context provided to the models to keep costs down, which leads to all these problems. This wouldn’t be a huge deal if they would communicate about it, but we have total silence about what’s happening and a product that no longer works. I think most users are really frustrated by this at the moment.

Another request ID: bc67e57c-dcef-4ded-9b79-9fc5cf1309a2

Is there a plan to address these issues?

I think it’s due to system prompts telling the model to avoid asking the user for help. It runs into something it doesn’t understand but won’t stop to ask the user, so it keeps trying to “fix” it (usually showing up as “let’s simplify and…”) and goes way off script. See also: System prompt makes Claude go off script and ignore rules