I love this tool so far, and it has been great. However, one annoying behavior is that it tries to be far too proactive and often messes up other settings and configs even when not asked to do so.
What is the best way to make sure it will not be proactive and make additional changes without my consent, or to disable this behavior entirely?
Hey, are you referring to Tab, our auto-prediction tool?
If so, we have a new version coming out which should be better and more opinionated in giving you AI completions!
It should be available in v0.45.x, which should be out in the next few days, but you can read more on it here in the meantime:
If it’s causing you issues, you should be able to toggle it off via the status bar item in the lower right!
Hmm, maybe just try instructing it yourself to follow these rules and see if they make a difference! We have some changes to rules in our next update, so that may help resolve this.
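If it helps, here’s a minimal sketch of the kind of rules people add for this (hypothetical wording, and the exact location depends on your version, e.g. a `.cursorrules` file in the project root or an entry under the Rules settings):

```
# Sketch of restrictive rules for the AI (wording is illustrative, not official)
- Only modify the files I explicitly mention in my prompt.
- Do not refactor, rename, or "clean up" code outside the requested change.
- Never modify settings or config files unless explicitly asked.
- If a fix would require touching other files, stop and ask me first.
```

Rules like these aren’t guaranteed to be followed on every request, but they give the model something concrete to check against and make it easier to spot when it goes off script.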
Yeah, the rules should be there for sure!
Can you grab a request ID? Any will do, as long as the rules are there, and I’ll look into why the AI is ignoring them.
Almost every request in Cursor adds changes and touches other parts of the code. It’s very annoying; it’s like the AI doesn’t pay attention to the prompts and does what it wants.
I’m not a developer, so I can’t catch it that easily, but it happens constantly, and after some research online I found a lot of complaints on this topic from many people.
This is really annoying; am I doing something wrong? Cursor is always, and I mean always, breaking and making changes to current functionality even when not asked to do so.
Not only does it cost us fast requests, it also takes a lot of time to try to fix things and figure out what is going on.
v0.46 is totally broken. It seems they’ve reduced the amount of context they provide to the model to such a level that it fails at the simplest of tasks. Rules are consistently ignored. Instructions are consistently ignored. It forgets what it’s doing after 1-2 edits then goes and makes random incorrect unrelated edits. It refuses to read files when specifically instructed to do so, even to the point where it will lie and claim that it did so.
I’m guessing that the $20/mo price point for Cursor is too low to cover the cost of the models so the Cursor team have aggressively cut down on context provided to the models to keep costs down which leads to all these problems. This wouldn’t be a huge deal if they would communicate about it but we have total silence about what’s happening and a product that no longer works. I think most users are really frustrated by this at the moment.
I think it’s due to system prompts telling the model to avoid asking the user for help. It runs into something it doesn’t understand but won’t stop to ask the user, so it keeps trying to “fix” it (usually showing up as “let’s simplify and…”) and goes way off script. See: “System prompt makes Claude go off script and ignore rules”