Hey everyone,
Yusuf here. I’m on the Cursor Pro plan and really frustrated right now: every time Cursor adds or fixes a feature, it ends up breaking something that used to work fine. It’s gotten so frequent that I tried adding a rule to my .cursorrules explicitly telling it not to break existing features when introducing new ones. Still, I’m not convinced it’s following that directive.
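For reference, the rule is along these lines (paraphrased, not the exact wording):

```
# .cursorrules
- Do not modify or remove existing features when implementing new ones.
- If a change could affect existing behavior, list the affected files and ask before proceeding.
```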
Has anyone found a way to prevent these regressions? Maybe a version lock or some specialized config? I’d love to hear any strategies or tweaks you’ve used to keep your setup stable, or at least to reduce how often new additions break old functionality. Any pointers would be appreciated. Thanks in advance!
Version: 0.44.9
VSCode Version: 1.93.1
Commit: 316e524257c2ea23b755332b0a72c50cf23e1b00
Date: 2024-12-26T21:58:59.149Z
Electron: 30.5.1
Chromium: 124.0.6367.243
Node.js: 20.16.0
V8: 12.4.254.20-electron.0
OS: Darwin arm64 24.3.0
Yusuf
I am convinced it doesn’t follow directives even half the time. I’ve tried every variation of telling it not to change the UI, not to break important logic, and not to make its own judgement calls.
I’m building an app that’s functionally simple but has complex calculations with a number of factors and carefully tuned weights. On a simple refactor I’d hand to a junior dev, Cursor’s Sonnet screws the pooch: it moves the code, then makes its own judgement calls on the weights, and suddenly all the unit tests fail.
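For illustration, this is the kind of pinning test that catches it immediately (names and values here are made-up placeholders, not my actual app):

```python
# test_weights.py -- pin the tuned weights so any silent change fails fast.
import pytest

from scoring import WEIGHTS, compute_score  # hypothetical module

def test_weights_are_pinned():
    # Golden values: update these ONLY when a human deliberately retunes.
    assert WEIGHTS == pytest.approx(
        {"recency": 0.35, "relevance": 0.45, "popularity": 0.20}
    )

def test_score_golden_case():
    # A known input/output pair; a moved-but-unchanged implementation still passes.
    factors = {"recency": 0.5, "relevance": 0.8, "popularity": 0.1}
    assert compute_score(factors) == pytest.approx(0.555)
```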
Regression errors are migraine-inducing; if anyone has solid ideas, I’d love to hear them too.
And they get worse as you go. It’s like coding with a competent savant that has the memory of a goldfish.
Nothing worse than the AI forgetting all the important decisions you’ve made together along the way.
One thing I’ve tried with some success: have it summarize exactly what it’s going to do in a .md file and require that it follow that plan to a tee. Then, before accepting the changes, have it check whether it did anything beyond what was outlined. A rough shape for that plan file is below.
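Just an example structure; adapt to your project:

```
# PLAN.md
## Goal
Refactor the scoring module into its own package. No behavior changes.

## Allowed changes
- Move scoring.py -> scoring/core.py and update imports.

## Explicitly out of scope
- Any change to weights, constants, or test expectations.

## Post-check
- The diff touches only the files listed above; all existing tests pass unchanged.
```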
Good luck.
Hey, sorry you are having a bad experience with the AI code changes!
The most common cause of the AI misbehaving is either a lack of instruction or a lack of context! This is especially frequent in big codebases, where the AI has to look at many different files to build a complete understanding of how something works.
Because Composer is a conversation between you and the AI, this is sometimes mitigated, as the AI can see your previous discussion about how the code works. But especially when moving to a new feature or area of your code, I’d recommend starting a new Composer session, to ensure the AI doesn’t get overwhelmed with context while still having everything it needs to answer the question as well as it can.
Do you have any specific examples where you think the AI could be working better? Screenshots are super helpful here, as they help us figure out what context the AI had, and therefore what it is missing.
I’d also be interested to know whether you’ve tried the agent version of Composer, which can scan your codebase and find its own context before making any changes. If so, I’d recommend prompting the AI to ‘look at the folder structure, and any relevant files, before making any changes’, to ensure it finds what it needs before answering.