Worried about Cursor degradation

Every day I hit a new error or quirk with Cursor.

Today’s new quirk: I get `// rest of your code` inserted into my files when I use the Apply option in the AI chat sidebar.

It will delete all the code there and replace it with `// rest of your code`, and it’s driving me nuts. I haven’t found a way to deal with it other than copy/pasting the changes in line by line.

The benefits Cursor used to provide are vanishing, and I feel like going back to VS Code with Claude in a browser window would be more productive, because there I have full control of the prompts.

Thoughts?

I’ve found that if Cursor is doing weird stuff like that, you can prompt it to “retype the entire file” (or the entire fix); it will then produce it in the Composer window and you can just copy and paste it in. Oddly, sometimes when you say that, it then applies the code correctly, whereas before it did massive damage.

Also, yes. I think it’s the latest x.x.3 update.

I wonder if they are going to fix it or are just enjoying the extra money we’re tossing them in the meantime.

Because I’m blowing through my tokens, and the long-context query is paid, I’m racking up a good amount of usage just because it takes 10–15 prompts to finally get a single feature or addition right. Before, a single prompt did the job just fine.

Also, adding the prompt to “Rules for AI” section seems to do absolutely nothing.


I’ve made entire note documents that work okay if I want to communicate structure or purpose, but that approach definitely does not work for rules. They mostly get my hopes up until it becomes clear it didn’t actually read them. I have a rule that it should always check imports and say “I checked imports” when it does, and I see it saying “I checked imports” while an import is missing :frowning:

You’re absolutely right, it has been. It gets significantly worse with every day that passes. A couple of weeks ago it was working great for me. It was doing a lot of really good work. I was so happy, ready to pull out the credit card and buy tons of new quick credits. I had started telling the other developers on my team (about 10 of them) that they should seriously consider switching to Cursor and take the time to learn it, because it was very powerful.

And, bam! Since then, it has gone downhill big time.

It misses very obvious things. It doesn’t apply changes. Sometimes it doesn’t even try to look for the file. Every single time now, the code is either buggy, or when it updates a file it removes functions, replaces them with placeholder code, or adds code that has nothing to do with the task.

Yesterday, it created a file named after the “comments” it had given [...authfilehere].

It is a nightmare. I am going to uninstall it and go back to Visual Studio Code, working directly with Claude or ChatGPT, because Cursor is doing way more damage than good and has become a liability at this point.

It is all hype and not ready for prime time yet, for sure.

The worst part is that the Cursor team has been very silent about addressing the mounting comments echoing the fact that it is clearly regressing.


I’ve been wasting a lot of credits just trying to get it to change the code in the files it says it’s going to change.

Multiple times now I’ve had to use the checkout feature to go back a number of steps, and as soon as I enter the same instructions again after the checkout, it amends the code and it works. Very annoying.


Have you noticed the deterioration when using Claude specifically or does it apply to any model used in Cursor?

I’ve only used Claude so far

“I apologize for the confusion, you’re correct”,

I am so sick of reading this every time I prompt. It’s like I’m assisting the AI to build good code, lol.


It’s definitely getting worse. God only knows how many credits I’ve wasted just trying to get it to do what I’ve asked, plus fixing code that I’ve not asked it to change.

I was working on a form tonight and then realised it had changed my login page, so I spent time going backwards and forwards in Composer trying to get the login page back to what it was.

It said it had sorted it and it hadn’t.

Again, I had to use different checkpoints to finally get it to fix the login page. So frustrating and a complete waste of credits.


I’ve been experiencing the same, and I’ve eventually come to the conclusion that it has something to do with Claude either having been dumbed down by Anthropic, or that it’s perception bias. I finally found an example where Claude/Cursor is completely unable to fix the problem, or does something terribly wrong, and it turned out to be reproducible in the claude.ai interface as well. While this isn’t definitive proof, it may indicate that these problems mostly stem from Anthropic. Like many others, I noticed the degradation a few weeks ago and assumed it had something to do with Cursor changing their system prompt, but it could be that Claude itself got dumber (although I’m not sure how that’s possible if the API is serving the same LLM checkpoint/weights).

It would be great if someone could produce a reproducible example where Claude in Cursor does something terrible while working fine in the claude.ai interface or via an API (like OpenRouter); that would indicate that something is wrong with Cursor specifically.
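For anyone wanting to try that comparison, here is a minimal sketch of sending the exact same prompt straight to Claude through OpenRouter’s OpenAI-compatible chat completions endpoint, so the raw model output can be diffed against what Cursor produces. The model slug is an assumption; check OpenRouter’s model list for the exact Claude identifier you use in Cursor.

```python
# Hypothetical reproduction harness: send the same prompt that failed in
# Cursor directly to Claude via OpenRouter, then compare the outputs.
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_payload(prompt: str, model: str = "anthropic/claude-3.5-sonnet") -> dict:
    """Build the request body (kept separate so it can be inspected/logged).

    The model slug is an assumption; substitute the one you actually use.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0,  # low temperature makes side-by-side comparison fairer
    }


def ask_claude(prompt: str) -> str:
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if not api_key:
        raise RuntimeError("Set OPENROUTER_API_KEY to run the comparison.")
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(build_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-compatible response shape
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__" and os.environ.get("OPENROUTER_API_KEY"):
    # Paste the exact prompt (and file contents) that failed in Cursor here.
    print(ask_claude("Refactor this function without removing any code: ..."))
```

If the same prompt produces `// rest of your code`-style truncation here too, the problem is likely on the model side; if the direct call behaves fine, that points at Cursor’s prompting or apply step.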


The quality of the code certainly comes from the LLM, but does that explain Cursor suddenly failing to apply the code, applying it to the wrong file, etc.?