Cursor Tab has gotten a lot worse?

Let me preface this by saying that multiline / inner-line tab completion is the main reason I use Cursor, as it still leads in the actual implementation of this feature, and I’ve been happily using it for close to a year now.

However, in the past month or two the actual quality of the completions has greatly degraded!
I’m working in the exact same codebases / languages / types of features that I have been all year, and yet now it struggles with even simple completions which it used to one-shot.

One particular failure pattern I have noticed is it directly duplicating lines (sometimes without any changes) or producing a very similar line which makes absolutely no sense (i.e. redefining a const var again immediately after). It seems to completely ignore the surrounding context in favor of mimicking one of the last things you typed…
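To illustrate the kind of duplication I mean, here is a hypothetical sketch (the variable names are made up, and this is not real Cursor output):

```python
# Hypothetical illustration of the duplication failure pattern;
# names are invented for the example, this is not real Cursor output.

MAX_RETRIES = 3  # I type this line...

# ...and the suggested completion is a pointless near-duplicate,
# mimicking the previous line instead of reading the context:
MAX_RETRIES = 3

# What a context-aware completion would plausibly suggest instead is
# something that actually *uses* the constant, e.g.:
attempts = list(range(MAX_RETRIES))
```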

It’s kind of crazy, because before I even loved the way it would write error messages which actually made sense for the line I’m inserting on. Now it just duplicates the exact same messages from halfway up the function??

Or even the “copy paste” behavior that I liked: it used to be really smart about adding more entries to a list or object or CSV or any kind of repetitive task, but once again, now it fails to do that in a smart way, it just dumbly copies…

This is very sad for me; it actually got to the point today where it was such a hindrance that I had to just turn it off. And then I realized that it was the only thing keeping me in Cursor and not trying some other similar IDE…

I’m not sure if this is due to changed prompting / the way context is presented to the model / a smaller/faster (cheaper) model / some kind of collapse as the model is progressively trained… But it’s really unfortunate, given how strong it was only 4-5 months ago. Before, it really was in tune with what I was thinking while coding; it often correctly predicted EXACTLY what I was about to type. But I have not experienced that in a while (again, working on basically the exact same codebase / features / styles / etc.).

I would happily take a slower model which only provided completions infrequently, as long as they were quality completions. Or a way to go back to the older model. Maybe a hotkey that triggers the smarter completion, rather than it firing on every character I type.

Also, I don’t think it’s just me getting used to it and then applying selection bias to the failure cases. Like I said, it used to predict exactly what I was about to type MULTIPLE times a day, but that has not happened in weeks now; that’s a cold hard fact. SOMETHING HAS CHANGED.

I know Tab doesn’t get as much love as the “vibe coding” Composer, but I really do like to write most of my own code, and before, Tab was a speed multiplier for that. I don’t feel that way any more :frowning:

Please don’t take this as me just being negative; I really am sad that the performance has degraded and wish it had not happened to such a great product.


It’s not just you. I’ve also noticed a massive downgrade in response quality coming from the same models, working on the same project. What used to be a reliable coding partner has now become almost a hindrance. The quality of code coming from Cursor has decreased steadily. I hadn’t upgraded in a few weeks, so I did, thinking maybe my old version had been relegated to a cheaper model no matter which one I picked, but the quality has remained just as poor even after the version upgrade. I’ve tried the same chat prompts in the Claude app using Sonnet 3.7, and the responses in the app are much better code than what Cursor is giving. That makes me think it’s something under Cursor’s hood that has been affected.

I will say that I’ve noticed the difference both in Chat and Composer. The reason I’m using Chat now more than Composer is because I absolutely can’t trust Composer’s logic as it now seems to hallucinate more, replace perfectly good code with bad, and create classes/services that are already there and were in context. Chat allows me to at least keep the context controllable and review the code changes in small bites.

I finally came online looking for answers and found your post. You’re not alone, or crazy. :stuck_out_tongue:

Hey guys, I have no proof, but I will definitely say this was done on purpose somehow. Here’s what I noticed:

I started using VS Code Copilot and was overall quite disappointed with the Pro version; the model takes a long time to complete basic stuff. I started using Cursor about 3 weeks ago and stayed on the free plan, and for the first two weeks I was amazed by how good it was in contrast with VS Code Copilot. Then Cursor reached a point where it just stopped working; I assumed I had already consumed the free plan, so I decided to join the Pro tier. Surprise: the responses were similar to, or worse than, what Copilot was already giving.

So I think it’s somehow the model used in the trial vs. Pro, for marketing purposes I guess; nevertheless, I’m very disappointed and sad right now.

Just chiming in to agree that it feels like tab has gotten a lot, lot worse. I’ve been paying for over a year now and the change is astronomical. There are hours where it probably gets almost no predictions correct and it slows me down because I use tab for things like indentation.

I have exactly the same impression. It is almost completely useless currently. It does not seem to know standard Python data science APIs anymore, and often just repeats the lines above.