Something happened today or yesterday: Claude became utterly dumb. I’m wondering if anyone else is experiencing the same? It feels like a different model. Could it be a bug in Cursor where it’s actually using a different model? It’s insufferable how utterly useless and stupid it has become.
A lot of people report this for Claude web + Cursor.
Either our project got too big or something is up on their end.
I had to resort to aistudio.google.com to implement my credit usage system for my SaaS (a bit complicated, to be honest, because I estimate usage multiple times and also count actual usage, across two very big files), but Claude was totally useless.
Gemini wasn’t perfect either, but at least we got it working for the most part.
This would have been a perfect task for Composer, but the output was totally useless, sadly. Even with the full, detailed explanation from Gemini, which got what I wanted except for one minor thing.
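For context, the credit-usage logic I was asking for boils down to roughly this kind of pattern (a minimal sketch with made-up names, not my actual code: reserve credits against an up-front estimate, then reconcile against metered actual usage):

```python
# Hypothetical sketch of estimate-then-settle credit tracking.
# Credits are held ("reserved") against an estimate before the work runs,
# then the reservation is released and the actual metered cost is charged.

from dataclasses import dataclass

@dataclass
class CreditLedger:
    balance: float            # credits the user actually has
    reserved: float = 0.0     # credits held against pending estimates

    def reserve(self, estimated: float) -> bool:
        """Hold an estimated amount; refuse if it would overdraw the balance."""
        if self.balance - self.reserved < estimated:
            return False
        self.reserved += estimated
        return True

    def settle(self, estimated: float, actual: float) -> None:
        """Release the held estimate and charge the actual metered usage."""
        self.reserved -= estimated
        self.balance -= actual

ledger = CreditLedger(balance=100.0)
assert ledger.reserve(30.0)                  # estimate 30 credits up front
ledger.settle(estimated=30.0, actual=24.5)   # actual usage came in lower
print(ledger.balance, ledger.reserved)       # -> 75.5 0.0
```

The point of tracking `reserved` separately is that several in-flight estimates can exist at once without letting the user overdraw before any of them settles.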
What version of Cursor are you on? Will take a look on my end!
Currently 0.40.3, but I’ve noticed the deterioration since 0.40.1, I think.
I remember that over the weekend (still on 0.39.6) it worked properly. I was super happy with it; I basically no longer needed to write any code myself, just passed the requirements to Claude and it did everything almost perfectly.
On Monday I updated to 0.40.1, made some minor changes, and it felt like it did alright.
But I think yesterday or today something happened, and it’s now unable to perform at the same level as before.
Can you report some of the chats as bad after generating a follow-up? (You would have to disable privacy mode too so I can take a look on my end.) You can report by hitting cmd-shift-P and typing “Report Latest Chat as Bad”!
Unfortunately, it’s work code, so I can’t share it…
Perhaps someone else could report it. Maybe I’m just expecting too much of it now, if nothing has changed in the prompting/model/quality of Anthropic responses…
But somehow I feel more and more frustrated with it, as if it’s not performing at the same level as over the weekend and possibly Monday.
Hard to say definitively, but I’ve experienced the same. I also feel Cursor Tab has regressed rather than improved.
Perhaps unrelated, but I’ve also been having trouble getting Composer to actually update the code. Often it’ll just spit out the code as a response, but I’ll get no diff to apply.
An unfortunate waste of time and money.
That said, Cursor has been great (outside the weird undo bugs).
Same here, and also on gpt-4o, which leads me to believe it’s something to do with the prompt and planning structure of Composer? For instance, on numerous occasions I ask it to update front-end logic for an API route, but then it also goes and deletes some parts of the component at random… It was working great as a coding partner up until I updated yesterday morning…