Claude's performance has degraded

I’ve noticed that Claude within Cursor has been producing significantly worse output recently. I’ve now built the same mobile app project twice and haven’t changed anything in my prompting or workflow.

The first iteration was straightforward: everything came out clean and was developed without major issues. However, since the changes last week, Claude’s output quality has degraded substantially, with frequent hallucinations.

I rebuilt the same app using identical prompts, the same workflow, and the same approach, but the results were terrible: full of bugs, with poor design quality. I understand that outputs will never be 100% identical and there will always be minor variations, but typically the overall quality remains consistent. Now, however, it’s completely inconsistent.

For example:

  • Simple tasks like refining context files in a clear, understandable way have become overly complicated

  • It repeatedly downloads the wrong `create-expo-app` folder structure

  • The design quality is poor and doesn’t follow the context files or prompts properly

While Claude has always had a tendency to make assumptions and do more than requested, this behaviour has become much more problematic recently.

Has anyone else experienced similar issues?


Yeah, I’ve noticed it too; it’s been performing very badly lately.

Try a different model.

Also, does Claude often rely on broken search results in your requests?

It’s a fact that circular training (training models on model-generated data) makes them worse. There has been no significant improvement in any chatbot, including ChatGPT, Claude, etc. Every new release is worse.