Is anyone else seeing a major drop in code quality lately?

Hey folks,

Over the past couple of weeks I’ve noticed that even with carefully engineered prompts, most LLMs are spitting out broken or off-target code. I’m using Cursor with Golang, Python, and TypeScript projects, and I’m hitting at least one of the following on a daily basis:

  • Blocks of code deleted that the request never touched
  • Logic errors that unit tests catch immediately (simplified example below)
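
To make the second bullet concrete, here’s a simplified, made-up reconstruction of the kind of miss I keep seeing (the real cases are messier, but the shape is the same):

```go
// lastn.go — hypothetical example; all names invented for illustration
package lastn

// LastN is supposed to return the final n elements of s.
// This is the shape the model generates: a one-line logic error.
func LastN(s []int, n int) []int {
	if n > len(s) {
		n = len(s)
	}
	return s[:n] // BUG: should be s[len(s)-n:]
}
```

```go
// lastn_test.go — flags the bug on the first run
package lastn

import (
	"reflect"
	"testing"
)

func TestLastN(t *testing.T) {
	got := LastN([]int{1, 2, 3, 4, 5}, 2)
	if want := []int{4, 5}; !reflect.DeepEqual(got, want) {
		t.Fatalf("LastN = %v, want %v", got, want) // fails: got [1 2]
	}
}
```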

It feels like the underlying models have regressed or are being switched behind the scenes. I’ve tried:

  • Switching between OpenAI, Google, and Anthropic (Claude) models
  • Adding explicit system messages about the project structure (sample below)

…but the hit-rate is still low.
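
For reference, the project-structure system message I’m using looks roughly like this (directory names and rules are placeholders, not my actual project):

```
Project layout:
  cmd/       service entrypoints (Go)
  internal/  shared Go packages; do not create new top-level dirs
  web/       TypeScript frontend
  scripts/   Python tooling

Rules:
- Modify only the files named in the request.
- Never delete or rewrite code outside the requested scope.
- Keep existing tests passing; never remove tests.
```

Even with something like this pinned to every conversation, unrelated blocks still get deleted.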

Questions for the community

  1. Are you seeing the same degradation? If so, when did it start for you?
  2. Any reliable work-arounds (agent mode, chunked prompts, older checkpoints, etc.) that actually move the needle?
  3. Have you found a particular config that still produces solid code?

These days it feels like it’s only good for scaffolding web templates :frowning:

Would love to pool observations and maybe escalate a concrete bug report to the Cursor team.

Thanks!
Reza