Quality of Auto mode has dropped significantly!

The quality of ‘Auto’ mode has significantly decreased.

  • The quality of the generated code is poor, with syntax errors. When I point this out in agent mode, it agrees the code is incorrect and then advises me on how to fix it myself.
  • It sometimes stops implementing the plan after only a few steps.
  • It sometimes deletes my comments from the code even though they are unrelated to the task, or refactors methods unrelated to the task, turning simple one-line code into more complex code spread across multiple lines.
  • It generates overcomplicated code that I then have to simplify.

None of these problems existed in the past. Working with Cursor is getting more and more frustrating.

Hey, thanks for the feedback. A few users have reported a similar experience with Auto mode, and the team is aware.

Auto mode balances quality and efficiency, and it won’t always pick the most powerful model. For more consistent results, try:

  1. Premium mode in the model selector next to Auto. It prioritizes quality over cost.
  2. Manually selecting a model. For harder tasks, pick a specific model like Claude Sonnet, Claude Opus, or GPT-5.

If you want us to look into a specific case, send the Request ID (three dots in the top right of the chat > Copy Request ID). That lets us check which model was picked and what went wrong.

Let me know how it goes.

I understand that Cursor is trying to maximize its profit. However, when I purchased my annual subscription, I paid for a product with a certain set of features and a specific level of quality. I entered into that agreement for one year, and a fair and professional company should honor it. This means not reducing the quality of features for paying customers during the active subscription period.

Imagine purchasing an annual subscription for any service, only to have the company reduce the quality of that service (e.g., Auto mode) during the year, and then inform you that maintaining the original quality now requires using a differently priced Premium mode — to your disadvantage.

I have the strong impression that this is exactly what is happening with Cursor. In my view, this is a very short-sighted strategy.

Hey, I get the concerns about the subscription. Auto mode wasn’t intentionally made worse. It picks a model dynamically based on the task, and sometimes that choice won’t be optimal. Premium mode isn’t replacing something that used to be free; it’s an extra option for tasks where you need maximum power.

That said, if you’re seeing a specific drop in results, we’d really like to look into it. For that we need the Request ID: three dots in the top right of the chat > Copy Request ID. With that, we can see which model was selected and figure out what went wrong.

Without the Request ID, we can only guess. Send a couple of IDs from the sessions that had issues and we’ll take a look.