Cursor 2.0 is not a good update!

A note to those downgrading because of Auto or specific model behavior: the models are the same in 1.7 and 2.0, so any model-level issue can occur in both versions of Cursor. The difference lies in improved tooling, and you can expect that to keep improving.

We do listen to feedback, and we ask for details and reasoning so we can see the actual scope. Please create a bug report for any issues you face so we can find, reproduce, and fix them.

If you see other threads with similar issues, please add a detailed response there as well.

1.7 and 2.0 aren’t actually the same models. The 2.0 reasoning is currently poor and barely writes a solution. I can use 1.7 efficiently.


Yeah, but the harness changed (prompt, context handling…), didn’t it?


Yes, correct: the models are the same, but the software is updated so it uses the latest tools. If some things are not behaving as expected, we’d rather fix them.


Here we go:


I’ve been struggling with 2.0 since it came out. Pff, the next step is going back to 1.7. I just can’t handle the bad tooling. GPT is worse than ever in 2.0, and even Composer is getting slower.


The answers from the AI model (I don’t know which one, I use “auto” mode) now look clean, short, and conservative, changed from one style to another. Is this because the developers want to save some tokens for the user? Either way, it makes the answers useless and dumb right now.


What has happened with this update? What a flop! The usability is rock bottom. The amount of code it went through and started destroying was laughable. It fails to follow instructions: when you clarify an instruction, it agrees, then goes right back to an old method it tried previously, with no learning loop on itself.

The amount of code drift is crazy; it’s like it no longer understands the context at all. And don’t get me started on the new terminal feature, where it won’t run anything in the terminal that requires manual input, such as running a certain option of a script, and just keeps saying it can’t run in autonomous mode in the terminal. What a joke!! Fix it, guys, or I’ll just move fully over to Windsurf. #RegressionInInstructionFidelity

I wanted to share feedback about the recent Ask mode update that has significantly impacted my workflow.

What Changed

Before the update:

  • Ask mode provided multiple detailed answers with various approaches

  • It showed different implementation options and suggestions

  • The responses were comprehensive and educational

  • I could review the suggestions, then switch to Agent mode to implement the approach I preferred

  • This workflow was incredibly useful: Ask mode for exploration → Agent mode for implementation

After the update:

  • Ask mode gives minimal information and fewer examples

  • It no longer suggests different approaches or alternatives

  • Responses are brief and less helpful for learning

  • Agent mode now seems to make more mistakes than before

The Real Impact

The old workflow was perfect: I’d use Ask mode to understand the problem and see different solutions, then use Agent mode to execute the best approach. This two-step process helped me make better decisions and learn more about my codebase.

Now, I’m finding both modes less reliable, which slows down my development significantly.

My Concern

If this change was made to reduce token usage and save credits, I want to respectfully say: this is the wrong optimization.

I (and I suspect many others) would gladly pay more for higher quality agents rather than save a few credits with degraded performance.

The value of Cursor isn’t in using fewer tokens—it’s in having AI agents that actually help us build better software faster. When the quality drops, the entire value proposition suffers.

Suggestion

Could we have:

  • An option to enable “detailed mode” in Ask mode (even if it costs more tokens)?

  • A setting to prioritize quality over token efficiency?

  • Or simply revert Ask mode to its previous behavior?

I believe many users would prefer to pay for quality rather than sacrifice it for cost savings.

Would love to hear the team’s thoughts on this and whether there are plans to address these concerns.