Hi Cursor Team,
I wanted to share feedback about the recent Ask mode update that has significantly impacted my workflow.
What Changed
Before the update:
- Ask mode provided multiple detailed answers with various approaches
- It showed different implementation options and suggestions
- The responses were comprehensive and educational
- I could review the suggestions, then switch to Agent mode to implement the approach I preferred
- This workflow was incredibly useful: Ask mode for exploration → Agent mode for implementation
After the update:
- Ask mode gives minimal information and fewer examples
- It no longer suggests different approaches or alternatives
- Responses are brief and less helpful for learning
- Agent mode now seems to make more mistakes than before
The Real Impact
The old workflow was perfect: I’d use Ask mode to understand the problem and see different solutions, then use Agent mode to execute the best approach. This two-step process helped me make better decisions and learn more about my codebase.
Now I’m finding both modes less reliable, which significantly slows down my development.
My Concern
If this change was made to reduce token usage and save credits, I want to respectfully say: this is the wrong optimization.
I (and I suspect many others) would gladly pay more for higher quality agents rather than save a few credits with degraded performance.
The value of Cursor isn’t in using fewer tokens—it’s in having AI agents that actually help us build better software faster. When the quality drops, the entire value proposition suffers.
Suggestion
Could we have:
- An option to enable “detailed mode” in Ask mode (even if it costs more tokens)?
- A setting to prioritize quality over token efficiency?
- Or simply revert Ask mode to its previous behavior?
I believe many users would prefer to pay for quality rather than sacrifice it for cost savings.
Would love to hear the team’s thoughts on this and whether there are plans to address these concerns.