Urgent Feedback: Concerns About Cursor's New Pricing Model and "Auto" Models (June 2025 Pricing Update)

I’m writing to you following your recent “June 2025 Pricing” blog post, which outlined the shift to a compute-based pricing model. While I understand the intention behind that move, my real-world experience under the new structure has raised significant concerns about its practicality, predictability, and the performance of your “Auto” models.
As a user who deeply integrates AI-powered tools into my development workflow, I’ve found the current approach to be confusing, unpredictable, and, frankly, quite inequitable in its execution.
Consider the clarity and user-friendliness of services like Anthropic’s Claude.ai. Their Pro plan ($20/month) offers a straightforward, well-received pricing structure: a consistent usage allowance (roughly five times the free tier) with limits that reset predictably every few hours. That model is transparent and lets users manage their expectations and budget without constant anxiety about hidden costs or rapidly depleting allowances. It fosters the trust and predictability that are crucial for professional users who rely on these tools daily.
In stark contrast, Cursor’s new model creates an opaque and frustrating experience, despite the explanations in your blog post about the “$20 agent usage budget at API prices.” The estimated “225 Sonnet 4 requests” or “650 GPT-4.1 requests” quickly prove insufficient in practical, real-world coding scenarios, especially with long contexts, iterative problem-solving, or complex refactoring. This feels like a bait-and-switch: the headline price seems competitive, but the usable value is far less than advertised, leading to constant interruptions and forced decisions about unexpected overages. The rough arithmetic below illustrates why.
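To make that concrete, here is a back-of-envelope sketch of how many agent requests $20 actually buys at API prices. It assumes Claude Sonnet 4’s published list rates ($3 per million input tokens, $15 per million output tokens as of mid-2025); the per-request token counts are my own illustrative guesses, not figures from Cursor:

```python
# Back-of-envelope: how far does a $20 budget go at API prices?
# Assumes Claude Sonnet 4 list pricing ($3/M input, $15/M output tokens);
# token counts per request are illustrative assumptions, not Cursor's numbers.

BUDGET = 20.00
INPUT_PRICE = 3.00 / 1_000_000    # $ per input token
OUTPUT_PRICE = 15.00 / 1_000_000  # $ per output token

scenarios = {
    "small edit (10k in / 1k out)":         (10_000, 1_000),
    "typical agent turn (30k in / 2k out)": (30_000, 2_000),
    "large refactor (100k in / 4k out)":    (100_000, 4_000),
}

for name, (tokens_in, tokens_out) in scenarios.items():
    cost = tokens_in * INPUT_PRICE + tokens_out * OUTPUT_PRICE
    print(f"{name}: ${cost:.3f}/request -> ~{int(BUDGET // cost)} requests on $20")
```

On these assumptions, a figure like “225 requests” only holds for small-context work (~$0.045/request); a long-context refactoring session at ~$0.36/request burns through the entire budget in roughly 55 requests, which matches my experience of the allowance vanishing mid-task.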
Furthermore, the forced reliance on “Auto” models once the included budget is depleted is a severe downgrade in quality and functionality. To be blunt, these “Auto” models are, from a user perspective, terrible. They frequently fall into loops, lose crucial context, provide nonsensical or irrelevant suggestions, and often do more harm than good, leading to significant wasted time and effort. This directly undermines the productivity and efficiency that users seek from an AI coding assistant and diminishes the very value proposition of your subscription.
This current pricing model feels ethically questionable. Users have come to expect a certain level of predictable service quality. Introducing a system where usage rapidly depletes with opaque calculations, pushing us towards inferior “Auto” models or unexpected overage charges, erodes trust and makes it incredibly difficult to budget or rely on Cursor for consistent, high-quality assistance.
I strongly urge you to reconsider your current pricing strategy. Adopting a model similar to Anthropic’s, which prioritizes clear, predictable usage limits and consistent performance for a fixed monthly fee, would significantly benefit your user base. Transparency, reliability, and delivering consistent value are paramount for professional tools.
Thank you for your time and serious consideration of this critical feedback. I sincerely hope to see positive changes that reflect a commitment to user experience and fair, transparent pricing.


I’ve used up my $20 worth of Claude requests. After switching to the Auto model, is there any chance it will still route to Claude? I also have unused Gemini requests; will Auto usage draw down my remaining $20 quota on Gemini?

I’m worried about the Auto model’s performance, and in particular that it might select GPT-4.1 for complex tasks.

Totally and unequivocally agree. Removing the ability to opt back into the old pricing model has unfortunately crossed a critical trust boundary. I genuinely love what Cursor has built — I’ve been a power user since the early days — but this new model is a severe downgrade in service quality exactly where it matters most: for your most engaged users.

This isn’t just about pricing; it’s about reliability. The moment I can’t predict usage or trust that I’m getting consistent model performance, the whole development experience starts to fall apart. Please reconsider. The foundation Cursor is built on deserves better than this.

AUTO = CHATGPT

It never uses anything else. I would LOVE to see someone on the team or a mod prove otherwise. This forum is run by bots, I swear. Why are we, the consumers, being punished?

How about the Cursor team uses AUTO to fix a bug and sees how well that works :-1: