Over the past month, many users have been complaining about Cursor’s pricing model. I think Cursor has either been too perfectionist about this issue or is under pressure to meet every user need. Cursor is an excellent tool - it would be a shame if users left simply because of the current confusing pricing.
The problems started when Cursor switched from the 500 requests/month model to a usage-based model (which I believe everyone agrees is fair in principle). The feeling of not being able to control usage limits is frustrating, and being restricted within a specific time period is also genuinely unpleasant (since there are times when work demands are higher than on other days).
Personally, I found the 500 requests/month model perfectly fine (some requests use fewer tokens, others use more—they balance each other out). It’s easy to understand, user-friendly, and simple to manage. You could even offer a pay-as-you-go model (e.g., $0.02 per Normal request, $0.04 per Thinking request, $20 per Opus request, etc.), where users top up a certain amount and usage is deducted accordingly (recharge as needed—just like OpenAI or Anthropic does).
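The top-up model described above is easy to sketch in a few lines. This is purely illustrative: the `PrepaidAccount` class, the request types, and the per-request prices are the hypothetical figures quoted in the post, not Cursor’s actual rates or API.

```python
class PrepaidAccount:
    """Minimal sketch of the proposed pay-as-you-go billing: top up a
    balance, deduct a fixed price per request type. Prices are the
    illustrative figures from the post, not real Cursor rates."""

    PRICES = {"normal": 0.02, "thinking": 0.04, "opus": 20.00}

    def __init__(self, balance=0.0):
        self.balance = balance

    def top_up(self, amount):
        # Recharge as needed, like an OpenAI/Anthropic API credit.
        self.balance += amount

    def charge(self, request_type):
        cost = self.PRICES[request_type]
        if cost > self.balance:
            raise RuntimeError("insufficient balance; top up to continue")
        # round() keeps the float balance tidy; real billing would use Decimal
        self.balance = round(self.balance - cost, 2)
        return cost


acct = PrepaidAccount()
acct.top_up(20.00)       # buy $20 of credit
acct.charge("normal")    # deducts $0.02
acct.charge("thinking")  # deducts $0.04
print(f"${acct.balance:.2f}")  # prints $19.94
```

A real implementation would use `decimal.Decimal` rather than floats for money, but the point is the transparency: each request has a visible price, and the remaining credit is always a single number.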
It’s possible that the previous model caused significant losses for Cursor. However, I believe users are willing to pay a higher price (e.g., $0.08 per request instead of $0.04) to use the best model for coding—rather than paying less for a model that doesn’t get the job done (which is also why no one really likes the Auto mode).
The token-based model is neither good nor fair — we’re paying for a subscription to a service, not for tokens.
Cursor buys computing power in bulk at wholesale rates, then resells API access to us at retail with a 20% markup - that’s the ideal scenario.
But on top of that, requests get bloated with 500 KB–2 MB of context tokens that we never added ourselves.
The problem is, we bought a super-duper drill, but one day it turned into a pumpkin - and yet the maintenance fee remained, and even increased.
Meanwhile the drill seller keeps shouting that it’s the same drill, that it actually got better - while user reviews of the drill start disappearing, hidden by the seller.
And the whole point is: the seller is now selling vegetables at the price of a drill.
Couldn’t agree more. I switched back to the legacy pricing and it’s simple (although I have completely lost track of how many messages I get - I think it’s 500). Keep it simple, man - right now you need a double PhD to understand what the hell you are paying for. This only leads to feelings of deception. I would be very happy to pay $20 for x messages (make that competitive with Claude Code), then if we hit the limit, allow us to buy more in $20 increments till the month is up. I have no issue paying for that. Some months I am involved in lots of projects, others I have some downtime. But keep the thing simple and transparent. My gut tells me you are trying to fool us all, Cursor team. That’s probably not true, but it feels that way.
I will use whichever model I choose. I am a highly experienced engineer, not a “vibe” coder or a so-called prompt “engineer.”
I have had absolutely terrible results with o3. I’ve benchmarked solutions across all available models, and o3 performed dismally in comparison.
Also, I think many are failing to understand the core issue with everything that has transpired. Initially, we were given 500 requests for 20 dollars. Then they advertised “Unlimited” with rate limits, and later, “Unlimited” again under Ultra. Users began reporting a wide range of issues, including harsher rate limits and more aggressive usage caps. They then backpedaled, claiming they had not communicated the message properly. That explanation is both deceitful and disingenuous. We’re talking about highly intelligent people building this product, so the idea that they simply misunderstood how to communicate clearly in English is absurd.
Ultra is no longer truly unlimited. It merely offers a few more requests than Pro. If we take the reported base of 260 requests on Pro and multiply it by the claimed “20x,” that’s 5,200 requests. Yes, that’s a reasonable increase. But it is not what was originally advertised.
To make matters worse, they quietly changed the terms and conditions. When users pointed this out and raised concerns, they were met with condescending attitudes from moderators.
This is not how a business treats its clients. It is unethical. Just look at Reddit—there is a mass exodus happening. Mark my words, this could be the death knell for Cursor. I don’t wish failure on anyone, but the team behind Cursor deserves the backlash they are receiving for their shady business practices.
I don’t know why you concluded that Sonnet is the worst among the code generation models. Most experienced developers actually consider it the best. Perhaps it comes down to the difference between Vibe Coding projects and real-world projects. I’ve been programming for nearly 20 years, and you won’t be able to convince me that Sonnet is worse than the other models when it comes to writing code.
Yeah, they can hide and moderate all they want; it’s everywhere now - people are blogging about Cursor’s downfall. They should never have tried to sell people on a lie, but those complicit in the deception will have their day, or cash out their equity and run for the doors.
I don’t consider it the worst Code Agent.
But it’s the worst in terms of price-to-performance ratio under Cursor IDE’s current pricing model.
Give the same task to Claude-4-Thinking and then to Gemini 2.5 Pro.
Then compare how much money you spend in each case and what kind of result you get.
In the gaming industry, there’s a great example - with Cyberpunk 2077, CD Projekt Red decided to convert its accumulated reputation into money. So this is not a new story.
Gemini Pro 2.5 can perform at about 80% of Sonnet’s level. In terms of cost, I don’t see much of a problem between these two models (unlike the case with Opus). I understand your perspective, but when it comes to real-world projects, things are very different from personal “vibe coding” projects. A real project must strictly follow the design specs, meet rigorous requirements for logic and security, and fully comply with the company’s coding standards.
Here’s a simple example: take a screenshot of any UI component and ask an AI to recreate it exactly as in the screenshot - 100% identical. Right now, no tool can do that. But that’s a real-world requirement: you can’t deliver something that’s 95% similar and tell your boss the job is done.