I honestly find the new payment system completely unreasonable.
Looking at my current usage summary (see attached screenshots), I would have already run out of credits under the new system. What used to be included before is now cut down to barely 10% of the total I had available with the old plan.
To put it bluntly:
Under the old model, the value was clear, and I could actually use Cursor as my daily development tool without worrying.
Under the new model, the quota feels ridiculously restrictive, and unless you turn on “on-demand usage” (which means unpredictable costs), you’re basically forced to use Cursor far less.
Luckily, I asked support to switch me back to the old pricing system, so I’ll be able to continue using Cursor in a sustainable way until my renewal date. But the only reason I’m writing this post is to show everyone how big of an issue this is — and how badly it impacts anyone stuck on the new plan.
For me, this is a deal-breaker. If this system stays, it will be one of the main reasons I’ll move to another platform.
Transparency and fair value are critical. Right now, this feels like neither.
@tommymac the Sonnet model is very (I mean very) expensive, much more than GPT-5, and it eats up context quickly; a matter of two prompts, since it’s very verbose nowadays. I recommend gpt-5-low, gpt-5-fast-low, or the medium ones, whichever works best for you. They’re pretty cheap and good (better than Claude, in my updated opinion). gpt-5-high is way too slow and not worth the wait, but the -low version is the best model I’ve been using, better than Claude (in my usage).
To the best of my knowledge, at least from what I understood from the Cursor team, with the latest changes everyone who got switched back to the legacy pricing will be forced (automatically moved) onto the new pricing. Definitely let me know if you managed to get back to the old pricing model, because they flat-out refused to change mine even though I had contacted them in time.
In this new era of AI, we need to be smart: use the right model for the right task, control the costs, and stop being locked into any single model provider. So it makes no sense to use Sonnet for every single edit, given the pricing, and this is not Cursor-specific; it’s just how things are. How do you think your $20 bucket will pay your $500 usage bill to Anthropic?
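To make that arithmetic concrete, here’s a rough back-of-the-envelope sketch. The per-token prices and token counts below are illustrative assumptions, not Cursor’s or Anthropic’s actual rates:

```python
# Rough per-request cost. The prices below are illustrative placeholders,
# not quoted rates; plug in your provider's real pricing.
PRICE_PER_MTOK = {  # USD per million tokens: (input, output)
    "sonnet-like": (3.00, 15.00),
    "gpt5-low-like": (0.50, 2.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request in USD."""
    in_price, out_price = PRICE_PER_MTOK[model]
    return input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price

# A verbose model dragging 100k tokens of context and writing 5k tokens,
# versus a lighter model with a leaner context:
heavy = request_cost("sonnet-like", 100_000, 5_000)
light = request_cost("gpt5-low-like", 20_000, 2_000)
print(f"heavy: ${heavy:.2f}/request, ~{20 / heavy:.0f} requests per $20 bucket")
print(f"light: ${light:.2f}/request, ~{20 / light:.0f} requests per $20 bucket")
```

Even with generous rounding, a few dozen heavy requests empty the bucket, which is exactly why picking the model per task matters.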
Claude Code gives you roughly $20 of usage every 5 hours on the $20 plan, $200 of usage on the $100 plan, or $400 on the $200 plan.
That said, it’s not as convenient or stable to use, even with Roo Code, Claudia, and other UI frameworks.
I used Claude Code on the $200 plan for one month and have mixed feelings about it.
I think Claude Sonnet 4.5 might improve things, but for now I’m sticking with Cursor (it doesn’t freeze).
I actually completely agree with you, and that’s exactly what I do.
If you look at the screenshots I shared, you’ll see that I already switch models depending on the task and complexity. I’m not a “vibe coder” who just offloads everything; I still write code myself. But there are times when I’m dealing with more complex issues on large codebases, and in those cases, I really do need to use heavier models.
For simpler edits or everyday tasks, I stick with lighter/free options like Supernova or GPT-5-mini, which keep usage efficient and under control.
So yes, I fully agree that being smart about model choice is the right approach. The issue I’m raising is that even with this kind of responsible usage, the new plan cuts included usage so drastically that it feels unsustainable.
Cursor burned 6% of my limit on one request that used 17% of the context. Cursor works really well (much better than Kiro or WindSurf), but with this pricing it just doesn’t work anymore…
Exactly. Add to that the problems that pile up if a model switch leads to losing context. The damage done after switching can end up costing more than sticking with an expensive model.
These changes were made because they were the only economically viable way to continue operating in the long term. The previous model was something of a “promotional” model, and most of us knew that. But I understand you; the sense of loss is truly enormous.
But honestly, I still don’t think Sonnet 4.5 is “necessary.” We have many options now. I still use it to run shell commands, which is where I think it excels (for example, performing a series of operations with tinker (Laravel) to calculate something, or running tests). For that, I do an initial analysis prompt with GPT-5 to generate context, then use Sonnet to do the calculations.
Try to use the duplicate chat functionality to preserve important contexts.
I have been using TRAE IDE for my personal projects, and it has amazing performance and better price plans. I just hope the Cursor forum doesn’t ban me for this comment.
I’ve been using Kilo Code and, to be honest, it’s been a great experience so far. In my workflow, my base model is Grok Code Fast 1. If the model gets stuck, I move to GLM 4.6. If that doesn’t work either, I switch to Sonnet 4.5 to fix everything and immediately move back to Grok Code, all without needing to open a new chat (see the sketch below). This approach has helped me a lot in keeping costs to a minimum. Keep in mind that good prompting and understanding what you’re doing are essential!
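For what it’s worth, the escalation logic I apply by hand looks roughly like this. `run_task`, `Result`, and the model IDs are hypothetical stand-ins for illustration, not a real Kilo Code API:

```python
from dataclasses import dataclass

@dataclass
class Result:
    succeeded: bool
    output: str

def run_task(model: str, task: str) -> Result:
    """Hypothetical stand-in for handing the task to a model (not a real API)."""
    # Pretend only the strongest model cracks this particular task.
    return Result(succeeded=(model == "claude-sonnet-4.5"), output=f"{model}: done")

# Cheapest model first; escalate only when stuck, then drop back down.
ESCALATION = ["grok-code-fast-1", "glm-4.6", "claude-sonnet-4.5"]

def solve(task: str) -> str:
    for model in ESCALATION:
        result = run_task(model, task)
        if result.succeeded:
            return result.output
    raise RuntimeError("every model got stuck; rethink the prompt")

print(solve("fix the failing tests"))
```

The point is simply that the expensive model only ever sees the tasks the cheap ones couldn’t handle.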
How does Kilo’s pricing compare to Cursor’s? Aren’t you just buying tokens directly via OpenRouter? It’s hard to believe anyone would save much money by using Kilo instead of Cursor.
Can you provide more information? Otherwise it’s hard to evaluate your statement. 6% of your $20 plan? Which model? 6% sounds like a lot, but not if you’re using Sonnet 4 Thinking or something similar; on a $20 plan, that’s $1.20.
That’s one of the reasons I canceled my subscription recently. I went back to using VSCode with GitHub Copilot (free for me through my employer) and OpenAI’s Codex (since I have a ChatGPT Plus subscription). The only thing I really miss is Cursor’s far superior autocomplete / tab to complete model.
We quickly noticed our developers hitting usage limits much faster than before the pricing changes on the Team plan, often within the first few days. While we can certainly improve efficiency through context-engineering strategies and developer education, the new model has made me seriously consider alternatives.
For larger modifications, the pricing and usage gap compared to something like Codex CLI feels significant. One option that might make sense is a parallel workflow: using Claude for more guided, manual work, and leveraging Codex CLI for large-scale or long-running research and implementation tasks.
I am wondering how much of the difference is from large providers like OpenAI subsidizing their own products/plans.