With the recent changes in Cursor’s pricing, I’m finding it hard to understand how to use it effectively. Even with Ultra, it’s not feasible to use expensive models like Claude Opus 4.1 every day. With Claude Sonnet 4 or GPT-5, you end up constantly having to save resources, monitor limits, switch between cheaper models, create new chats, and restrict context. These are compromises you don’t want to make when you’re paying for a premium subscription.
I wish higher-tier plans came with a competitive model that could be used “without worry” within reasonable limits. Something like GPT-5 or Claude Sonnet, with sensible daily limits or a daily budget—say $50 for all models—would make a big difference.
I understand that Cursor faces challenges and is optimizing pricing to remain viable. But many competitors are offering incredibly generous limits, and I’m not yet seeing how Cursor can compete. Honestly, Claude Code has only one drawback when used properly—slow responses, even for simple tasks.
Programming is evolving, and soon most code will be written with AI assistance under programmers’ guidance, and eventually through automated pipelines. How soon that will happen is uncertain, but I fear Cursor could repeat the Windsurf story—and I really hope it doesn’t.
Where Will Cursor Be in 2 Years? What do you think about this?
Why would Cursor give $1500 of value ($50x30 days) and only charge $200/mo for it, on top of their IDE and other features? What kind of request is this? We all want a good deal lol, but that is just illogical.
So Cursor and all the other companies will raise their prices because demand is skyrocketing and individuals are competing with large businesses for the same AI resources.
Future: These other services will drop their “unlimited” plans just like Cursor did, it’s just intro pricing and not feasible long term.
I don’t think we are seeing the real cost of this AI infrastructure. It’s all just speculative investment money being poured in, so where will the price actually end up? How much does it genuinely cost to run these requests? Can smaller companies serve these models, so the market gets saturated with AI coding-assistant providers and prices come down? Or is this something only billion-dollar companies will be able to provide, in which case they will set the price.
There may be a future where it is just super cheap, or where for $$$$$ you can have some on-premises setup, and businesses may start doing that. All I know is $16/mo seems way too cheap for the value, and I expect the price to go up. Then again, grok-code seems really cheap and capable compared to gpt-5, so maybe model prices will go down. Or grok is just at some intro price to attract early adopters. We just don’t know what the real price for all this is. If anyone knows what these requests actually cost to serve, that would be helpful.
You’re absolutely right — Claude Code will almost certainly raise its prices at some point. The real question is: by how much? With a $200 subscription, I can currently burn through the equivalent of about $400 in just a 5‑hour coding session. And in theory, I could repeat that four times in a single day. So how high would Claude Code have to raise prices before the limits really start to look like what Cursor has in place?
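To put that burn-rate claim in perspective, here is the arithmetic spelled out. All figures ($200 plan, ~$400 of API-equivalent usage per 5-hour session, four sessions a day) are the post’s own estimates, not measured data:

```python
# Burn-rate sketch using the post's own numbers (estimates, not measurements).
subscription = 200       # $/month for the Claude Code plan
session_value = 400      # $ of API-equivalent usage in one 5-hour session
sessions_per_day = 4     # theoretical maximum repeats in a day
days = 30

max_monthly_value = session_value * sessions_per_day * days
multiple = max_monthly_value / subscription

print(max_monthly_value)  # 48000
print(multiple)           # 240.0
```

Even if almost nobody sustains that pace, a theoretical 240x gap between usage value and subscription price shows how much room Anthropic has to tighten limits before they resemble Cursor’s.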
We also have to consider how insanely fast AI is evolving. Just a couple of years ago, even the “smartest” models couldn’t reliably produce working code. Now we’ve got multi‑day pipelines running on AI, and professional developers themselves are actually relying on it. Cursor, for example, has made gpt‑5‑mini usage free, and it already produces code that’s leagues beyond what cutting‑edge systems could do just a few years back. Is it cheap enough? And what will things look like two years from now?
I’ve heard that Windsurf is betting heavily on agent‑like features that operate inside browsers and other apps, while the IDE itself plays catch‑up with Cursor and similar tools. The big question is: what path lies ahead for Cursor?
$1500 in usage on a $200 subscription doesn’t actually look like much at all.
That’s because the daily cap is $50, and it doesn’t roll over. So yes, there will always be some users who hit the maximum every single day, but plenty of others won’t. That’s exactly how subscription economies work: the “average” is what really matters.
In practice, I’d estimate most people won’t spend more than around $800. For comparison, the old Ultra plan used to include 10,000 “fast” requests plus unlimited “slow” ones. Back when I had a $20 Pro subscription with 500 fast requests, I still ended up consuming roughly $800 worth.
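The “average is what matters” point can be sketched with toy numbers. Only the $50 daily cap and the rough $800-per-user figure come from this thread; the usage mix below is invented purely for illustration:

```python
# Toy subscription-economics model: spend is capped at $50/day with no
# rollover, so revenue math hinges on the *average* user, not the heaviest.
# The user mix below is hypothetical.
cap = 50    # $ daily cap
days = 30

# (fraction of users, average daily spend in $) -- made-up distribution
usage_mix = [
    (0.2, 50.0),  # power users who hit the cap every day
    (0.5, 27.0),  # regular users
    (0.3, 5.0),   # light users
]

avg_daily = sum(share * min(spend, cap) for share, spend in usage_mix)
avg_monthly = avg_daily * days

print(avg_monthly)  # 750.0
```

Under a mix like this, the blended cost lands near the ~$800 estimate rather than the $1500 headline maximum, which is why the cap alone doesn’t tell you what the plan costs Cursor.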
By that measure, I’d say Cursor’s current pricing doesn’t feel competitive, and raising it further seems unreasonable. What’s needed is some kind of balance — a golden middle ground — where at least models like GPT‑5 and Claude Sonnet 4 can be used without the constant fear of being “disconnected from the internet” for two weeks afterward.
The point is that API costs are fixed for Cursor. Anthropic may be operating at a loss on some of their subscriptions, but the request costs Cursor passes on to us are based primarily on what the 3rd-party providers charge. It is just illogical that Cursor could provide more in requests than the plan costs. If the requests cost Cursor $800, how can they charge us only $200 and still fund development of the Cursor IDE? Am I missing something? I think because Cursor is not a provider like Anthropic or OpenAI, its pricing depends on whatever those providers charge, so it will always look like a worse deal than subscribing directly with Anthropic or OpenAI.
The only thing Cursor has going for it is that it built this IDE tool first and is hopefully far enough ahead of the competition. Otherwise, if OpenAI and Anthropic build a Cursor-like IDE tool, they can simply price Cursor out of the market, because Cursor relies on them.
There will be cheaper models that I think Cursor will be able to use effectively, so people will find value in the Cursor IDE without the hefty price of needing models like Opus all the time. Grok code, for example, is really cheap. So either Anthropic and OpenAI have to lower their API costs to compete with Grok, or they need to specialize and make their models somehow superior in certain niches. Regardless, there are lots of battles going on: competing on API costs, competing on tool development, competing on model abilities.
Having prices differ this much between models that can at times produce similar or even better code means the market has not settled at all on what requests are actually worth. Grok-code is a very capable model at roughly 1/6th the price of GPT-5. As a result I use gpt-5 much, much less, and yet its price stays the same. Something will have to give, or Grok-code will raise its price because the current one may not be realistic.
As for Cursor, they are stuck trying to make their tool very good and optimizing requests so users get the most out of them. Maybe a request through Cursor is so well optimized with context, etc., that in a way it is cheaper than the same request on other platforms. I know Cursor is trying to do that while also not reducing the quality of the responses. It’s very dynamic, and I think the overall market will split between vibe-coding services and AI programming-assistant services, Cursor being the latter, which will evolve to not rely on the most expensive frontier models.
Basically if you want $$$$ worth of requests, then just go directly to the providers, but if you really need the Cursor tool, then you have to pay extra for using frontier models.
With a high enough price, maybe it becomes feasible to buy one of those $23k NVIDIA cards and run the LLM locally. Where is the limit, and are there any open models that are really good enough?
Hmm, I think you’re slightly off here.
Cursor works with AI API providers under custom agreements, often with long‑term contracts. The prices they pay can vary depending on the provider and the volume of tokens they commit to. In some cases, the discount can be massive — 50%, 80%, even up to 90%. Without such arrangements, it’s hard to see how Cursor could have survived this long.
Cursor doesn’t just “pay as it goes” — it purchases token quotas per model or per provider. And once those quotas are bought, they need to be used, otherwise they simply expire. That’s precisely why the Auto mode exists, and also why it used to be free.
It looks like some of those long‑term contracts have recently expired, and providers have significantly raised their prices for new agreements. On top of that, I suspect Cursor has reevaluated its strategy for allocating quota. For example, it may no longer make sense for them to lock in huge amounts for models that are already outdated, like Claude Sonnet 3.5, and instead they’ll focus on carefully chosen, up‑to‑date models. That would also explain why Auto mode will become paid starting September 15.
I agree with you. But Cursor could also have survived by taking a loss to some degree. But yeah, I think they were definitely getting big discounts and may still be. That doesn’t change the fact that the requests they provide can’t be, say, 5x more than what a user would get using the provider’s API directly. If that were the case, everyone would just use Cursor instead of subscribing to Anthropic or OpenAI directly, since it would be the cheapest way to get requests. $1 with Cursor = $8 of Anthropic requests? Anthropic and OpenAI would be losing out, and it doesn’t make sense, but it does explain some of the short-term pricing policies we’ve been seeing. So once again, there are a lot of temporary variables affecting the market: contracts/agreements like you said, promotional pricing, etc.
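The “$1 = $8” claim is really just a ratio check against the wholesale discounts mentioned earlier in the thread. The function below is a back-of-the-envelope sketch (it ignores Cursor’s own margin and IDE costs, and the discount values are speculative):

```python
# If Cursor pays a wholesale discount d on list-price API rates, the most
# list-price value it can pass through per subscription dollar (ignoring
# margin and IDE costs) is 1 / (1 - d).
def max_value_per_dollar(discount: float) -> float:
    return 1 / (1 - discount)

print(max_value_per_dollar(0.5))    # 2.0 -> a 50% discount caps it at $2
print(max_value_per_dollar(0.875))  # 8.0 -> $8 would need an 87.5% discount
```

So an 8x pass-through is only possible at the extreme end of the 50–90% discount range speculated above, which supports the view that such ratios are temporary artifacts of old contracts or promotional pricing.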
Well, Cursor isn’t really selling an API, right? What they’re offering is an IDE + chat. When users go over their limits, Cursor just resells the API at retail rates plus about 20% — which, as far as I know, aligns with the policies of the API providers themselves.
It seems that Cursor has lost both its monopoly and those very favorable contracts with providers. On top of that, there are now plenty of competing AI coding utilities, and even the API providers themselves have stepped directly into the race.