Pricing... Just some observations

Seeing a lot of concern about pricing with the recent announcement. I can understand the concern, especially when you are losing something free. However, as a heavy agent user, I’ve been through all the plans, I know how much I’m using each month, and it’s not trivial by any means. So, as a guy who tries to approach things like this logically and objectively (no, I’m not a shill for Cursor, but I am a fan of it)…a few thoughts, with some real numbers:

NOTE: These thoughts do not just apply to Cursor; compare Cursor to the other options. Claude Code does not offer free usage: you get a certain number of requests/tokens per month based on your tier. VSCode+Copilot offers GPT-4.1 (truly terrible for coding) for free if you pay for any plan; however, all other models are usually billed at that model’s API token pricing, per million tokens. Copilot doesn’t even offer Sonnet right now, and Sonnet is one of the TOP models for coding. So you aren’t really getting stuff for free with Copilot and a proper coding model.

  1. Nothing is really free. If something is free, then the product, as the saying goes, is usually YOU! If you are not the product, then the product will never, ever remain free. Facts of life, it just is how it is. Unless you WANT to become the product (please, no! Just please no! Not with a tool like this!), then free simply cannot last forever (goes for ANY product where you are not the actual product yourself.)

  2. Cost vs. income. This is simple math. No one likes things getting more expensive, and I get that. I don’t either. However, ignoring everything else, the daily cost of Cursor’s current plans, assuming ~20 work days per month, is:

  • Pro: $20/mo, 20 work days, $1/day
  • Pro+: $60/mo, 20 work days, $3/day
  • Ultra: $200/mo, 20 work days, $10/day
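
If you want to sanity-check that arithmetic or swap in your own numbers, here’s a trivial sketch (the 20-workday month is just the assumption from the list above):

```python
# Daily cost of each Cursor plan, assuming ~20 working days per month (as above).
PLANS = {"Pro": 20, "Pro+": 60, "Ultra": 200}  # USD per month
WORK_DAYS_PER_MONTH = 20

for name, monthly in PLANS.items():
    print(f"{name}: ${monthly}/mo -> ${monthly / WORK_DAYS_PER_MONTH:.2f} per work day")
# Pro: $20/mo -> $1.00 per work day
# Pro+: $60/mo -> $3.00 per work day
# Ultra: $200/mo -> $10.00 per work day
```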

These are not big daily cost numbers. How much money do you spend per day on coffee? It’s a classic cost/benefit question. Let’s say you spend $3 every weekday morning on a cup of coffee. On the Pro and Pro+ plans, you are spending no more than your daily cup of coffee. Is the cost truly a concern? How much is the daily cup of coffee worth? You are spending $60/mo on that coffee, too!

This also doesn’t touch the bonus value that Cursor offers when you are on a plan. I don’t know how they calculate that, so I won’t speculate on it. But when you factor in even a small amount of bonus value, your daily Cursor cost will be less than that cheap daily cup of coffee. Or…replace coffee with your favorite phone game that you spend umpteen bazillion dollars on every month, or whatever your guilty pleasure is, etc.

Everyone has to decide whether the cost is worth the benefit. But the cost isn’t really that high. Hardly anything that we consume regularly, whether itemized or subscription, costs just $20/mo anymore. Subscriptions are rarely less than $50/mo these days!

  3. Are you using a plan designed for your use case? Each plan has its intended use case. FWIW, the $20/mo Pro plan is intended primarily for tab-completion usage, with minor, if any, agent usage. The $60/mo Pro+ plan is the first tier intended for moderate agent usage alongside tab-completion. Neither is intended for full, heavy agent usage, however. The only plan actually intended for full, heavy agent usage is the $200/mo Ultra plan. Agent usage is VERY TOKEN HEAVY. Heavy heavy.

Over the period of about 6 weeks, I went from Pro, to Pro+ & paygo ($60 + $150+ excess), to Ultra. If you want to use the agent, the simple reality is that a few million tokens just won’t do it. For over a month and a half here, I’ve been burning hundreds of millions of tokens a month. Truly heavy agent usage can burn billions of tokens a week.

If this is your goal, then just revisit point #1. Nothing is truly free. Cursor can’t foot the bill for you. Even if they are a $500m company, if they foot the bill for everyone’s heavy agent usage, they won’t be a $500m company for long.
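
To put “hundreds of millions of tokens” in raw-API terms, here’s a back-of-the-envelope sketch. The $3/$15 per-MTok figures are roughly Sonnet-class list prices; the monthly volume, the input/output split, and the cache discount are pure assumptions for illustration:

```python
# Back-of-the-envelope: heavy agent usage priced at raw API rates.
# Every number here is an assumption -- swap in your own usage and prices.
tokens_per_month = 300_000_000      # "hundreds of millions of tokens"
input_share = 0.90                  # agent loops are dominated by re-sent input context
input_price_per_mtok = 3.00         # USD, roughly Sonnet-class input list price
output_price_per_mtok = 15.00       # USD, roughly Sonnet-class output list price
cached_input_discount = 0.75        # assume ~75% of input cost saved via prompt caching

input_mtok = tokens_per_month * input_share / 1e6
output_mtok = tokens_per_month * (1 - input_share) / 1e6
cost = (input_mtok * input_price_per_mtok * (1 - cached_input_discount)
        + output_mtok * output_price_per_mtok)
print(f"~${cost:,.0f} per month at raw API prices")  # roughly $650/mo with these assumptions
```

Even with generous caching, that lands well past a $20 or $60 plan, which is the whole point of the Ultra tier.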

  4. Are you a pro, or a hobbyist? I wholly understand the cost concerns if your usage of Cursor or any other agentic coding tool is being paid out of your own pocket for a hobby. If you are just a hobbyist, you may find it is worth setting up a local model and plugging that into Cursor, so you aren’t incurring usage fees for the mainstream models. I think a few people have shared ways of doing this in these forums.

If you are a pro, though…consider your yearly costs vs. your income. Yearly costs for each plan are:

  • Pro: $240/year
  • Pro+: $720/year
  • Ultra: $2400/year

Now, Ultra does get up there. If you ARE a pro, however, check with the company you work for. They may well be willing to cover that cost (it’s literally like adding a dollar an hour to your effective hourly rate!)

Then, what is your salary? Compare the cost of the plan to your salary, as a percentage of your income:

  • $60k/year:
    • Pro: 0.4%
    • Pro+: 1.2%
    • Ultra: 4%
  • $100k/year:
    • Pro: 0.24%
    • Pro+: 0.72%
    • Ultra: 2.4%
  • $150k/year:
    • Pro: 0.16%
    • Pro+: 0.48%
    • Ultra: 1.6%
  • $200k/year:
    • Pro: 0.12%
    • Pro+: 0.36%
    • Ultra: 1.2%
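
If you want to run your own salary through the same math, the percentages above fall out of this little sketch (the salary figures are just examples):

```python
# Yearly plan cost as a percentage of gross salary -- plug in your own numbers.
YEARLY_PLAN_COST = {"Pro": 240, "Pro+": 720, "Ultra": 2400}   # USD/year, from above
EXAMPLE_SALARIES = [60_000, 100_000, 150_000, 200_000]        # USD/year

for salary in EXAMPLE_SALARIES:
    shares = ", ".join(f"{plan} {cost / salary:.2%}" for plan, cost in YEARLY_PLAN_COST.items())
    print(f"${salary:,}/year: {shares}")
# $60,000/year: Pro 0.40%, Pro+ 1.20%, Ultra 4.00%
# $100,000/year: Pro 0.24%, Pro+ 0.72%, Ultra 2.40%
# ...
```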

As a pro, our value…what we are paid…is often related to the value we produce. In a general sense, that is how it should be, and it’s what getting paid on the merits is all about. This is where an agentic IDE can provide value to you, as a developer: by giving you the power to produce greater value. I don’t like to speak in terms of “Oh yeah, it 10x’ed me easy!” or anything like that. It’s not really about rate. It’s about value. How valuable are you? Is an agentic coding tool helping you produce greater value? No? Then maybe agentic software development is not for you. Yes? How far can you push the “value produced” envelope?

I’ve been a professional software engineer and architect for a very long time. I always try to produce as much value as I can, with whatever tools are at my disposal. I am currently a Cursor Ultra subscriber, and FWIW, I do believe I am getting value out of it, and in turn producing great value for my employer. I don’t know if that will always be the case. A new pricing model may well diminish the value of Cursor below my threshold of acceptability, and if that happens, well, there are other options out there! Will they remain a better value than Cursor over the long run if I switch? :man_shrugging: Who knows! From what I can see, the industry seems to be converging on a $/MTok cost basis. I suspect that is where everything will end up. Providers like Cursor, Windsurf, Claude, etc. will probably differentiate mostly through some kind of multiplier…Cursor tokens may cost 1.4x, Windsurf maybe 1.25x, etc. It’ll then be up to the individual to determine which option provides the value they are seeking in the areas where they need that value most.

This is not just a new industry; like everything AI, it moves at lightning speed. I guess I am not surprised at the constant pricing model changes. I suspect the turmoil will remain in the industry for a while, and every time there is some kind of disruptive event (which seems to occur in this industry at a higher than normal rate), I would expect price disruption as well. My own hope is that each major disruptive change will reduce the cost of tokens and support more advanced models, which in the long run should actually benefit us. Another thing about new industries…they are costly at first; then, as innovation extracts all that can be extracted, prices begin to fall as things are commoditized. I see a couple of things in the future that should bring prices down and maybe commoditize it all:

  • HRM - Hierarchical Reasoning Models. These are a coming replacement for LLMs. As with many hierarchical algorithms, they allow a divide and conquer sort of approach, which often brings with it significant performance improvements, cost reduction (i.e. fewer compute cycles per token), and allow more work to be done for any given set of resources.
  • QC - Quantum Computing. LLMs are just the kind of vector-based mathematics that is perfect for quantum computing. At some point, there will be a marriage between quantum and “AI”…given the looming troubles with energy production, I suspect this marriage will be happening sooner rather than later, unless bringing commercial-grade quantum computing to market itself is disrupted. Quantum computing promises to significantly lower energy costs for LLMs, HRMs, etc. which should also make things more cost effective for us developers.

One of my coworkers likes to say: it’s never going to be worse than it is right now! I suspect he is right, and as innovation continues in the “AI” space, not only should costs improve but model capability should as well. It’s probably the worst it’s going to get right here and now, so look towards the future.

4 Likes

As artificial intelligence advances, the cost for model providers to offer models of the same performance will only get lower and lower.

DeepSeek has achieved a level close to that of Claude Sonnet 3.5 and GPT 4T at an ultra-low price. I believe DeepSeek can still profit at this price.

Of course, a large part of Cursor’s users only want the latest models, while some people - like me - prefer open-source models and models that are already good enough.

In fact, for many user needs, the latest model is not necessary. I really hope for a smarter Auto mode, or for users to be able to gradually train a classification model for their own needs by selecting Prompt → Model about 20 times.

In this way, Cursor can reduce costs and does not need to provide an advanced model every time Auto is called, and users will also be more satisfied. There is no need to use advanced models, or even to think about models, when processing small requests, and users could clearly see which model will handle their prompt.

Varies by country. eg. $10k or lower.

One thing to add: while AI model prices may be going down, the models are also becoming more capable and therefore use up more tokens, which also increases the overall cost of usage.

5 Likes

In the short term prices might be going up, and only later start going down.
But there’s also this to consider: anyone trying to do what Cursor is doing is running a ridiculously, horribly money-draining business.

1 Like

Do you mean it varies by countRY? If so, I understand that, but then so too should the pricing. Most multi-national companies set pricing according to the country.

2 Likes

Agreed (typo fixed)

This can’t be a long-term excuse, especially as model design becomes more efficient. Reasoning models are not necessarily required for a lot of the work we do, either, and using reasoning as an excuse for setting prices higher would indicate a problem in the pricing model’s design. People not using reasoning models shouldn’t be paying for them. (Right now, I’m honestly really sick of the ridiculous amount of time GPT-5 thinking models take, and the amount of thinking they do seems extremely wasteful on most tasks…so I’m eagerly, eagerly awaiting non-thinking versions of the GPT-5 models!)

Further, token counts should also be guided a lot by what the user is doing and the output they are getting. Excluding reasoning, basic model usage should become more efficient over time, as the most common tasks really shouldn’t be using more tokens overall. They should use roughly the same amounts of input and output tokens, meaning that as token costs drop over time, so too should the cost of using an agent and a model to perform work.

No, not meant at all as an excuse, rather an observation. I do hope that models become more efficient with tokens.

You are right that reasoning models are usually not required to deliver good code. That’s why we switched from Sonnet 4 Thinking to Sonnet 4 non-thinking as the default Sonnet 4 model.

Some models are more eager to use tools than others. Especially with newer models they may need additional adjustments to reduce unnecessary tool calls.

As for input tokens and output tokens being roughly the same, this is often not the case. E.g., any tool usage means we have to send additional input tokens, while output tokens vary.

GPT-5 speed has improved today.

1 Like

GPT-5 speed does seem to be improved. Is that from the OpenAI side? Was it just overloaded yesterday?

So, just to make sure I am understanding this. When you say tool usage, that would mean things like MCPs, or perhaps @Docs? If so, I agree. Tool usage increases token usage as well.

The point I was trying to make is that we use those things today. As models become more efficient and per-token (or rather, per-MTok) cost comes down, these common use cases should get cheaper. If token usage is constantly increasing for things that work just fine, then I would question that: why? I can understand a better tool coming along that can do more than an old tool, and sure, that could be more expensive. But an old tool that works fine? If it remains viable, it should become less costly as MTok cost comes down, right?

Similarly, the basic task of just producing code: “Let’s change that outline color for completed status to lime green.” (I literally just did this.) The cost of this stuff should, ultimately, come down over the long term. If it remains high, then I would sincerely question WHY IS IT REMAINING HIGH? And I don’t mean tomorrow, but over the next 6 months, a year, two years…at the rate technology is advancing right now, especially if we find a way to solve the energy usage problem (to @plesknekekec’s point), say by using quantum processing, the cost of a million-token batch should be a tiny fraction of what it is today.

Improvements for GPT-5 came from both OpenAI’s side and ours.

Tool usage would be any tool like reading code, editing code, searching in code, MCPs, web search, …

Basically, an AI decides what tools need to be called to work on the user’s task.
Each tool call is a separate AI provider API call where we have to send the whole chat thread.
While the already ingested context is cached to avoid re-processing and to save 75-90% of the input pricing cost, it does add up.
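
To make that concrete, here’s a toy model of how the input tokens pile up over a single agent turn. Every number in it (base context size, tokens per tool result, cache discount, per-MTok price) is an assumption purely for illustration, not an actual Cursor figure:

```python
# Toy model: cumulative input cost over one agent turn with 10 tool calls.
# Each tool call re-sends the whole chat thread; previously seen tokens are cached at a discount.
base_context = 20_000            # assumed: system prompt + user request + attached code
tokens_per_tool_result = 2_000   # assumed: average tokens each tool result adds to the thread
cache_discount = 0.80            # assumed: within the quoted 75-90% input-cost savings
price_per_mtok = 3.00            # assumed Sonnet-class input price, USD

total_cost = 0.0
context = base_context
for call in range(1, 11):
    # Only the tokens added since the previous call are uncached.
    cached = context - tokens_per_tool_result if call > 1 else 0
    uncached = context - cached
    total_cost += (cached * (1 - cache_discount) + uncached) * price_per_mtok / 1e6
    context += tokens_per_tool_result   # the thread grows with every tool result

print(f"~${total_cost:.2f} of input cost for one 10-tool-call turn (toy numbers)")
```

The per-turn number is tiny, but multiply it by many agent turns a day and the “it does add up” point above makes intuitive sense.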

The cost depends on model size and capability. Heavier models may be able to do more, but they also cost more to run. Sometimes older models are also charged the same (see Sonnet 3.5 and 4 API pricing).

2 Likes

Ah, I understand your definition of “tool” now. Interesting that the whole chat has to be sent for every one. I was aware that caching was involved, which is good. Interesting nevertheless…

Well, this stuff evolves day by day. Hopefully as it transforms from “novelty” to “commodity” the cost will become commoditized as well.

1 Like

I appreciate you put a lot of effort into this post but what are you talking about saying Copilot doesn’t offer Sonnet? Copilot absolutely offers Sonnet and has done since forever?

Apologies. I was looking at a big pricing table for all the models they supported, and I did not see Claude of any kind listed. Perhaps it was just an oversight on that table.

I feel as though Cursor themselves don’t know what to price things at anymore and are starting to panic, and it’s starting to really trickle down to their community. Their decisions this year (2025) have been laughable. The new models, the new price changes, the every-single-day updates because they broke something, the community in shambles with all of these new updates…it’s been going on like this for months. They even banned me previously from the forum for bringing up all of these issues, and my posts are still up from March. NOTHING has changed since then even though they said it would. Genuinely, I believe that Cursor backed themselves into a corner with their pricing and ideas, and now they’re unsure of what to do.

2 Likes

I can’t disagree with you regarding the state of things. Shambles is pretty much the word for it. It has been quite chaotic, and I’ve suffered from some very serious bugs myself. It’s hard to know if any of them are actually getting fixed, because the changelog site isn’t comprehensive (and I’m not sure they maintain a detailed changelog anywhere; their changelog site does not have any detail as far as I can tell).

Cursor is a powerful tool, but because its purpose is ultimately to accelerate the performance of the individual using it, when things go wrong the impact can be quite devastating. I lost an entire day to the “conversation length too long” bug when using Auto (and I don’t know if that’s fixed; I haven’t been brave enough to try Auto since, as there doesn’t seem to have been any update about it that I’ve seen). I had to stop using my PC entirely due to such severe WSL2 terminal integration issues that the agent was simply unusable.

These are not good things, and the Cursor team should tune in more closely to the concerns of their community. Moving at breakneck speed often means broken necks… O_o

Cursor is undoubtedly the best in the game and absolutely should be regarded as such; however, I think the business and financial side of things needs some serious help/reworking from the ground up. I’m unsure what their day-to-day spending looks like, but having been a paid user for almost a year now, if not a full year, I think there need to be renegotiations with model providers, research into how much $$$ is spent per user per month, and tracking and understanding of the patterns between the software, the financials, and the community. At this point in time it seems like they cannot figure out how to combine all three into a seamless user experience built on a positive community. Throughout 2025 a lot of users’ trust has been broken due to them going back on promises and making the community and user experience worse. All we can do is hope things get better and prices get cheaper, but we’ve been hoping that for a long time now.

Sure, I don’t necessarily disagree with any of that. My goal with this thread was just to point out that the cost is not really “egregious” as some people have been saying.

Whether it is actually worth it, well, that’s up to each individual. There are certainly issues that need to be worked out. Maybe Cursor needs an “optics” guy to sort out community relations.

Well, even though more powerful models are capable of reading and generating more tokens, I believe there is an upper bound on the absolute number of tokens needed to solve a problem (e.g., the whole code base + some design documents + some API references). So the cost of vibe coding is still very likely to be much cheaper in the future.

What I’m concerned about is Cursor’s approach to charging. It’s fair that Cursor earns some margin on each token (10%? 30%? I don’t know), but once Cursor selects such a profit model, it is incentivized to consume more tokens for each task (or just not fix bugs related to high token usage) to increase its revenue, and to explain it as “ah, you know, the model is very powerful, so it could read many, many tokens.” I think Cursor needs to finally find a way to estimate the actual outcome of each request and charge by that instead of by tokens.

1 Like

I keep thinking it would be easy to just take someone who knows the answers to all the questions and have them prepare a few clear, organized pages, without uncertainties, explaining exactly what the plan is, what the changes are, and how and when.
One page with all the updates.
A page with all the reported bugs, what’s being worked on, and the current status.
And so on.
It’s obvious this would make people much more satisfied.