As a long-time user, I want more clarity from the new pricing

Many people feel that Cursor has become more expensive. So do I.

I think the key issue is that, for long-time users, the old pricing was human-intuitive - e.g. 500 requests. We knew exactly what a “request” was. We clicked send, and that was 0.2% of our quota used.

However, with the rapid development of agent models, a request is understandably no longer the same request as before. I get it. That’s why they switched to token-based pricing.

Two things I hate here:

  1. You get no real “sense” of how much usage you are going to get. Maybe for the same problem the agent now iterates way too much on small gimmicks. It’s no longer as intuitive as a “prompt”, which is a user-defined action. And I always worry that Cursor, or the model providers, will abuse this, because they can simply push for more tokens to get more money. It’s like a taxi where the driver simply takes a detour to charge more. Taxis are regulated in the real world, but there seems to be little infrastructure that lets users trust that Cursor and/or the model providers are not detouring.
  2. You are in a world of unknowns in terms of what they are going to offer you. If it were 500 prompts, I’d get it. I could manage. Now they put a soft cap of $20, which, according to many users’ experience (mine included), falls far short of what the old plan used to be capable of. Cursor is kind enough to, sometimes, increase that limit. But you don’t know what that’s actually gonna be. Is it $28? Is it $35? This kind of uncertainty bugs me.

With such a large user pool, I do think Cursor might be able to do something here. What is the typical level of usage for a given model? Maybe put it in percentage terms: if an average gpt-5-high request is gonna cost 0.2% of your allowance, whereas Claude 4.5 is gonna cost 0.23%, that feels much better. Or simply base it on each user’s own usage history. Right now usage is in the dashboard, but that’s not something everyone will visit every day.
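To make the arithmetic concrete, here’s a minimal sketch of what I mean by percentage terms. All the prices and model names are made up for illustration - I have no idea what Cursor’s actual per-request costs are:

```python
# Hypothetical average cost per request (NOT Cursor's real numbers),
# expressed as a share of a $20 monthly allowance.
MONTHLY_ALLOWANCE_USD = 20.00

avg_request_cost_usd = {
    "gpt-5-high": 0.040,   # 0.2% of $20
    "claude-4.5": 0.046,   # 0.23% of $20
}

for model, cost in avg_request_cost_usd.items():
    share = cost / MONTHLY_ALLOWANCE_USD * 100
    print(f"An average {model} request uses ~{share:.2f}% of your allowance")
```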

For point 2, they are already doing the usage percentage prediction, which is great. But I’d still much prefer a defined cap.

I see what you are saying, and it is an inherently annoying issue, but the cost per request, even within one model, really varies too much. Even giving an average would be unhelpful: some requests can be massive, while others are very small. It is better to build your own understanding of token usage for a few models you bounce between (free, medium, and expensive), and to figure out when and how you should use each one.

Now your personal usage history, that could be useful. That’s actually a good idea. You should make a feature request and link it here. A way to look at the average or median cost per request for each model, based on your personal history, would give you an idea, especially if it also told you how many of those requests you have left based on your remaining usage balance. Then you could see, for example, that you have 30 sonnet-4.5 requests, or 100 gpt-5 requests, and effectively infinite gpt-5-mini requests. If this was shown in the model dropdown when choosing a model, or above the current in-program usage percentage for the selected model, that could be super helpful. Something like the sketch below.
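A minimal sketch of how that could work, assuming a personal usage log where each entry records the model and the dollar cost of a single request. The log format, field names, and all numbers are hypothetical, not anything Cursor actually exposes:

```python
from statistics import median
from collections import defaultdict

# Hypothetical per-request usage log (model + dollar cost of each request).
usage_log = [
    {"model": "sonnet-4.5", "cost_usd": 0.38},
    {"model": "sonnet-4.5", "cost_usd": 0.52},
    {"model": "gpt-5", "cost_usd": 0.12},
    {"model": "gpt-5", "cost_usd": 0.09},
    {"model": "gpt-5-mini", "cost_usd": 0.0},  # free-tier model
]

remaining_balance_usd = 13.40  # whatever is left of the soft cap

# Group costs by model, then estimate requests remaining from the median.
costs_by_model = defaultdict(list)
for entry in usage_log:
    costs_by_model[entry["model"]].append(entry["cost_usd"])

for model, costs in costs_by_model.items():
    med = median(costs)
    if med == 0:
        print(f"{model}: effectively infinite requests left (free)")
    else:
        print(f"{model}: median ${med:.2f}/request, "
              f"~{int(remaining_balance_usd / med)} requests left")
```

The median is probably the right statistic here, given how skewed request costs can be: one massive agent run shouldn’t drag the estimate up for everything else.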