Feedback: New Pricing Model

Thank you for your reply. My question was not clear. I understand that once we’ve reached the rate limit, we’re billed for subsequent requests.

It’s a question of transparency. We don’t know when this happens. It magically switches from one mode to another without notifying us.

I have no problem paying for additional requests. I think it’s normal to have to pay to use AI intensively.

I’d just like to be aware of what’s going on: where I stand with the rate limit, and whether my next prompt will be included in my plan or paid for.

At the moment, I feel like I’ve given the AI access to my credit card and it can do whatever it wants with it without informing me.

It’s only when I’ve reached my budget limit that I’m pleasantly surprised to find that nothing works until I put a coin back into the machine.

This feeling is extremely unpleasant.


@CLSixteen Yes, that’s understandable; there is already a feature request out for this.

The switch to usage-based pricing when a limit runs out (fast requests before, rate limits now) has always happened without a notification once it is enabled.
You can turn it off if it’s just about usage on Desktop, and you will get notified.

I did forward the feedback to the Cursor team.


Regardless of the new or old pricing model, the current status (current chat context size, whether or not you are rate limited, the current chat’s request count, how many of the monthly 500 requests remain) needs to be obviously displayed right there in the chat status bar, e.g. on the left-hand side, opposite the “Start a new chat for better results. New Chat” text.

The difference is that before, it was predictable, and you could easily check when you were coming up on paid pricing.

Hi Cursor Team,

Don’t want to be that guy, but you really made a bad marketing move here:

  • Pricing changed without warning
  • It’s way too complex: I had to dig through the forum and many docs just to barely understand the new vibe
  • Nothing worse than unpredictable, confusing pricing
  • Opting out is an illusion: legacy pricing doesn’t reflect the old service (fast requests are slower than my grandma, and my grandma passed 15 years ago – rest her soul)

I’m fine with any pricing when I clearly understand what I’m paying for (and I agreed to it). That way, I can adapt my usage to fit my budget.

I don’t know how to optimize my usage to boost my performance anymore. My flow is broken and I don’t like “Auto” mode because I want to know which model I’m using.

So instead of coding, I’m here writing a post like a Karen.

Cursor is an amazing tool. Please fix this. Right now, you’re just handing your competitors a golden opportunity to lure your customers away.

FYI: Asking your clients to upgrade or turn on usage-based pricing is a tough sell when they don’t even know why they’re hitting rate limits.

Good luck. I hope you evolve back toward the smooth, predictable usage experience we loved.


Hello Cursor team,
I presume that you made this change because it needed to happen, but as a user of Cursor for 5 months I would like to share my opinion on this new pricing.

First of all, thanks for your efforts on this product; I really appreciate it and have been using it heavily as a frontend developer for 5 months, 4 of them on Pro pricing.

But I must say this new pricing is really upsetting, for the following reasons:

1- The Pro plan says it has unlimited completions, and back then it effectively did, because we at least had slow requests after the limit, which was fine since we still had the chance to keep working with the product. Now when the limit ends, we are not able to use Claude 3.7 or 4.0 in either normal or thinking mode. I have experimented a lot with the free LLM models in Cursor, such as Gemini 2.5 Pro and the others. In the beginning Gemini 2.5 Pro was doing very well; I don’t know what happened, but after the recent changes it became a problematic model. These free models are somewhat useless on big codebase projects: they make a lot of mistakes and fail to apply changes to the project. So I find these free models pretty useless for my use cases. Therefore this “unlimited completions” feature is not really an “unlimited” thing, IMO.

2- Back then we at least had a panel that showed our usage out of 500 and updated exactly as we spent our requests, so we knew how much we had used.

3- Pro users had a 500 fast-request limit (which works out to $20 / 500 = $0.04, i.e. 4 cents per request), which makes sense, but now I hear people saying they spend their request limit in a few days?? I work 6-7 hours a day on my projects, and back then on the Pro plan using Claude 3.7 Thinking my request limit used to last at least two weeks. Is this due to the change in price per request? It looks like the Pro plan now doesn’t exactly provide 500 requests for the Claude models.

My flow, too, is REALLY BROKEN.


I don’t know who made those changes, but they reopened space for other companies. Good job!


You people need to give clear-cut answers as to what we’re paying for. Buy the Pro+ plan for 3x n requests? OK, what is n? 200x n? I use this for work. I need to know exactly what I’m paying for and what I’m buying. You don’t buy a car with a warranty that lasts for n amount of time. If nothing else, you need to provide transparency when we’re approaching a limit, like other products do. Example: “You have 3 requests remaining before you hit the limit for today.” That’s transparency. Then I can decide whether to let an auto model handle it and just fix whatever it breaks afterward, or use up part of my remaining limit.


I find it extremely disappointing that a company that provides such a good product has a financial and administrative department so bad that it is unable to provide a transparent pricing policy. For as long as this is the case, I am out.

I was paying $40 per month previously. With the same usage, I have already used my entire limit for the month, and today is only the 4th. What the hell…

Model: claude-4-sonnet

Fire whoever came up with this new “pricing model.” It’s completely opaque, Cursor has been nerfed, and it makes little to no sense. What is “compute”? What are the limits? I can’t use Claude 4 Sonnet at all; maybe “Auto” works, but that’s about as much use as a chocolate fire poker.

There has been an update since.