Pro Plan’s “Unlimited” U‑Turn

First off, a disclaimer: I still love Cursor and I’m too lazy to look for a replacement and change my workflow.

That said, I feel the need to speak my mind. Getting users excited about the Pro plan being unlimited, encouraging them to use it to its fullest, and then saying “we’ve changed our minds” a week or even less later is, frankly, a scummy move.

Personally, I probably haven’t lost much (except for usage transparency), since I subscribed back when it was a 500-request package with the option to buy more on a pay-per-use basis. But for anyone who just bought the Pro plan in the last few days after seeing “Unlimited” on the official pricing page… well, this looks really, really bad. It feels like a classic bait-and-switch.

And to think, before this latest change, I was already struggling to see the difference between Pro and Ultra. Both were supposedly unlimited, yet Ultra didn’t even come with free MAX…

15 Likes

Yes, honestly we hope they find a solution for us!

“I still love Cursor and I’m too lazy to look for a replacement and change my workflow”. → I felt this in my soul :laughing:

OT: Until the transparency issue is resolved and someone can make sense of this bull-in-a-china-shop situation, I think it’s up to us, the users, to keep using Cursor and surfacing these issues and bugs. I can only assume the technical side of these errors is more involved than they let on.

If you’re using anything like Opus or o3-pro etc., you will hit limits and be charged an absurd amount; you’re better off using the “free” LLMs like Claude 3.7 Thinking (NOT MAX) under the Pro plan’s new rates.

Edit: Not MAX, Thinking. For Claude 3.7

1 Like

I wish more people would acknowledge this: they change plans like we change underwear. Get people on the hype, then choke us out.

Are there MAX models that are available at a non-MAX rate? 0_o

Sorry I haven’t had coffee yet. I meant 3.7 Thinking model not the “MAX” mode enabled. I’ll correct that.


Does anyone know what the hell it’s talking about with “auto-select”? I’ve looked everywhere.
This is what I get on every single request, no matter how long I wait.

1 Like

Create a bug report about this. I have this mode displayed correctly.

Version: 1.2.1 (user setup)
VSCode Version: 1.99.3
Commit: 031e7e0ff1e2eda9c1a0f5df67d44053b059c5d0
Date: 2025-07-03T06:16:02.610Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.22631

I asked Gemini in AI Studio to calculate the rates, but it didn’t have enough data from my ~$3.50 spent to do it correctly ( ͡° ͜ʖ ͡°)

You can try to reverse-engineer the rates yourself, if you’re curious and rich enough to spend at least $10 across different models.

Upd: pic changed
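The reverse-rate idea above can be sketched like this: given a few (input tokens, output tokens, billed cost) samples from your usage dashboard, a least-squares fit recovers approximate per-token rates. Everything below — the sample numbers and the assumed linear cost model — is made up for illustration, not Cursor’s actual billing.

```python
def fit_rates(samples):
    """Least-squares fit of (rate_in, rate_out) for the assumed model
    cost ≈ rate_in * input_tokens + rate_out * output_tokens,
    solved via the 2x2 normal equations (no external libraries)."""
    sxx = sum(i * i for i, o, c in samples)
    sxy = sum(i * o for i, o, c in samples)
    syy = sum(o * o for i, o, c in samples)
    sxc = sum(i * c for i, o, c in samples)
    syc = sum(o * c for i, o, c in samples)
    det = sxx * syy - sxy * sxy
    rate_in = (sxc * syy - sxy * syc) / det
    rate_out = (sxx * syc - sxy * sxc) / det
    return rate_in, rate_out

# Made-up usage log: (input tokens, output tokens, dollars billed)
samples = [
    (120_000, 8_000, 0.48),
    (60_000, 15_000, 0.41),
    (200_000, 5_000, 0.68),
]
rate_in, rate_out = fit_rates(samples)
print(f"~${rate_in * 1e6:.2f} per 1M input tokens, "
      f"~${rate_out * 1e6:.2f} per 1M output tokens")
```

With more samples across different models the fit gets more reliable, which is why you’d need to spend a bit on each model to get usable numbers.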

1 Like

Exactly - I was actually talking with Support about this “Unlimited” but limited, which is calculated based on “compute” and not requests. I would prefer to know what my usage is and when it runs out - no issue with paying for more, but at the moment it is like tossing a coin.

1 Like

I don’t like the lack of transparency. Obviously I would use “unlimited” less if I had an idea of when I might start getting charged. So hiding the request count and other things is dumb.

2 Likes

I was explaining my issue on Reddit and then I suddenly got banned from there. I really don’t like how they’re handling customer relations right now; they should either revert to the correct old system or honestly clarify the matter in a proper way.

Hey @Artemonim We changed the wording on our pricing page, as it was confusing some folks, who thought

“Unlimited Agent requests (with rate limits for some models)”

meant unlimited, un-rate-limited LLM calls to all models. We’re working on improving our communication around this pricing change; this wording change is one of the things we’re trying to improve.

Nothing has changed behind the scenes for the Pro plan.

Pro+ has 3x limits
Ultra has 20x limits

  1. As far as I understand, a huge number of your users rushed to the forum at almost the same time to complain about sudden limits. That’s very strange and doesn’t look like a mere “wording change.”

  2. Under the old pricing system I knew I had roughly 2–3 weeks before I’d need to spend an extra $5–10. Then I saw in my dashboard that premium requests were included in Pro and weren’t being billed, and on the official site it began to say that my requests were now unlimited. Yesterday I updated to 1.2.1, and instantly I got a notice that I’d used up my limits and owed money for further usage.

  3. As far as I understand, we’re now in permanent MAX mode.
    3.1. What, then, is the point of having a separate MAX mode and context limits?
    3.2. By how much do your context‑window optimizations reduce usage costs?
    3.3. If your optimizations significantly lower costs, perhaps make MAX mode the default and, for the limited mode, add a checkbox like “Optimize my usage”?

  4. I understand that the limits are adaptive, but I’d like greater transparency so I can plan my budget and work. For example, it seems I burned through my Gemini package on a free open‑source project over several days whereas, had I known about the limits, I’d have saved it for paid work.

And… Where is it?

@Artemonim I will address the specific point about Max mode. This isn’t accurate, and it’s not what the thread where I responded is talking about. There is no permanent Max mode; it’s a setting you can enable and disable.

Context window optimization does reduce cost!

  • Fewer tokens sent, received, or processed = less cost. Avoid attaching unnecessary context like files, rules, logs, etc.
  • Shorter chats also mean less context to carry around and process, though the currently active thread is cached by the AI provider to reduce cost.
  • There is no extra optimization in Max mode vs non-Max.
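As a rough illustration of the bullet points above — with entirely made-up per-token rates, since Cursor doesn’t publish per-request pricing — trimming attached context directly shrinks the billable input tokens:

```python
# Hypothetical rates for illustration only; real provider pricing
# varies by model and is not what Cursor actually charges.
RATE_IN = 3.00 / 1_000_000    # dollars per input token
RATE_OUT = 15.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one LLM call: every token sent or received is billed."""
    return input_tokens * RATE_IN + output_tokens * RATE_OUT

# Same question asked two ways: whole file attached vs. a trimmed snippet.
bloated = request_cost(input_tokens=80_000, output_tokens=1_000)
trimmed = request_cost(input_tokens=6_000, output_tokens=1_000)
print(f"bloated: ${bloated:.3f}, trimmed: ${trimmed:.3f}")
```

The output tokens are identical in both calls; the entire difference comes from how much context was attached, which is the point about avoiding unnecessary files, rules, and logs.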
1 Like