Is Cursor really nerfing models to sell "MAX" models?

I can confirm that Cursor has been really bad lately… it feels like the model has been dumbed down. I am using Claude 3.7 and it just doesn't make sense: it doesn't follow on from anything in the conversation and ignores the whole logical path of what has already been done, what we have been working on, etc.

I think they nerfed it; it really feels like that. I was going to propose that we use it at work, but how can I right now? It would be embarrassing.

1 Like

Yeah, I noticed it as soon as that "Auto" option appeared and started selecting itself: whatever model it auto-selects seems lobotomized af. I figured they did it to save money.

1 Like

Back when Cursor was in its previous state and the MAX models hadn't been released yet, it tantalized us with 3.5 Sonnet. But now, with the new commercial models they've named MAX, along with Claude's new ■■■■■■ model, they're poisoning us. Very soon there will be a rush towards tools like Trae, VS Code with Copilot, and Windsurf. Thanks, Cursor, those were the good old days…

The way they calculate fast requests is also not very clear. I can see several entries in the logs marked "Errored, Not Charged", yet they are still counted under fast premium requests, so what gives?
I can prove this: I started this morning at roughly 370/500 of my used quota and now it's at 463/500, yet the logs don't even show 32 requests (of which 6+ are marked "Errored, Not Charged").
For the past 15–20 days Cursor has been more headache than anything: fast requests are consumed like water, the models give poor responses, and requests keep erroring out.
It's honestly giving me anxiety. I was very impressed during the trial, when I got a lot more delivered with the 150 free fast-request credits, and now I'm 400+ requests down with hardly any output.

Same here. Our enterprise was already paying 40 USD for a Copilot subscription anyway, and I was seriously thinking of recommending this at work. Thankfully, I decided to wait a bit.

Why don't you charge the MAX rate upfront and then cash back the cost of the unused context window later?

Someone using their whole library of Greek-Turkish erotica in each prompt? Already charged and paid for. Someone saying "hello!" to the agent and using only a few percent of the context? Rebate the unused cost.
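To spell the idea out, here is a minimal sketch of "charge the full MAX rate upfront, rebate the unused share of the window". The window size and price are numbers I made up for illustration, not Cursor's actual billing:

```python
# Sketch of the proposed billing: charge the full MAX rate upfront,
# then rebate whatever share of the context window went unused.
# The window size and price below are illustrative assumptions only.

MAX_CONTEXT_TOKENS = 200_000   # assumed full context window of a MAX model
MAX_REQUEST_PRICE = 0.30       # assumed upfront charge per MAX request, in USD

def settle_request(tokens_used: int) -> tuple[float, float]:
    """Return (final_cost, rebate) once the request completes."""
    used_fraction = min(tokens_used / MAX_CONTEXT_TOKENS, 1.0)
    final_cost = round(MAX_REQUEST_PRICE * used_fraction, 4)
    rebate = round(MAX_REQUEST_PRICE - final_cost, 4)
    return final_cost, rebate

print(settle_request(200_000))  # full window used  -> (0.3, 0.0), nothing refunded
print(settle_request(4_000))    # a short "hello"   -> (0.006, 0.294), most of it rebated
```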

1 Like

One of the issues with the difference between the MAX model and the regular model, especially for Claude, is that the maximum thinking length is set to 2k for the regular model (even though it costs 2 fast requests), while MAX gets 128k.
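If those numbers are right, the gap is essentially just the thinking budget passed through to the model. This is roughly what that parameter looks like on Anthropic's API; the exact values Cursor sends are my assumption, not something they have published:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Reportedly the regular tier: a small 2k-token reasoning budget.
regular = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=8_192,
    thinking={"type": "enabled", "budget_tokens": 2_000},   # assumed regular-tier budget
    messages=[{"role": "user", "content": "Refactor this function to remove duplication."}],
)

# What MAX supposedly unlocks: a far larger budget. max_tokens must exceed the
# budget, and budgets approaching 128k need Anthropic's extended-output beta.
maxed = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=64_000,
    thinking={"type": "enabled", "budget_tokens": 32_000},  # illustrative larger budget
    messages=[{"role": "user", "content": "Refactor this function to remove duplication."}],
)
```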

1 Like

Hi everyone,

I’m also facing exactly the same issues, and honestly, it’s becoming unsustainable to use Cursor.

Instead of helping, Cursor is now getting in the way:

• You give it a clear instruction and it starts modifying unrelated parts of the code like crazy.

• You tell it explicitly: “Do not make any changes without my approval, just analyze and propose a solution” — and it obeys for one message, then immediately disobeys and starts changing code on its own.

• You ask it to generate a simple class, and it produces 20 compilation errors — trying to fix one thing while breaking two others.

Not long ago, Cursor was an incredible tool.

Now it feels like it’s been completely dumbed down. Instead of making me more productive, it’s wasting my time and adding errors that weren’t even there before.

Please, we need honest answers:

• What exactly changed in how the models behave inside Cursor?

• Are you secretly reducing context, model capabilities, or introducing optimizations that harm quality?

• Is this happening to push users toward the MAX plan?

If Cursor doesn’t clarify and fix this soon, it will sadly stop being a viable tool for professional development.

Thanks.

2 Likes

@danperks and @truell20
Instead of charging additional dollars, can you just charge a higher number of requests, like 2–4 requests per MAX call? Many of us pay for Cursor and get it reimbursed by our organization, and it becomes painful to explain these varying $0.05 charges.

Also, I sometimes only use 200–300 requests per month, and the remaining ~200 are wasted; I could spend those on MAX models instead. Since $20/500 is $0.04 per request, you could charge 2 requests ($0.08) to cover the cost of a MAX call.
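Spelling out the arithmetic, using only the plan numbers above:

```python
plan_price = 20.00         # USD per month for Pro
included_requests = 500    # fast requests included in the plan

per_request_value = plan_price / included_requests
print(per_request_value)        # 0.04 -> each included request is worth $0.04

# Billing a MAX call as 2 included requests instead of extra dollars:
print(2 * per_request_value)    # 0.08 -> the $0.08 figure mentioned above
```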

5 Likes

Exactly, that would be a possible solution here. People see their included requests go unused when they switch to the MAX model. I can understand wanting to limit the calls, but wouldn't a fee of 3–5 requests be equivalent? It's weird when virtual and real currencies are mixed up in the pricing.

2 Likes

I ended up adding a rules / instructions file and that really helped!
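For anyone who wants to try the same: Cursor picks up project-level rules (for example a .cursorrules file in the repo root). The wording below is just a sketch of the kind of instructions I added, not an official template:

```text
# .cursorrules (project root)
- Do not modify any file that was not explicitly mentioned in the prompt.
- Before editing, summarize the planned change and wait for approval.
- Keep changes minimal: fix only what was asked, no drive-by refactors.
- After generating code, list any assumptions you made.
```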

Your post a year ago says it was 10K then:

So in the time since, you raised the limit to ~64K, and then to 120K for 3.7 and Gemini 2.5.

Model context lengths are increasing rapidly while per-token costs are plummeting, especially for cached input. I think most users expect Cursor to keep sharing some of those improvements with regular Pro usage, as you did last year, regardless of whether a MAX-style option to pay for the model's full capabilities is also offered.

If you keep the maximum context length fixed, the effect is similar to frozen tax brackets under high inflation: more and more usage gets pushed to MAX over time, as expectations of 'normal' usage shift with models getting better at productively using more context.
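To make the frozen-bracket point concrete, here is a toy calculation; every token count in it is invented purely for illustration:

```python
# Toy model of the "frozen tax bracket" effect: with a fixed Pro context cap,
# an ever larger share of a typical session overflows the cap as prompts grow,
# effectively pushing that work onto MAX. All numbers are made up.

PRO_CONTEXT_CAP = 120_000  # assumed fixed Pro cap, in tokens

typical_session_tokens = {
    "today": 60_000,
    "in a year": 150_000,
    "in two years": 300_000,
}

for when, tokens in typical_session_tokens.items():
    overflow = max(tokens - PRO_CONTEXT_CAP, 0)
    share = overflow / tokens
    print(f"{when}: {share:.0%} of a typical session no longer fits under the Pro cap")
```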

Is that the plan, or will you continue to increase context lengths roughly in line with capability improvements and providers' falling per-token costs, as you have previously?