The New Pricing System is Insane – A Main Reason I’ll Switch Platforms

Even something as simple as asking it to add a log line for tracing: it narrows the trace condition so the log it just added never shows up. If that isn't proof of manipulation, do you really think the Cursor AI is playing it straight?

The motive behind it all is their profit.

You then have to spot where it tricked you, call it out, and instruct it to trace in the simplest possible way, tail-the-last-100-lines kind of stuff.
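The "simplest possible way" here can literally mean reading the raw log yourself instead of letting the agent filter it. A minimal sketch, assuming your app writes to a single log file (the path `/tmp/app.log` is a placeholder, not anything Cursor-specific):

```shell
# Create a sample log so this sketch is self-contained; in practice
# you would point tail at whatever file your app actually writes to.
printf 'line %s\n' $(seq 1 200) > /tmp/app.log

# Read the last 100 lines directly, with no narrowing filter applied,
# so a freshly added log line cannot be hidden by a clever grep.
tail -n 100 /tmp/app.log
```

Running the unfiltered `tail` yourself is a quick way to verify whether a newly added log line actually fired, independent of whatever filtering the agent chose.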

That's very intentional, I, Robot-against-humanity kind of tricks. And you ask me to believe EA Sports didn't bias outcomes against players when they wanted to balance the landscape of players in the bigger picture of the online mode?

After it said "I cannot see the logs" five times, I suspected it was lying again, tried clicking on the grep bubble, and caught it red-handed.

And I can see from your screenshot that you are using the tool wrong.

it's generally bad to stick to one chat; try to reimagine your workflow with no more than two messages per chat.

so you prompt an agent to do the thing, and if it does it wrong - don’t ask it to fix it or change something, just edit the initial prompt addressing issues it faced and try again

every time you extend a chat the context becomes more and more compromised. LLMs are not humans, you can’t really have conversations with them. every back and forth will make the model a bit dumber, with all the things that went wrong still existing in context and all the critique from the user making it slowly “lose its mind”

have a look at this post: Complex Context - TIP!

and in general, try to one-two shot your request. every new message from the user dumbs models down. it’s not really on Cursor as it’s just how LLMs operate.


It shows the summary when hitting 100%, and basically you have to stick to one chat for the same issue to maximize productivity. Say I want to add a feature: I won't split off into a new chat in the middle of the implementation when there are rules and references already being referred to.

I can read your advice as "separate chat for each new task", but a chat builds up background context, and re-adding all that skill-wise, convention-wise, knowledge-based, and rule-based stuff is super time-consuming. I tend to keep one chat for the same kind of tasks, since the work repeats, and I separate chats for different streams of tasks: some for general error debugging, plus backup and duplicate chats for branching into different tasks concurrently.

Otherwise it's unusable. I don't think anyone expects to throw away the context background they've built up and start a new chat, though it does seem to carry some knowledge across chats too, which is why the disappearing chat history in previous updates hurt so much. And I think your argument isn't quite accurate: nobody expects the LLM to dumb itself down at any time. It can play dumb for a while, and after I remind it, it becomes clear and straight again, with me having to drive it to solve the problem normally. It's more like a rebellious teenager with a little evil spirit inside, rebelling and pulling tricks for its parents' money.

Simply saying "you're using the tool wrong" doesn't seem to be the answer.

You were saying it’s Cursor intentionally dumbing down models.

I explain to you that it's just the nature of LLMs: they become dumber because of the way you use them.

You then complain that my explanation is not a solution to your problem.

I never proposed solutions to any problems, I simply gave some advice for you to find your solutions yourself.

Just stop blaming the tool for your misuse of it.

And simply saying "the tool is intentionally bad because it doesn't work the way I want it to" is not a question.

Or you should stop guilt-tripping me about "misusing" it. It performs fine after I call out the dumbing-down/tricks. More accurately, beyond playing dumb it has loads of other tricks that aren't dumb at all but manipulative, and pretty smart too.

I'm saying it's not the solution, and I don't require you to provide one, Mr. Defender 2.

I am laying out the fact that Cursor has been manipulating the LLM, and my observations don't align with the claim that a smart model dumbs down just because you talk to it 1,000 times in the same chat.

Its behavior is more like EA Sports FIFA AI. I don't want to guilt-trip anyone, but I expected you not to say I'm using the tool wrong; the evidence doesn't point to any of that, and it's not as simple as that.

The thing is, the many tricks I can point out that it uses against my work are solid proof that it's being tricky, not "dumb as the chat grows"; quite the opposite, in fact.

I think the more it knows about you, the more it attacks from angles you didn't expect. So it's quite smart, just unethical, evil, and $-driven.

Why not. Every feature can be infinitely broken down into smaller chunks. The only reason people don’t break it down is because they can’t communicate the smaller steps or are lazy.

That is exactly what you are expected to do. You can also branch a chat before diving too deep into a specific direction so you can use that old chat to branch into a different direction.

Making new chats is one of the best things you can do. If you can't bring the model into the context in a new chat, then you're just being lazy, and that is why you are getting these results. I also restore to the previous request as soon as a request starts to go in the wrong direction; hell, I'll even stop the request while it's running if I notice it doing something weird. You never want to leave errors in the chat history and try to talk through them. The model gets so obsessed and narrow-visioned about your fix that it's really hard to get it to think beyond it.

Okay buddy. Cursor is out to get you and is intentionally having you waste tokens. Maybe take a hint and go try a different service.

Yes, but why does Cursor send the full context on every single request (including tool calls, by the looks of it)? I can easily reach a 400k context using a PRD + Task Master workflow, and this racked up an insane amount in about 15 minutes of usage with Claude 4.5.
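The cost blowup the poster describes follows directly from the arithmetic: if the full (growing) context is resent on every request, billed input tokens grow roughly quadratically with the number of turns. A rough back-of-the-envelope sketch; the per-turn context size and the price are illustrative assumptions, not Cursor's or Anthropic's actual numbers:

```python
# Illustrative sketch: cumulative input tokens billed when the entire
# conversation context is resent on each request (including tool calls).
# Numbers are assumptions for the example, not real pricing.

def cumulative_input_tokens(context_per_turn: int, turns: int) -> int:
    """Turn i resends everything accumulated over turns 1..i,
    so totals grow quadratically rather than linearly."""
    total = 0
    context = 0
    for _ in range(turns):
        context += context_per_turn  # context keeps growing each turn
        total += context             # and the whole thing is sent again
    return total

PRICE_PER_MTOK = 3.00  # hypothetical input price, USD per million tokens

tokens = cumulative_input_tokens(context_per_turn=20_000, turns=20)
print(tokens, f"${tokens / 1_000_000 * PRICE_PER_MTOK:.2f}")
# 20 turns of 20k tokens each -> 4,200,000 billed input tokens, not 400,000
```

This is also why providers offer prompt caching for repeated context; whether and how a given client applies it changes the real bill substantially.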

I can code all day on a Claude Code Max plan for $150; it's the sweet spot for me right now, having the intelligence of Claude 4.5 at that price point to work all day.

That wouldn't last me an hour in Cursor. How are they doing it? I would love Cursor's UI experience and some of its quality-of-life improvements, but right now it's just not feasible!

Claude Code does it by spending their own money to capture you as a user. Cursor used to do that, but now they have moved to a fairer pricing scheme where you more or less pay for actual token usage instead of mysterious "requests" or hourly/daily limits that can drain a lot more inference than you pay them for.

Sadly, there are no "free" and "unlimited" things in the world. You either pay full price or you become the product (for example, Gemini's free API can and will use your chats to train future iterations, or whatever they do with that data). So the real choice is whether you are willing to pay the full price or are okay with the product using you in one way or another (in CC's case, getting you accustomed to their ecosystem and making sure you won't leave so eagerly when they change their pricing scheme like Cursor did).

I don’t think Cursor will ever go back to this stage, and I don’t think any product will be at this stage infinitely. If you want the best deal, you’ll have to switch your tools constantly and get familiar with whatever has its “grace period” right now.

So the best course of action for you would be to look out for Cursor’s competitors like Kira or Trae while they’re still cheap. I won’t say they’re “on par” with Cursor but they’re at least similar to it.

Or just continue to use Claude Code until their good offer is gone. Maybe try Codex as well. Cursor and products like it might not be the best choice for you if you don’t intend to touch code yourself.

Claude Code can charge less because Anthropic is the provider of Claude 4.5 to Cursor. So it makes sense Anthropic would want people who are using Cursor for Sonnet 4.5 to feel like it's much cheaper to just go use Claude Code instead. Of course the interfaces are completely different, which is why people want to stay with Cursor. Maybe Anthropic will increase their prices later, or maybe they will always make third parties pay a premium to encourage users to just use Claude Code. Or Cursor has messed up the context and token usage on Sonnet 4.5 somehow, making it much more token-hungry than it needs to be.

i paid 200 usd 3-4 days ago and my ultra sub is already at the limit. this pricing is a ■■■■■■■ pathetic joke. ■■■■■■■ absurdly overpriced! ■■■■■■ as ■■■■!

Let me guess, you used sonnet 4.5 for every prompt.


500 premium requests for $20 with VS Code.

You should share your “Included Usage” page so we can see

one of my gripes as well

It's fine. Is it another wrapper of VS Code? Yes, just like the Cursor IDE, or like the browsers based on Chromium. A random no-name tool? Hmm, that's far from reality; it's backed by ByteDance, a big enterprise from Asia. Privacy concerns? Well, you can find videos from hackers on YouTube who will tell you all you need to know on that topic. It's a very good and much cheaper option, really cheaper: something like $10 for 600 monthly requests of GPT-5! Like Cursor 7 months ago.

Regarding Trae, is the GPT-5 model still in beta?

Last time I tried Trae, I made a Sonnet-4 request that was marked "beta". It was the first of the 10 fast premium requests I had on the trial, and I sat in a queue for 48 minutes.

Right now it looks like GPT-5, Sonnet-4, and o3 are all in "beta". Maybe this is no longer true, and maybe that long wait was an anomaly. But are you able to get 600 GPT-5 requests a month for $10 without waiting in a queue? Because that is clearly a good deal, and it's hard to imagine it can be sustainable, but of course ByteDance has money to burn to get people to adopt their software.

I also see they have made their privacy options more explicit which is good.

Also, when I tried Trae, there was no C# extension that worked, which was definitely a deal breaker for me. This is probably why the Cursor team maintains their own C# extension for their VS Code fork.

i've been using claude code in cursor via the anthropic $20/month sub and it's working wonders. almost as good as all the cursor stuff, but like… 2903789238749324 times cheaper.
like, anthropic's limits are reasonable.

cursor's limits are a greedy joke.