Prices. So strange

Token-based pricing makes sense; what’s wrong here is the amount of tokens Cursor sends to the models. You guys need to work on optimizing that.

Why would someone use the orchestrator when it ends up costing more (due to increased token usage) than the original model on its own?

1 Like

It’s funny because before, with slow mode, $20 lasted me the whole month; now, with $50, I get two weeks. So yes, there is a problem.

At least put slow mode back for the $50 plan.

3 Likes

A bit of a self-fulfilling prophecy, too, isn’t it?
If you shoehorn everyone into the MCP + heavy-agentic slot, then sure, token-based pricing makes perfect sense. You want power? You pay. No questions asked.
But if you don’t use much of that fancy stuff, and you see Claude requesting a bunch of cache simply because “well, that’s what Claude does”… dunno, man, that’s a different feel. Sorry for cutting some corners here, but it takes some mental flexing to get from “hmm, so that’s just what Claude does” to “and that makes the price increase completely justified”. Same for “we also believe that’s where others will turn to as well”: if you’re the market leader and you start doing this, then yeah, I’m sure others will follow.

It’s just a tad much. Don’t you see that? “Here, dudes, have a steep price increase on the house. Oh, and btw, usage is now more expensive too, at least for the models you need to use a lot if you depend on our most reliable agent tools.”

Just rename your plans so that they speak for themselves:

2 Likes

That last one made me giggle, but yes, it would be a good step. And perhaps some handy refs like “here guys, auto mode can ■■■■ if you expect it to do too much, but it can be really handy if you use it properly”, followed by some examples of what it can do just fine and what it probably can’t…

Then work on better automatic session handoffs, with or without compacting on context overflow, and perhaps even a cute little widget telling you “this particular command has cost you blabla tokens/price”… even behind a toggle, for those of us who care about that stuff.

You don’t get it!

Auto is unlimited - this is the main selling line. So, per the Infinite Monkey Theorem, you just gotta repeat your request enough times. You got enough time for that, right?

But in fact, Auto is not as useless as it’s said to be.

Nah, man, Auto ain’t quite cutting it for my masochistic drive to cast pearls before swine. Fresh session. Rules in place. Good grounded doc. A 12-file app. Claude chews through it like it’s nothing. Auto does okay too. But Gemini 2.5 Pro somehow needs a context reset after every single edit now.

We must be using different Auto modes.
Auto always gives me non-working implementations that still miss the mark even after fixing, and cleaning up the mess “Auto” produces usually takes hours.

If I give the same implementation plan to a proper model: 10 minutes of implementation, 2-3 minor fixes and – boom – done. Next task.

1 Like

That’s quite a long time :eyes:
Most likely, the task is too complex or the prompt is bad.

You cannot judge this without knowing the feature being implemented. I also wouldn’t consider 10 minutes of AI implementation to be long.

Also, the fact that Auto can’t implement such things while other models can should give you your answer.

2 Likes

With a rifle, you can kill a mosquito and you can kill an elephant. You can kill a mosquito with a slipper, but you can’t kill an elephant. Are you going to use a rifle to kill a mosquito?

The topic starter is making a valid point. I raised this question too and was routinely brushed off with something like “send us all your examples; we only pass your tokens through, learn to optimize.” That’s all very interesting, except that Claude Code, within its 10-30 requests per session (roughly its limit before the mid-day pause), can solve a task that the same Claude in Cursor, after 5 requests, had only analyzed and planned (2-3 million tokens spent). Apparently I somehow work thoughtfully in Claude Code and poorly in Cursor… 2-3 million tokens for tasks like these looks more ridiculous than worth investigating…

1 Like

Guys, as much as I like Elephants wearing slippers, can we pls get back on topic?

I completely agree with @dreamrandomhub. Over the last month of my Cursor sub, it became a circus.

1 Like

Most likely, this is an architectural problem: how Claude Code processes code internally vs. how Cursor communicates with the model. It is obvious that overspending occurs every time a tool is called, and I even filed a bug report in which I showed that Claude clearly consumes many times more cache.

I still haven’t received an answer, by the way.

Or was it the mosquito wearing slippers? I am confused…

1 Like

I too have started to dip my toes into Claude Code (btw, they offer a free 7-day trial atm). Now, mind you, I have had exactly 3 sessions with it, but to say it’s more efficient than Cursor’s Claude integration would be quite the understatement.

It actually follows your rules too! Like ALL of them.
Within 5 minutes I had it set up to:

Always write automated tests to verify its own work
Compact the context after each task if it took more than either 10 tool calls or 25K tokens
Write session handoffs, either on command or after a 5-minute session
Make sure it has comprehensive guardrails
Limit token usage as much as possible
Set up fully automatic git commits

Mistakes it made: ZERO.

Maybe this changes when the trial expires (I’ve seen weirder things), but for a dream start it scored a near-perfect 10.
It still needs good guidance and good prompts, but it’s easy and quite enjoyable.
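
If you want to replicate this: rules like these typically live in a CLAUDE.md file at the project root, which Claude Code reads automatically. A rough sketch in my own phrasing (not the exact rules from that session), as a starting point:

```bash
# Rough sketch of a CLAUDE.md encoding rules like the ones above -- adapt to taste.
cat > CLAUDE.md <<'EOF'
# Project rules for Claude Code
- After completing any task, write automated tests that verify your own work.
- If a task takes more than 10 tool calls or ~25K tokens, compact the context
  before moving on.
- Write a session handoff note on command, or after ~5 minutes of work.
- Keep token usage as low as possible; prefer targeted edits over full rewrites.
- After each verified change, make a git commit automatically.
EOF
```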

Tips:

/clear > wipes the context
/compact > compacts the context
Ask it to write a comprehensive guardrail package
Then ask it to compact that package into short cards

Yes, you’ll run into session limitations, but you can get stuff done.
Once that yellow “You’re about to hit your limit” message pops up, immediately make it write a session handoff, because that limit WILL kill it mid-activity.
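
A handoff note doesn’t need to be fancy, by the way. Something like this template (my own invention, not a built-in Claude Code feature) is enough for the next session to pick up where the last one died:

```bash
# Hypothetical handoff template -- ask the agent to fill it in before the limit hits.
cat > HANDOFF.md <<'EOF'
# Session handoff
- Task in progress: <what was being worked on>
- Done so far: <completed steps, files touched>
- Next steps: <what the next session should tackle first>
- Gotchas: <anything surprising discovered this session>
EOF
```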

1 Like

I will also give CC a try.
Right now I am trying Trae (in auto mode), and it actually seems to get some work done. Windsurf with the free SWE-1 model is nice for simple tasks. Gemini CLI does not deliver what it promises (1,000 Gemini Pro prompts per day).

1 Like

Yeah, give it a shot. I think you’ll quickly agree it’s a breath of fresh air. Does it do everything that Cursor does? No. But with Cursor’s new price model… pfff. And if Cursor’s LLM integrations at least delivered clearly superior results…

Be careful of one thing, btw: if you take the more expensive sub (I think it’s USD 100), it will default CC to Opus, which burns through tokens like a hot rod burns fuel. You can’t change that in the terminal, afaik, so you need to change it in a config file. There are also tons of useful YT tutorials, so you’ll be fine. Good luck!
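
For reference, this is roughly what that change looks like, assuming Claude Code still honors the ANTHROPIC_MODEL environment variable and a top-level “model” key in ~/.claude/settings.json (both the key and the model ID below are assumptions; check the docs for current names):

```bash
# Assumption: Claude Code reads ANTHROPIC_MODEL and ~/.claude/settings.json;
# the model ID here is illustrative only.

# Option 1: per-session override via environment variable
export ANTHROPIC_MODEL="claude-sonnet-4-20250514"
claude

# Option 2: persist it in the user settings file
# (careful: this overwrites any existing settings)
cat > ~/.claude/settings.json <<'EOF'
{
  "model": "claude-sonnet-4-20250514"
}
EOF
```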

1 Like