Yes, in order to charge more for independent use, they even blocked the GitHub Copilot Chat window in version 0.47.
Let me give one pattern that seems to work okay with this new mode:
Do a round of huge-context planning in “ask” mode (i.e. with user-attached context or the regular Claude agent searching) using Claude Max Thinking, and produce a very thorough plan and summary. Then switch back to regular Claude for all the tool calling and iteration, maybe occasionally going back to Claude Max for a bird’s-eye view.
But that’s really twisting the intended usage of this new release.
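For anyone who wants to approximate this split outside the editor, here is a minimal sketch using the Anthropic Python SDK: one big-context planning call with extended thinking, then a cheaper follow-up call that carries only the resulting plan. The model name, token budgets, and the `repo_summary.md` file are all assumptions for illustration, not how Cursor does it internally:

```python
# Sketch of the two-stage pattern: one expensive planning pass, then
# cheap iteration that only carries the resulting plan as context.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env

codebase_context = open("repo_summary.md").read()  # your attached context

# Stage 1: big-context planning with extended thinking (the "Max"-like pass).
plan_resp = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=20000,
    thinking={"type": "enabled", "budget_tokens": 16000},
    messages=[{
        "role": "user",
        "content": f"{codebase_context}\n\nProduce a very thorough "
                   "implementation plan and a short summary for feature X.",
    }],
)
plan = "".join(b.text for b in plan_resp.content if b.type == "text")

# Stage 2: regular, cheaper calls that iterate against the plan only.
step_resp = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4000,
    messages=[{
        "role": "user",
        "content": f"Plan:\n{plan}\n\nImplement step 1 of the plan.",
    }],
)
print(step_resp.content[0].text)
```

The point of the split is that the expensive thinking pass runs once, while every iteration afterwards pays only for the plan text instead of the whole codebase context.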
That’s not because of them, that’s Microsoft.
This is the way to approach any task, just like you would in the real world (before AI).
Start a new conversation > assess your codebase (Ask), brainstorm the implementation (Ask), create a solid scope (Ask), and then execute (Agent).
Been doing this with 3.7 Thinking and the results are stellar.
No, it’s obvious they do not.
Create a problem to sell the solution; create a linter error to charge for fixing it. Americans, wow.
Unfortunately, I have to agree with you. It’s not worth it for now; I’ll keep using the normal mode. I even improved my prompts and so on, and I didn’t get better or different results.
It is evident that the development team is quite envious of their competitor’s revenue model based on tool-usage frequency, which is highly profitable. The newly introduced Max mode is merely a trial, as they are not fully prepared to provide the complete 200K context capacity. In individual conversations it quickly reaches a point where Cursor cannot handle the information (you can distinctly feel the lag). Therefore, this Max mode serves as an experiment for changing the pricing model. If that day ever comes, I will not hesitate to abandon it.
For me, if spending money can improve efficiency and I don’t frequently run into the usual Claude 3.7 peak-usage prompt limits, then it is definitely worth it.
The tool calls bankrupted me.
@danperks So, are you guys going to start caring? $56 USD for MAX, and it doesn’t work.
0.45 was so much better. Please, your cost-saving measures at Cursor have failed badly.
Why are you guys acting like OpenAI now? Disgusting, really. I felt so productive in 0.45. You should have just increased the price from $20 to, say, $80 or $100; I’m more than willing to pay. Instead you try some fancy new logic that doesn’t work.
On top of that, you just don’t want to leverage the cheap DeepSeek R1, somehow paying a “3rd party” for privacy at 16x the cost? How much is it now, $7/M tokens? DeepSeek off-peak is $0.50.
You’re on Fireworks, right?
Something I’m curious about: I’m not using Max unless I have something big to implement and I want it to go through and plan it well.
For everything else I’m using 3.7 AND have thinking on. It seems that after your planning stage you don’t have thinking on?
I’m assuming (maybe wrongly) that having thinking on doesn’t cost extra and just makes it take a bit more time to plan each time?
I guess, what’s the benefit of having thinking on or off?
Including thinking costs 2x the usage credits. That may be worth it in some cases. I have found that if I am asking the agent to just “do” something that it should know how to do, then thinking is redundant. If it needs to “solve” or “plan”, then thinking can give a (small, I’ve found) benefit.
Oh, it doesn’t say that (since the change). I actually assumed that with the latest change it no longer cost twice as much, now that they have Max. Also, I’ve actually run out of my premium credits, so I’m on the slow responses and still using thinking.
Help me understand. I sent a single prompt with 3.7 MAX; it read many files and counted three premium tool calls.
What is a premium tool call? Can reading a codebase trigger premium tool calls?
Hey guys, thanks for the feedback!
While we are continuing to refine and improve the Max mode for Claude 3.7 behind the scenes, I just want to reiterate a couple things.
Firstly, while this mode is “no expenses spared”, this does not mean that Max is a silver bullet. Like any LLM-powered tool, the models have certain ways of working that you, as a user, may have to lean into to get the best outputs possible.
We are already seeing users find great uses for Max, but please don’t expect it to solve every query you throw at it - the underlying models still have hard limits on their abilities, regardless of how strong Cursor is at feeding the models the information they need.
Additionally, Max mode is entirely optional, with no changes made to the existing model selection. Claude 3.7 and 3.7 thinking are still available within the basic Pro plan, and will be fit for the majority of queries that you may throw at Cursor.
Please keep your feedback coming on the model, as any changes we can make to improve its performance will be acted upon promptly.
However, as with any LLM, we cannot guarantee results, so please only use Max if the cost is not an issue for you, even when the outputs are bad.
I’ve been playing with Windsurf, and one of the nice things they do is make certain tool calls free, the biggest example being lint fixes. There is a custom model that handles linting, and it is free, so lint-fix tool calls show up at no charge.
It would be really nice if Cursor implemented something similar.
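To make the suggestion concrete, here is a hypothetical sketch of that routing idea: a dispatcher that sends lint fixes to a free local fixer and meters everything else. Every name here (`ToolCall`, `fix_lints`, the credit counter) is made up for illustration, not Cursor’s or Windsurf’s actual internals:

```python
# Hypothetical dispatcher: route lint fixes to a free fixer, and send
# everything else to a metered premium model call. Names are illustrative.
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str
    payload: dict

def fix_lint_locally(payload: dict) -> str:
    # Stand-in for a cheap/free lint-fixing model or a deterministic tool.
    return f"applied lint fixes to {payload['file']}"

def call_premium_model(call: ToolCall, credits: dict) -> str:
    credits["premium_tool_calls"] += 1  # only metered calls count
    return f"premium model handled '{call.name}'"

def dispatch(call: ToolCall, credits: dict) -> str:
    if call.name == "fix_lints":
        return fix_lint_locally(call.payload)  # free: no credit consumed
    return call_premium_model(call, credits)

credits = {"premium_tool_calls": 0}
print(dispatch(ToolCall("fix_lints", {"file": "app.py"}), credits))
print(dispatch(ToolCall("edit_file", {"file": "app.py"}), credits))
print(credits)  # {'premium_tool_calls': 1}
```

The appeal is the same as in Windsurf: routine mechanical fixes stop eating into the same budget as real model work.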
I would recommend Gemini, which has a larger context window, to give you a bird’s-eye view of your code base :)
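If you want to try that outside the editor, a rough sketch with the `google-generativeai` SDK could look like the following; the model name, the `src` directory, and the prompt are assumptions, and a very large codebase may still need trimming to fit even Gemini’s context window:

```python
# Sketch: feed a concatenated snapshot of the codebase into Gemini's large
# context window and ask for an architectural overview. Illustrative only.
import pathlib
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key supplied directly
model = genai.GenerativeModel("gemini-1.5-pro")

snapshot = []
for path in pathlib.Path("src").rglob("*.py"):
    snapshot.append(f"# FILE: {path}\n{path.read_text(errors='ignore')}")

prompt = (
    "Give me a bird's-eye view of this codebase: the main modules, how they "
    "interact, and any obvious architectural risks.\n\n" + "\n\n".join(snapshot)
)
print(model.generate_content(prompt).text)
```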
I won’t waste time and money using Max. I’m using alternative tools, which have been an eye-opener. I would test Cursor again if and when they start listening to their customers. The tool I use easily processed a 4,000-file codebase, maintained context awareness across a massive chat of about 45 prompts, and still followed my rules and best practices with zero hallucination. My jaw dropped when I tested it.