The real problem with Claude 3.7 Max and GPT-4.5

… is that they’re now the only agents worth coding with.

I’m unclear on how I should optimize my subscription given the upcharges involved. I get that 5¢ per tool call and $2 per GPT-4.5 agent call is just the reality for these high-context models, but where does that leave my base-level Cursor sub? Is there any reason to pay for 2,000 requests a month, or should I just go back to the $20 sub and hike up my pay-to-play allotment?
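For what it's worth, the trade-off is easy to eyeball with a back-of-the-envelope calculation. The per-call rates below are the ones quoted above; the call counts are made-up examples, and real Cursor pricing varies by plan and model:

```typescript
// Rough monthly usage-based spend, using the rates quoted above:
// $0.05 per tool call, $2.00 per GPT-4.5 agent call.
// (Illustrative only; check your plan's actual rates.)
const TOOL_CALL_USD = 0.05;
const AGENT_CALL_USD = 2.0;

function monthlySpend(toolCalls: number, agentCalls: number): number {
  return toolCalls * TOOL_CALL_USD + agentCalls * AGENT_CALL_USD;
}

// e.g. 200 tool calls and 15 GPT-4.5 agent calls in a month:
console.log(monthlySpend(200, 15)); // 40
```

If that number comes in well under the difference between the two subscription tiers, the cheaper sub plus usage-based billing wins.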

@danperks can you help clarify this for me?

Hey, the fixed request bundles are now deprecated in favour of the usage-based pricing, so I would recommend downgrading to 500 fast requests, and just setting your usage-pricing limit to wherever you feel comfortable having it.


Did it.

Life is interesting. Gemini 2.5 has been eye-opening, but deciding between Cursor and Windsurf now depends entirely on what I’m doing. I often have both open and swap between them like I do models. I’m waiting for Cursor to make context and memory management more transparent and explicit.

The differences really show when… have your devs try this, @danperks:

Open up a React project in Next.js, a common use case. On Next 15, try to set up a Tokens Studio pipeline to Tailwind v4 via Style Dictionary v4.

Every LLM struggles with this, because SD4 shipped multiple breaking changes after the training cutoff of the current models, but Cursor’s version of the agent seems to pollute the process with v3 code far more often than the others. This tells me a few things about how you are and aren’t managing context for the AI models.
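To make the v3 pollution concrete, here’s roughly where the v3 → v4 seam sits in a Style Dictionary build script. This is a sketch from memory of the v4 migration notes, not verified against the current docs; the paths and the transform name are made up, and it assumes `style-dictionary@4` is installed:

```typescript
// Style Dictionary v4 build sketch (v4 is ESM-only).
// The commented lines show the v3 idioms that agents keep emitting.
import StyleDictionary from 'style-dictionary';

// v3: const sd = StyleDictionary.extend({ ... });
// v4: .extend() is gone; use the constructor.
const sd = new StyleDictionary({
  source: ['tokens/**/*.json'], // e.g. a Tokens Studio export (hypothetical path)
  platforms: {
    css: {
      transformGroup: 'css',
      buildPath: 'src/styles/',
      files: [{ destination: 'tokens.css', format: 'css/variables' }],
    },
  },
});

// v3 custom transforms took { matcher, transformer };
// v4 renamed those hooks to { filter, transform }.
sd.registerTransform({
  name: 'size/px-to-rem', // hypothetical transform name
  type: 'value',
  filter: (token) => token.type === 'dimension',
  transform: (token) => `${parseFloat(token.value) / 16}rem`,
});

// v3: sd.buildAllPlatforms();  (synchronous)
// v4: builds are async.
await sd.buildAllPlatforms();
```

The generated `tokens.css` can then feed Tailwind v4’s CSS-first config (the `@theme` block) rather than a `tailwind.config.js`. Whenever an agent emits `.extend()` or `matcher`/`transformer`, that’s the v3 pollution I mean.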

What I can tell you is this: when Cursor rules and whatever vector-memory solution you all are doubtless cooking up can handle the scenario I proposed, a LOT of your complaints will stop. It’s a fantastic mess of edge cases that represents the bulk of people’s problems with the setup.

Appreciate the feedback here.

The name of the game has always been context. Initially, the jump from ChatGPT to Cursor was just adding the context of your codebase. Then we added the terminal, docs, and MCP, to start to bring in context outside of the code itself.

However, the next obvious hole does seem to be syntax and language versions. We’ve tried to help with this by adding linting and the web as context, but this only goes so far - the LLM can know something is wrong, but not necessarily how to fix it.

I can’t say we have a magic bullet here - I don’t think anyone will - but I’ll feed this back to the team as it does seem like one of the main roadblocks holding Cursor back!


Right, and while Cursor rules help, I haven’t seen any easy way to say:

“Pause this process”

Then open a sidebar chat where I discuss the current state of things with the LLM, we agree on some principles, it asks me clarifying questions, and then we restart the main thread.

This would amount to intelligent context management on a more interactive scale and could include things like attaching API documentation for the version of vitest and fast-check I’m using to avoid wasting time on deprecated code.

I know there’s no magic bullet but the UX pro & systems thinker in me just wants to dive in and solve things.