… is that they’re now the only agents worth coding with.
I’m unclear on how I should optimize my subscription given the upcharges involved. I get that 5¢ per tool call and 200¢ per 4.5 agent call is just the reality for these high-context models, but… where does that leave my base-level Cursor sub? Is there any reason to pay for 2000 requests a month, or should I just go back to the $20 sub and raise my pay-to-play allotment?
Hey, the fixed request bundles are now deprecated in favour of usage-based pricing, so I would recommend downgrading to 500 fast requests and setting your usage-based pricing limit to wherever you feel comfortable having it.
Life is interesting. Gemini 2.5 has been eye-opening, but deciding between Cursor and Windsurf now depends entirely on what I’m doing. I often have both open and swap between them the way I swap between models. I’m waiting for Cursor to make context and memory management more transparent and explicit.
The differences really show when… have your devs try this, @danperks:
Open up a React project in Next.js, a common use case. Run Next 15 and try to set up a Tokens Studio pipeline to Tailwind v4 via Style Dictionary v4.
Every LLM struggles with this because Style Dictionary v4 shipped multiple breaking changes after the training cutoff of the current models, but Cursor’s version of the agent seems to pollute the process with v3 code far more often than the others do. That tells me a few things about how you are and aren’t managing context for the AI models.
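To make the failure mode concrete, this is roughly the shape of the v4 build script the task calls for. Treat it as a minimal sketch, not a drop-in file: the `tokens/` and `build/css/` paths and the file name are my own assumptions, and a real pipeline would also register the Tokens Studio transforms package before building.

```ts
// build-tokens.mts — minimal Style Dictionary v4 build sketch (paths are illustrative)
//
// The v3 habit the models keep reaching for no longer works in v4:
//   const sd = require('style-dictionary').extend(config);
//   sd.buildAllPlatforms(); // CommonJS, .extend(), synchronous
//
// v4 is ESM-first, constructed with `new`, and the build is async.
import StyleDictionary from 'style-dictionary';

const sd = new StyleDictionary({
  // Tokens Studio JSON exports (a real setup would also wire in
  // @tokens-studio/sd-transforms here before building)
  source: ['tokens/**/*.json'],
  platforms: {
    css: {
      transformGroup: 'css',
      buildPath: 'build/css/',
      files: [
        {
          destination: 'variables.css',
          // Emits :root { --token: value; } custom properties,
          // which Tailwind v4 can consume from its CSS-first config
          format: 'css/variables',
        },
      ],
    },
  },
});

await sd.buildAllPlatforms(); // v4: the build must be awaited
```

The agent keeps handing back the commented-out v3 pattern instead, plus `matcher`/`transformer` keys in custom transforms where v4 expects `filter`/`transform`.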
What I can tell you is this: when Cursor rules and whatever vector memory solution you all are doubtless cooking up can handle the scenario I proposed, a LOT of the complaints you hear will stop. It’s a fantastic mess of edge cases that represents the bulk of people’s problems with the setup.
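For what it’s worth, the stopgap that gets me closest today is a project rule (an .mdc file under .cursor/rules/) pinning the v4 conventions so the agent stops reaching for v3. A rough sketch, where the globs and the exact wording are just my guesses at what’s worth pinning:

```
---
description: Style Dictionary v4 / Tailwind v4 conventions for the token pipeline
globs: ["build-tokens.*", "tokens/**"]
alwaysApply: false
---

- Style Dictionary is v4: ESM only, `new StyleDictionary(config)`, `await sd.buildAllPlatforms()`.
- Do not use the v3 API: no `StyleDictionary.extend()`, no synchronous builds.
- Custom transforms take `filter` and `transform` keys, not v3's `matcher` and `transformer`.
- Tailwind is v4: consume tokens as CSS variables in the CSS-first config, not via a `tailwind.config.js` theme extension.
```

The point of the globs is to only attach this when the pipeline files are actually in play rather than polluting every request, but it’s exactly the kind of thing that shouldn’t need hand-rolling per project.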