How about actually listing what those “tons of other options” are? So far, only GitHub Copilot and Claude Code seem to come anywhere close to Cursor’s (now deprecated) auto/unlimited usage …
You use claude-4-sonnet for everything? Of course it’s going to run out fast. 1.5M tokens can go quickly when it’s thinking about everything in the background.
I was researching and I believe I found the best alternative to Cursor today. I’m going with GitHub Copilot Pro+, which offers 1,500 premium requests for USD 39, plus USD 0.04 per additional premium request. We also have access to many simpler models for free.
It’s important to note that a premium request is essentially a prompt sent to the agent. Regardless of how many tool calls or tokens it uses, it counts as a single request. Depending on the model, it may be charged at 0.33x or up to 3x (for Opus 4.5).
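To make the billing model above concrete, here is a rough sketch of how the overage math works out, using only the figures from this post (1,500 included requests at USD 39/month, USD 0.04 per additional request, and per-model multipliers from 0.33x to 3x). The function name and the example multiplier values are illustrative assumptions, not an official price list.

```python
INCLUDED = 1500           # premium requests included in Pro+ (per the post)
OVERAGE_PRICE = 0.04      # USD per additional premium request (per the post)

def monthly_overage(prompts: int, multiplier: float = 1.0) -> float:
    """Cost beyond the USD 39 base, where each prompt consumes
    `multiplier` premium requests (e.g. 0.33 for cheaper models,
    3.0 for Opus 4.5). Hypothetical helper for illustration."""
    used = prompts * multiplier
    extra = max(0.0, used - INCLUDED)
    return extra * OVERAGE_PRICE

# 1,000 prompts to a 3x model = 3,000 premium requests,
# i.e. 1,500 over the included quota -> USD 60 in overage.
print(monthly_overage(1000, multiplier=3.0))  # 60.0
```

The key point the post makes is visible here: cost scales with prompt count and model multiplier, not with tokens or tool calls inside each prompt.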
Additionally, with a ChatGPT Plus subscription, you get access to Codex CLI, which is excellent for large tasks. In my opinion, this stack currently offers the best cost–benefit.
For $20 on Cursor, I get about 10 hours of heavy agent use per month (using Cursor’s Composer 1 model). For $20 on Claude Code, thanks to the usage reset every 5 hours, I get 80-90 hours of active use per month (same code base, heavy agent use, Anthropic models). That’s at least 8x the hours with Claude Code. Easy decision. Cursor subscription cancelled.
@Guilherme_L I appreciate you sharing your research. I’ve been Claude Coding for several months (other than this recent experiment with Cursor). I’ll try Codex CLI and GitHub Copilot Pro+
I tried GitHub Copilot a few months ago. Their harness is not good enough: the same model performed better in other coding agents than in Copilot. If their harness is good now, then it’s way cheaper than Cursor.
UPDATE
My stack is proving to be amazing.
I have far more quota and I’m getting things done for almost nothing.
The ChatGPT Plus subscription has plenty of quota for daily Codex use.
TIP: In Codex, use the gpt-5.2-codex model with HIGH or XHIGH effort (it’s very important to select the effort level). The results are similar to Opus 4.5.
So, we spent more than $1,000 with Cursor in 14 days.
Signed up for OpenAI and installed OpenAI Codex. Cut monthly spending by 85%+.
Goodbye Cursor, until you fix your pricing
Cursor just passes through API costs. I may be mistaken, but it sounds like you are comparing Cursor’s subscription plus additional on-demand costs (API billing) against a subscription to OpenAI, which, from my understanding, does not integrate into the Cursor IDE.
Did you install the OpenAI Codex extension in Cursor or in VS Code? If that works for you, then maybe you didn’t actually need the Cursor IDE all along? Glad you figured that out and stopped paying API costs for Codex through Cursor, when you could subscribe to OpenAI Codex and use it via the extension to get what you needed. Thanks for sharing your experience, but I think you may be comparing different products (the Cursor IDE vs. a CLI); it sounds like your team doesn’t need to work within Cursor. Good luck.
The main issue with Cursor is cache reads. Cache reads being 10x cheaper is worth nothing if Cursor uses 10x as many cached tokens.
Simple tasks using 1-2M tokens? That’s nonsense. (And no, it’s not because of MCPs or whatever. I disabled all of them.)
I don’t know why they started sending much more data as cached tokens to the LLM providers. The price per cached token is the same, but Cursor started spending a huge amount on unnecessary cache reads. Then the cost skyrockets.
When using Cursor, you need to pay attention to the model’s cache read price, not its output token price.
Claude Code seems to be much more efficient with the context window. Cursor just sends everything to the LLM.
I still have my Pro+ subscription, but I’m about to move to CC as soon as this billing cycle ends. I love Cursor, its UX, its model harness, but the cost simply can’t justify it.
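The arithmetic behind the cache-read complaint above can be sketched in a few lines: a 10x-cheaper per-token cache-read rate is cancelled out if the harness pushes 10x as many cached tokens per task. All rates and token counts here are made up for illustration, not any provider’s real pricing.

```python
INPUT_RATE = 3.00 / 1_000_000      # USD per uncached input token (assumed rate)
CACHE_READ_RATE = INPUT_RATE / 10  # cache reads 10x cheaper per token (assumed)

def task_cost(fresh_tokens: int, cached_tokens: int) -> float:
    """Input-side cost of one task; ignores output tokens for simplicity."""
    return fresh_tokens * INPUT_RATE + cached_tokens * CACHE_READ_RATE

# Lean harness: 100k fresh input tokens, no cache re-reads.
lean = task_cost(100_000, 0)
# Cache-heavy harness: same fresh tokens, plus 1M cached tokens re-read.
heavy = task_cost(100_000, 1_000_000)

print(lean, heavy)  # the cache-heavy run costs roughly twice as much here
```

The discount per token is real, but the bill is rate times volume, which is why the post argues you should watch a model’s cache-read price rather than its output-token price.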
Input tokens cost something.
Output tokens cost something else.
Cached tokens cost another value.
But over general usage, the three are roughly equal in cost proportion. So if you know the cost of one component (say, cached tokens), you can estimate the total query cost as roughly 3x that.
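The rule of thumb above can be sketched as a quick check. The per-component dollar amounts below are invented for illustration; the only assumption carried over from the post is that input, output, and cached-token spend tend to be roughly equal shares over time.

```python
def total_cost(input_usd: float, output_usd: float, cached_usd: float) -> float:
    """Exact total: sum of the three billing components."""
    return input_usd + output_usd + cached_usd

def estimate_from_cached(cached_usd: float) -> float:
    """The post's rule of thumb: if the three components are roughly
    equal in proportion, the total is about 3x any one of them."""
    return 3 * cached_usd

# Hypothetical per-query spend by component (made-up numbers):
actual = total_cost(0.35, 0.28, 0.30)
estimate = estimate_from_cached(0.30)  # ~0.90, close to the actual 0.93
```

The estimate is only as good as the equal-proportion assumption; a cache-heavy agent (like the Cursor behavior described earlier in the thread) would skew the shares and break the 3x shortcut.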
I have a related question. Can someone please explain why cache reads and writes are chargeable at all?