Unofficial Community Discussion to Give Feedback on Plan Rates

Since @T1000 asked that I make my own thread..

Given how much can be spent on Cursor in a month ($5,000+ if you go hard!), I am of the opinion that the highest tier should have both more guaranteed usage and a higher price. If I pay $200 a month and run out in 10 days, and the rest of the month costs me $1,000+, that seems like a broken pricing model to me. I would love to hear what other members of the community think (especially Ultra users hitting limits) so Cursor can hear our feedback.

4 Likes

@banth31 thanks for creating the thread; feedback focused on the current plans would be great.

1 Like

92% of my total usage is cache read. You tell me.

1 Like

Let’s accept: Cursor + Claude 4 is not presently viable.

I spent the last few days exploring alternatives. (Production codebase, own product with customers, ~35 hrs programming)

I explored

  1. Cursor + Claude 4 Sonnet
  2. Cursor + Auto only
  3. Github Copilot Plus
  4. Claude Code + Cursor
  5. Gemini Code Assist
  6. Warp Agentic Coding

I didn’t explore Kiro, Taie AI, or others.

My Findings

1. Cursor + Claude 4 Sonnet

  1. Cursor + Claude-4-Sonnet is ~2x the cost of using it through Claude Code with an Anthropic API key. It problem-solves AND applies code edits, but having it apply the code changes is costly: sometimes $1, sometimes $3, depending on how many times it does things like searching files and reading files.

  2. Claude-4-sonnet is massive overkill for edit_tool calls in Agent mode (the tool that applies the code change to your code). It’s analogous to driving a Ferrari to the grocery store at 8 mpg.

  3. If you ask the Agent a question like “Why is that border the color blue” when it’s loaded with a 100k chat context, expect to pay $0.40 for it to respond “It’s blue because of X”.

Conclusion: Once in Agent mode, never treat it like a chat. Minimize its ability to search files. Manually put all the files it must edit into the context at the start. Don’t do things like the RIPER-5 protocol. Minimize “turns” in the chat. NEVER switch models mid-chat: switching from claude-4-sonnet to claude-4-opus makes it repopulate the entire cache, which brings a huge expense.
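To put rough numbers on the cache point, here is a minimal sketch, assuming Anthropic’s published list prices per million tokens for Sonnet 4 and Opus 4 (input $3/$15, cache write $3.75/$18.75, cache read $0.30/$1.50, output $15/$75). These prices and token counts are my assumption, not figures from this thread; verify against current pricing.

```python
# Cost of one chat turn under assumed Anthropic list prices ($ per 1M tokens).
PRICES = {
    # model: (input, cache_write, cache_read, output)
    "claude-4-sonnet": (3.00, 3.75, 0.30, 15.00),
    "claude-4-opus":   (15.00, 18.75, 1.50, 75.00),
}

def turn_cost(model, fresh_input=0, cache_write=0, cache_read=0, output=0):
    """Dollar cost of a single turn, given token counts per category."""
    inp, cw, cr, out = PRICES[model]
    return (fresh_input * inp + cache_write * cw
            + cache_read * cr + output * out) / 1_000_000

ctx = 200_000  # a large, fully cached chat context (assumed size)

# Staying on Sonnet: the context is a cheap cache READ each turn.
print(turn_cost("claude-4-sonnet", cache_read=ctx, output=500))  # ≈ $0.07

# Switching to Opus mid-chat: the whole context must be re-WRITTEN into a
# new cache at Opus prices before the model can even answer.
print(turn_cost("claude-4-opus", cache_write=ctx, output=500))   # ≈ $3.79
```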

2. Cursor + Auto only

  1. Auto is free and is best for edit_tool, grep, and searching folders in Agent mode.

  2. Auto is idiotic for problem solving or code comprehension.

Conclusion: In Manual mode, give Gemini/Claude your implementation plan plus all the context files attached for the first “turn”, and ask it to write out code snippets of exactly what changes to make. THEN switch the model to Auto and ask Auto to apply the edits according to the plan. You can save massively on costs this way. If you ask claude-4-sonnet to apply the edits instead, you will see a $1–4 cost in the dashboard for that chat.
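A back-of-the-envelope sketch of why this helps, under assumed numbers (a 150k-token cached context, 12 tool calls, 8k output tokens, and the Sonnet 4 prices from the sketch above; all hypothetical). Each agent tool call is effectively another API round trip that re-reads the whole cached context:

```python
# Why "plan with a smart model, then let Auto apply" is cheaper: in Agent
# mode, every tool call (grep, read_file, edit_tool) is another API round
# trip that re-reads the whole cached context. Illustrative numbers only.
CACHE_READ_PER_M = 0.30   # $ per 1M cached input tokens (Sonnet 4, assumed)
OUTPUT_PER_M = 15.00      # $ per 1M output tokens (Sonnet 4, assumed)

ctx = 150_000       # cached chat context, tokens (assumed)
tool_calls = 12     # greps / file reads / edits the agent makes (assumed)
output = 8_000      # total tokens the model writes (assumed)

# Sonnet plans AND applies: every tool call re-reads the cached context.
sonnet_applies = ((tool_calls + 1) * ctx * CACHE_READ_PER_M
                  + output * OUTPUT_PER_M) / 1_000_000
print(f"Sonnet plans + applies: ~${sonnet_applies:.2f}")   # ≈ $0.70

# Sonnet writes the plan in ONE turn; Auto (free) does the tool calls.
plan_then_auto = (ctx * CACHE_READ_PER_M + output * OUTPUT_PER_M) / 1_000_000
print(f"Plan, then Auto applies: ~${plan_then_auto:.2f}")  # ≈ $0.17

# This even omits cache WRITES and fresh input, which is why real
# Sonnet-applies chats land in the $1-4 range reported above.
```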

3. Github Copilot Plus

A. It technically “can” do the things that Cursor does, but I found its UI awkward, and

B. The background agents are not as good as Cursor’s because you can’t tell them to do something mid-chat. You have to wait for them to finish their edits in a PR, and that ■■■■■. In Cursor you can tell it in the chat window to modify things before the PR is cut.

Conclusion: Several times during this experiment the Agent just failed mid-request or gave a 502 error. At those moments I just switched to Cursor+Auto or Claude Code. It feels like a copycat of Cursor. It’s not really reliable, and the UI and UX feel unloved, like they were made by people who don’t really care.

4. Claude Code + Cursor

A. Claude Code has that hard-to-use terminal/editor thing that pops up and moves windows around.
B. The Claude Code terminal can’t navigate the text input box as easily as Cursor.
C. With Claude Code, I haven’t found a good way to run multiple agents in parallel; in Cursor I’d open multiple chat tabs.
D. But with Claude Code, I have the least cost anxiety, as it always displays the cost and gives $5 expense warnings.

I think this is the best workflow: use Claude Code for the agentic stuff. Cursor has one advantage: you can run multiple agents in parallel in the chat window. I like to run 3 at a time and check their progress. Claude Code, AFAIK, doesn’t help you manage that. If you do run Claude Code agents in parallel, the git diffs all get messed up together; it’s hard to see what changed.

Conclusion: Claude Code’s terminal UI is garbage compared to Cursor’s agents, but at least it has guardrails on token usage and doesn’t have runaway expenses. It’s the only viable way to use Claude models for now.

5. Gemini Code Assist

A. Subbed for $20, and on the second agent task I ran, asking it to edit some in-app content, it said “Gemini call canceled because it breaches some content restrictions”. Honestly, such a disappointment. I forget how many times Gemini’s models have refused to write code for me because they encountered a bad word or something.

B. Gemini Code Assist’s VS Code integration is the worst of the bunch, behind Claude Code and Copilot. It’s a huge weakness.

Conclusion: Not a reliable alternative. If it unpredictably refuses to write code, it’s not something I can trust.

6. Warp Agentic Coding

A. It was a cool interface. But the moment I wanted to hand-edit something it wrote, I suddenly remembered I’m inside a terminal UI, and writing code in a terminal ■■■■■ compared to what Cursor and VS Code offer.

B. It was the same cost as Claude Code but less intelligent, so it’s not a real competitor.

Conclusion: Not a viable alternative

Overall Conclusion

Cursor is still the best product. Claude 4 is unusable with Cursor at the moment, until changes are made to the product. Cursor also has zero guardrails engineered to stop runaway spending, and that’s why they fell into this PR nightmare. But there is no better alternative. Github Copilot Plus comes close, but its agents fail more, and its autocomplete is weak compared to Cursor’s Tab. Github Copilot agents also lack Slack integration.

Anthropic models should only be used with Cursor via Claude Code running inside Cursor’s IDE. They have no guardrails and lead to horrible user experiences when used freely through the Agent chat mode (which is the default).

My conclusion: Cursor is the leading product still - I don’t see a viable alternative.

My suggestions to Cursor’s product managers

  1. Please, for the love of god, don’t let claude-4-sonnet do the greps and file reads; have it switch mid-chat to Auto for editing files or running CLI commands. With this hybrid approach you could cut users’ expenses by some 90%: have the smart models do only the most relevant work, and the stupid-cheap models do the grunt work.

  2. Implement a working Grok 4 (xAI) integration. It’s a differentiator from Github Copilot, as they don’t have Grok and likely won’t.

  3. Fix the issue where, when you use Cursor in a workspace instead of a single repo, the agent always messes up the path for grep, causing every grep command to be mis-pathed by one folder. E.g. Could not find file 'Child/X/UY/Z/W/G/my_file.swift' in the workspace, when it should actually be grepping the path Root/Child/X/UY/Z/W/G/my_file.swift. This happens constantly, and I bet it has a cost.

  4. When a user submits a “bug report” through the UI, send an email follow-up so we know that you actually received it and care about us submitting reports.

  5. Each Cursor agent chat window should display a “$5 of expenses incurred already” warning, in increments of $5, just like Claude Code does. (A minimal sketch follows after this list.)

  6. Cursor’s communication with users could be massively improved with more empathy. We initially came to the forum seeking answers and were left enraged by the responses from dan and t1000, which felt as if they never directly acknowledged what we wrote or the questions we asked. Being told upfront that we’re using it wrong, without any investigation or indication of investigation on their side, is a recipe for bitterness.
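As promised in suggestion 5, a minimal sketch of the idea: warn each time the chat’s cumulative spend crosses another $5 boundary, the way Claude Code does. The function and message wording here are hypothetical, not Cursor internals.

```python
# Fire a warning for every $5 boundary a turn's spend crosses,
# mirroring Claude Code's incremental expense warnings.
def check_spend_warning(prev_total: float, new_total: float,
                        step: float = 5.0) -> list[str]:
    """Return one warning per $`step` boundary crossed by this turn."""
    warnings = []
    boundary = (int(prev_total // step) + 1) * step
    while new_total >= boundary:
        warnings.append(f"${boundary:.0f} of expenses incurred in this chat")
        boundary += step
    return warnings

# A turn that takes the chat from $4.20 to $10.80 fires two warnings:
print(check_spend_warning(4.20, 10.80))
# ['$5 of expenses incurred in this chat', '$10 of expenses incurred in this chat']
```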


@T1000

In closing, Cursor is still the best viable option. There aren’t alternatives to turn to.

But Cursor could be the greatest of all time, and it should strive for that. Doing so requires becoming better, and the first step to becoming better is having the courage to acknowledge failure.

9 Likes

@mokhir thank you also for the detailed write-up; we really appreciate your professional approach.

The team is reviewing and preparing more information about token usage. However, if you could share a Request ID (with privacy disabled) for a case where Sonnet 4 costs double what it does through Claude Code, they could investigate further.

Yes, there are alternatives. I’m using Windsurf at work (my boss cancelled Cursor for all of us, and we got Windsurf) and it’s pretty good, better and more transparent. At home I’m using Kiro, and that is also very promising, probably a Cursor replacement.

2 Likes

Experiment overview

I created two git branches from the same parent commit.

In git branch A, I fed two turns of chat messages to Cursor in Agent mode with Claude-4-sonnet.
In git branch B, I fed the same two turns to Claude Code.

I compared the before/after costs of git branch A vs. git branch B for the same feature implementation.
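For anyone following the raw logs below: per-turn cost is the difference between consecutive cumulative API-cost readings, and each branch’s total is the final reading minus the starting baseline. A tiny sketch of that bookkeeping, using the readings logged in this post:

```python
# Per-turn cost = difference between consecutive cumulative dashboard
# readings; branch total = final reading minus the starting baseline.
# Readings are the cumulative API costs logged below.
cursor = [56.71, 57.67, 58.05, 58.73]  # baseline, after turns 1+2, turn 3, turn 4
claude = [18.05, 18.27, 18.80]         # baseline, after turns 1+2, turn 3

def turn_deltas(readings):
    return [round(b - a, 2) for a, b in zip(readings, readings[1:])]

print(turn_deltas(cursor))  # [0.96, 0.38, 0.68]
print(turn_deltas(claude))  # [0.22, 0.53]

cursor_total = round(cursor[-1] - cursor[0], 2)  # 2.02
claude_total = round(claude[-1] - claude[0], 2)  # 0.75
print(round(cursor_total / claude_total, 2))     # 2.69 -> Cursor ~2.7x the cost
```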

Start Time: 07-22-2025: 6:22pm KST

  • Privacy Mode: Data Sharing Enabled

End Time: 07-22-2025:

Cursor current status:

@Start Time (07-22-2025: 6:22pm KST)

claude-4-sonnet-thinking: 
    - input (w/ cache write): 4,310,746
    - input (w/o cache write): 537,388
    - cache read: 26,186,758	
    - output: 376,728	
    - total tokens: 31,411,620	
    - API Cost: $31.79

Total: 
    - API Cost: $56.71

Cursor Experiment - Git Branch A (Request id: turn3: 7a2eadbb-4fd6-4a61-97de-a083b5ad6e30, turn4: 1331f1b9-4d73-4fbb-bf6b-3a5bea861efd)

Exact same prompt given to Cursor + Claude-4-sonnet and Claude Code

<INSTRUCTIONS_A>

→ Turn 1 cost: claude-4-sonnet-thinking Input (w/ Cache Write): 67,282, Input (w/o Cache Write): 45, Cache Read: 138,524, Output: 2,289, Included

2nd-turn

<INSTRUCTIONS_B>

→ Turn 2 cost: claude-4-sonnet-thinking Input (w/ Cache Write): 54,133, Input (w/o Cache Write): 6,578, Cache Read: 851,303, Output: 10,526, Included

@End time (6:47pm KST)

claude-4-sonnet-thinking: 
    - input (w/ cache write): 4,432,161
    - input (w/o cache write): 544,011
    - cache read: 27,176,585
    - output: 389,543
    - total tokens: 32,542,300
    - API Cost: $32.75

Total: 
    - API Cost: $57.67

Turn 1 & 2 cost in dollars: $57.67-$56.71 = $0.96


There was a bug, so I did a 3rd turn with Cursor Agent + Claude.
3rd-turn

<INSTRUCTIONS_C>

→ Turn 3 cost: claude-4-sonnet-thinking Input (w/ Cache Write): 89,162, Input (w/o Cache Write): 181, Cache Read: 88,179, Output: 893, Included

Turn 3 @End time (6:47pm KST)

claude-4-sonnet-thinking: 
    - input (w/ cache write): 4,521,323
    - input (w/o cache write): 544,192
    - cache read: 27,264,764
    - output: 390,436
    - total tokens: 32,720,715
    - API Cost: $33.13

Total: 
    - API Cost: $58.05	

Request id: 7a2eadbb-4fd6-4a61-97de-a083b5ad6e30

Turn 3 cost in dollars: $58.05-$57.67 = $0.38

Another bug. Had to do a 4th turn.

<INSTRUCTIONS_D>

→ Turn 4 cost: claude-4-sonnet-thinking Input (w/ Cache Write): 95,148, Input (w/o Cache Write): 265, Cache Read: 826,582, Output: 4,889, Included

Turn 4 @End time (6:59pm KST)

claude-4-sonnet-thinking: 
    - input (w/ cache write): 4,616,471
    - input (w/o cache write): 544,457
    - cache read: 28,091,346
    - output: 395,325
    - total tokens: 33,647,599
    - API Cost: $33.81

Total: 
    - API Cost: $58.73

Turn 4 cost in dollars: $58.73-$58.05 = $0.68

Cursor Git Branch A total cost: $58.73 - $56.71 = $2.02

Claude Code Experiment - Git Branch B

Claude Code API Key

@Start Time

API Key Cost: $18.05

@End Time

API Key Cost: $18.05

Exact same prompt given to Cursor + Claude-4-sonnet and Claude Code

<IDENTICAL_TO_CURSOR_EXPERIMENT_INSTRUCTIONS_A>

2nd-turn for additional clarification

<IDENTICAL_TO_CURSOR_EXPERIMENT_INSTRUCTIONS_B>

@End Time

API Key Cost: $18.27

Turn 1+2 cost in dollars: $18.27-$18.05 = $0.22

There was a bug, so I did a 3rd turn with Claude Code.
3rd-turn

<SMALL_CLARIFICATION_INSTRUCTION_C>

@End Time (7:18pm)

API Key Cost: $18.80

Turn 3 cost in dollars: $18.80 - $18.27 = $0.53

No need for turn 4 because Claude did the implementation successfully.

Claude Code Git Branch B total cost: $18.80 - $18.05 = $0.75


Conclusion

  1. Cursor Experiment - Git Branch A
  • (Request id: turn3: 7a2eadbb-4fd6-4a61-97de-a083b5ad6e30, turn4: 1331f1b9-4d73-4fbb-bf6b-3a5bea861efd)
    Total cost: $58.73 - $56.71 = $2.02
  2. Claude Code Experiment - Git Branch B
    Total cost: $18.80 - $18.05 = $0.75

The exact same feature, with the same chat instructions and the same files attached in context, cost $2.02 to complete in Cursor and $0.75 with Claude Code on an API key.

That makes Cursor roughly 2.7Ɨ the cost ($2.02 / $0.75 ≈ 269%).

In both cases I tested the code in the iOS simulator; the UI is nearly identical and the functionality the same.

@T1000
I hope this is sufficient. I’m almost at my account limit on the Pro+ subscription and cannot do any more experiments for a while.

7 Likes

Here’s the proof of 4-turns with Cursor + claude-4-sonnet

Here’s the 3-turns with Claude Code

I don’t know what else I can do to convince you. These are identical inputs and commands given to Cursor + Claude-4-sonnet and to Claude Code: a side-by-side branching experiment from the same parent commit.

And I had every cost-incurring feature turned off in Cursor (Memories, Web Search), everything to reduce cost.

Have your product engineers replicate this experiment.

6 Likes

There’s something pretty wild here that Cursor needs a ‘big-up’ for… the fact that you can analyse costs in such a granular manner is proper excellent.

And believe me, I’ve not gone easy on Cursor; I even had a post removed because of it. Go look at my excoriating post about degraded performance on Cursor… but when something is good, you just have to say so.

Claude Max at $200/mo with Claude Code, frfr

@mokhir Thank you so much for testing this out in detail and sharing the info. I will pass it to the Cursor team for comparison and review. Sincerely appreciate your effort! This is very good info; let’s see where they take it from here.

1 Like

This might be off-topic, but why not use the Pro subscription for Claude Code? The API billing for Claude Code is still expensive.

1 Like

The only viable configuration I can make work is using my personal Gemini key. It’s not nearly as smart (it feels like Cursor on Haiku), but it’s better than whatever the hell Auto is lol

Thank you so much. This is exactly what I needed. I even blocked the last payment to Cursor: it simply auto-updated, and I continued working as usual. Suddenly it showed me messages I’d never seen before, stating I’d hit my limit by a certain date at the current rate, which was 2 weeks away. Then, 3 days later, I’d hit that limit. I blocked the payment for extra usage and didn’t bother to create a ticket yet, as there would be a lot of back-and-forth which usually results in nothing. But it was obvious something was seriously wrong. Again, very much obliged, sir/madam 🙋

I used a Claude API key in order to run a valid experiment for these Cursor guys.

In practice, you should 100% use the Claude Pro sub, as it has daily rate limits, resets, and I’m sure other optimizations.

I do RIPER-5 with Gemini 2.5 Pro, and I ask it to output all its code snippets in the chat window along with the exact file paths to apply them at. Usually $0.01. Then I switch to Auto and ask Auto to do EXECUTE MODE.

If you ask Gemini to APPLY the code edits itself, it can cost up to $0.20–0.40 with all the tool calls.