I’ve been a Cursor user for over a year and rely on it heavily for real development work — refactors, feature changes, and agent-driven workflows. I wanted to share some honest feedback after the recent pricing model changes.
Before the switch to the token/API-based model, I was doing more development in Cursor than I am now and rarely hit limits. In the last month, however, I’ve hit usage limits multiple times, which has made me rethink how sustainable Cursor is for power users.
I understand the shift away from request-based pricing and that all models now count toward usage. What’s becoming difficult is how this pricing interacts with real-world agent workflows.
Main concern: “Done” doesn’t always mean done
A common pattern I’m running into:
- I give the agent clear, specific requirements
- The agent reports that the task is complete
- I review or test the result
- Parts of the requested functionality are missing, partially implemented, or misunderstood
- I go back to the agent to correct or finish the work
This often repeats multiple times for a single task.
Each iteration consumes paid tokens, even though the follow-up work is required only because the initial response was incomplete. Over time, this adds up quickly — especially when working on non-trivial features or larger codebases.
As a user, it starts to feel like I’m paying not just for productivity, but also for agent misunderstandings and partial completions.
Why this matters more under the new pricing model
- Agent-based workflows are inherently iterative
- "Almost done" responses are common with complex tasks
- Power users who rely on agents daily burn through usage much faster
- Monthly cost becomes difficult to predict unless workflows are artificially constrained
Open questions / suggestions:
- Are there plans to improve agent completion accuracy so fewer follow-up iterations are needed?
- Could there be pricing considerations for agent retries caused by incomplete task execution?
- Are power-user tiers or usage models being considered that better match iterative development workflows?
I’m sharing this because I genuinely like Cursor and want to continue using it. Right now, though, the cost-to-value balance feels harder to justify for serious, daily development work.
Curious to hear:
- Whether you're seeing the same "done but not fully done" pattern
- How you're adapting your workflows to manage usage
- Any guidance from the Cursor team on how this will evolve
Thanks for reading, and hoping for a constructive discussion.
