Pricing model change + agent “done but incomplete” iterations are costly for power users

I’ve been a Cursor user for over a year and rely on it heavily for real development work — refactors, feature changes, and agent-driven workflows. I wanted to share some honest feedback after the recent pricing model changes.

Before the switch to the token/API-based model, I was doing more development in Cursor than I am now and rarely hit limits. In the last month, however, I’ve hit usage limits multiple times, which has made me rethink how sustainable Cursor is for power users.

I understand the shift away from request-based pricing and that all models now count toward usage. What’s becoming difficult is how this pricing interacts with real-world agent workflows.

Main concern: “Done” doesn’t always mean done

A common pattern I’m running into:

  • I give the agent clear, specific requirements

  • The agent reports that the task is complete

  • I review or test the result

  • Parts of the requested functionality are missing, partially implemented, or misunderstood

  • I go back to the agent to correct or finish the work

This often repeats multiple times for a single task.

Each iteration consumes paid tokens, even though the follow-up work is required only because the initial response was incomplete. Over time, this adds up quickly — especially when working on non-trivial features or larger codebases.

As a user, it starts to feel like I’m paying not just for productivity, but also for agent misunderstandings and partial completions.

Why this matters more under the new pricing model

  • Agent-based workflows are inherently iterative

  • “Almost done” responses are common with complex tasks

  • Power users who rely on agents daily burn through usage much faster

  • Monthly cost becomes difficult to predict unless workflows are artificially constrained

Open questions / suggestions:

  • Are there plans to improve agent completion accuracy so fewer follow-up iterations are needed?

  • Could there be pricing considerations for agent retries caused by incomplete task execution?

  • Are power-user tiers or usage models being considered that better match iterative development workflows?

I’m sharing this because I genuinely like Cursor and want to continue using it. Right now, though, the cost-to-value balance feels harder to justify for serious, daily development work.

Curious to hear:

  • Whether you're seeing the same "done but not fully done" pattern

  • How you're adapting your workflows

  • Any guidance from the Cursor team on how this will evolve

Thanks for reading, and hoping for a constructive discussion.

Hey, thanks for the detailed feedback. The “done but not fully done” pattern is definitely frustrating for power users.

The team is working on improving the agent’s accuracy. It’d be really helpful if you could share a few concrete examples of these incomplete iterations (prompts + outputs). I’ll pass them to the team for analysis.

A similar discussion is here: Why the push for Agentic when models can barely follow a single simple instruction?

@deanrie Thanks for the response — appreciate the willingness to dig into this.

I’ve attached an example screenshot here to illustrate what I meant by the “done but not fully done / misunderstood intent” pattern.

Context of this example:
I asked the agent to review its existing implementation plan because there had been updates made after the plan was generated. The intent was explicitly to validate and re-review the plan — not to implement anything yet.

What happened instead:

  • The agent interpreted “re-review the plan” as permission to proceed with implementation

  • It made multiple code changes I did not ask for

  • Some of those changes introduced new issues

  • I had to fully revert the updates

This is a recurring pattern I’m seeing:

  • I ask for analysis / review / confirmation

  • The agent moves directly into execution

  • Execution is partial or misaligned with intent

  • I then have to iterate multiple times to course-correct

Each of those iterations consumes usage, even though the follow-ups are only required because the original request wasn’t followed precisely.

To be clear, this isn’t about a single mistake — I can provide many more examples across different tasks.

I’m happy to:

  • Share more concrete examples (prompts + outputs)

  • Walk through them live if that’s easier

  • Hop on a call and demo these cases end-to-end

I want Cursor to succeed, but this gap between intent and agent action, combined with token-based pricing, is where the frustration really shows up for power users.

Cursor, as of now, is just a total letdown. If I use Sonnet or the other models, it takes 3-4 days or less to run out of those credits, and even Auto mode is now limited? I'm having to use Grok because it's free, since Auto is gone too. Cursor is not what it used to be.