Cursor is costing me money without delivering proportional value

I am a Pro user and occasionally use Opus for specific tasks. This month I have used it very little because the usage costs are difficult to justify. Recently, however, I encountered two separate issues that highlighted a serious problem.

In the first case, Opus resolved the issue almost immediately. Despite that, it continued running browser tests and trying alternative solutions as if the problem were still unresolved. I verified the project locally and confirmed that everything was already working, then immediately shared a screenshot as proof. Only after that did Opus stop testing. This unnecessary activity cost me around $6 for an issue that was already solved.

Today, the same pattern occurred again. Opus quickly identified a duplicated code segment and effectively fixed the issue. Nevertheless, it continued running browser tests repeatedly. I checked the project myself and confirmed that the problem was already resolved, yet Cursor kept “working on it” for no reason, generating additional charges. Once again, a simple issue ended up costing at least $6.

When I explicitly asked Opus why it kept trying to solve a problem that was already fixed, it acknowledged that I was right. That response makes the situation even more frustrating.

From a user perspective, this behavior feels misleading. The system continues expensive actions even after the task is effectively completed, and the user bears the cost. At a minimum, this points to poor cost control and inadequate safeguards. As it stands, it creates the impression that unnecessary actions are allowed to continue at the user’s expense.

This is not acceptable for a paid Pro service and seriously undermines trust in Cursor's pricing model and reliability. Has anyone else noticed this?

Have you configured User Rules before using the two most expensive models?

Is the issue that I did not set up User Rules, or that Opus kept running useless browser tests and consuming credits when it shouldn't have?

Can you share your prompt or conversation history? Other than that, I think Opus is too expensive for a $20 Pro user. I don't recommend using Cursor + Opus on the $20 plan, since there are better-value models for $20 Pro users, such as:

  • Gemini 3 Flash
  • GPT-5.2
  • Gemini 3 Pro

or other models. If you want to use Opus, it's better to use it on the Pro+ plan.

Hey, thanks for the report. I understand your concern. The agent really should stop once the task is done.

This is a known issue, and the team is working on improving how the agent detects when a task is finished. Similar cases have been discussed in other threads on this forum.

Temporary workarounds:

  1. Use the Stop button actively. As soon as you see the task is done, click Stop. This should prevent extra costs.

  2. Set up User Rules (Cursor Settings > Rules). Add a rule that clearly limits the number of iterations or blocks extra verification (a paste-ready sample follows after this list). For example:

    - Stop immediately after the task is done
    - Don't rerun browser tests unless I explicitly ask
    - Max 3 iterations per task
  3. Use more specific prompts. Clearly say you want one quick change without extensive testing.
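
As a minimal sketch, assuming the free-form rule format in Cursor Settings > Rules, you could paste something like the following block. The exact wording is illustrative, not required; adjust it to your workflow:

    Stop immediately after the task is done; do not keep verifying on your own.
    Do not rerun browser tests unless I explicitly ask for them.
    Limit yourself to a maximum of 3 iterations per task.
    If you believe the problem is already solved, say so and stop instead of testing further.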

Could you please share:

  • Request IDs for the problematic sessions (Chat context menu > Copy Request ID)
  • An example prompt that triggers this behavior
  • Whether you have any User Rules set up

This will help the team fix it faster.