Fair Billing Request, Don’t Count AI Mistakes Against Our Balance

Hello Cursor Team & Community :waving_hand:,

First, I want to say how much I appreciate Cursor AI—the editing tools are impressive and the potential is huge. That’s why I’m happy to upgrade or refill my billing balance to continue using the platform.

However, there’s a recurring issue I’m sure many of us face: roughly half the time, the AI response misses the mark. It either misunderstands the prompt or delivers something we have to ask for again, rephrase, or correct, sometimes several times over. This drains not only our billing balance but also our time and energy.

It’s understandable that Cursor charges for usage, but it feels unfair to pay full price when the AI doesn’t deliver a usable result. What if Cursor could:

  • Detect when the AI response clearly didn’t answer the question or satisfy the user
  • Automatically not count these “failed attempts” against our balance
  • Or allow us to flag the response as “unhelpful” and reset the usage for that prompt

That way, we’re only paying for successful, quality assistance, which is exactly what we’re here for. I’m confident this would improve satisfaction across the board—users would feel more confident investing in usage, and Cursor would earn trust and stronger retention by focusing on quality outcomes.

I believe many in the community share this sentiment. We want Cursor to thrive—but we also want assurance that we’re paying for results that work.

Thanks for listening and considering this. I know Cursor AI can be a game-changer—let’s make sure that billing supports our success, not penalizes honest attempts!

Best regards,

Hi @beshoo, and thank you for the detailed post. The Cursor team is reviewing these reports and considering improvements.

As the models improve, errors should also become less and less frequent.

@condor, thanks for acknowledging the feedback—but I strongly disagree.

:stop_sign: While models may “get better over time,” our balances are bleeding now, and that’s unacceptable. We’re being charged for every iteration, and almost 50% of our prompts need reframing, re-asking, or correction before we get usable output.

This isn’t speculation—multiple users have reported the same experience on this forum. One even hit frequent “stopped” errors that still deducted credits every time.

Another user pointed out how “linter error tool calls” in Claude 3.7 MAX racked up costs on failed attempts.

These charges happen before the model has a real chance to succeed.

So telling us to just wait for model improvements is like telling us to hold onto a burning match. Our balances are being consumed right now, on failures—and that erodes our trust and confidence in the platform.

We deserve a fair billing mechanism that:

  1. Exempts us from charges when responses clearly fail or don’t address the prompt.
  2. Provides an option to flag unused or unhelpful responses and reset the balance deduction.
  3. Offers transparency: shows right in the dashboard whether a request errored and was not billed.

This isn’t just about complaints—it’s about holding Cursor to its promise of quality AI-assisted editing. We’re willing to pay—but only for successful results. And I know the wider community stands with us on this.

Please don’t ask for our patience—act on our concerns now. Thanks.

For billing issues, please contact [email protected].