[Bug] Persistent "resource_exhausted" False Positive on Ubuntu 24.04

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Context:
I am currently in a production environment with tight deadlines. I have had to switch to a personal OpenAI API key to continue working, but I would like my account state fixed so I can return to using my included credits.

Ironically, I was considering moving to Cursor Pro because, up until a few days ago, I was able to use Cursor AI chat without issue. This problem makes me reconsider that decision.

Environment:

  • OS: Ubuntu 24.04 LTS
  • Cursor Version: 2.2.43
  • Installation Type: apt install / .deb

Description:

My built-in Cursor AI chat has stopped responding entirely. When a prompt is sent, the UI remains blank or returns immediately without a response. The Developer Console confirms a ConnectError: [resource_exhausted] error.

However, my dashboard shows that I have not hit my limits. I am currently on a Free plan with approximately $20 in credit and only $3.00 in actual usage.

Steps Taken (To avoid redundant troubleshooting):

  • Account Refresh: Logged out and back into the IDE — no change.
  • Protocol: Disabled HTTP/2 in settings.
  • System: Rebooted the machine and restarted Cursor multiple times — for those of you in IT :wink:
  • Networking: Verified api2.cursor.sh is reachable.
  • Diagnostics: Verified connections using Cursor’s built-in network diagnostics.
  • State Reset: Cleared ~/.config/Cursor/User/globalStorage/state.vscdb and let it recreate — no change.
  • Isolation Test: External AI extensions (Claude) work perfectly in the same environment, indicating the issue is specific to Cursor’s internal API handler.
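For anyone hitting the same error, the reachability check and state reset above can be sketched as a small shell snippet. This is a sketch, not an official procedure: the state DB path assumes a default .deb install on Linux, and the snippet backs the file up before removing it so nothing is lost if this turns out not to help.

```shell
#!/usr/bin/env sh
# Reset Cursor's workspace state DB (Cursor recreates it on next launch).
# The default path below assumes a standard .deb install on Linux.
reset_cursor_state() {
  state="${1:-$HOME/.config/Cursor/User/globalStorage/state.vscdb}"
  if [ -f "$state" ]; then
    cp "$state" "$state.bak"   # keep a backup in case you need to restore it
    rm "$state"
    echo "removed $state (backup at $state.bak)"
  else
    echo "no state DB at $state"
  fi
}

# Quick reachability check: any HTTP status code back means DNS and TLS work.
curl -sS -o /dev/null -w "api2.cursor.sh -> HTTP %{http_code}\n" \
  https://api2.cursor.sh || echo "api2.cursor.sh unreachable"

reset_cursor_state
```

In my case both checks passed (endpoint reachable, state DB recreated cleanly), which is what pointed me at a server-side account issue rather than a local one.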

Log Trace / Request ID:

(anonymous) @ workbench.desktop.main.js:10010
workbench.desktop.main.js:13074 [composer] Error in AI response ConnectError: [resource_exhausted] Error
    at szc.$endAiConnectTransportReportError (workbench.desktop.main.js:12375:456006)
    at xBo._doInvokeHandler (workbench.desktop.main.js:13027:22831)
    at xBo._invokeHandler (workbench.desktop.main.js:13027:22573)
    at xBo._receiveRequest (workbench.desktop.main.js:13027:21335)
    at xBo._receiveOneMessage (workbench.desktop.main.js:13027:20152)
    at zLt.value (workbench.desktop.main.js:13027:18244)
    at Ee._deliver (workbench.desktop.main.js:49:2962)
    at Ee.fire (workbench.desktop.main.js:49:3283)
    at Tvt.fire (workbench.desktop.main.js:12360:12156)
    at MessagePort.<anonymous> (workbench.desktop.main.js:15027:18433) {arch: 'x64', platform: 'linux', channel: 'stable', client_version: '2.2.43', requestId: '97379bad-6f59-4ed0-ad67-a8f76874e03d', …}

Request ID: 97379bad-6f59-4ed0-ad67-a8f76874e03d

Steps to Reproduce

  1. Start Cursor.
  2. Type anything into the Cursor AI chat window.
  3. Select any model (including Auto) and any chat type (Agent, Ask, etc.).

Result: nothing. No error message in the UI; the console logs show a false-positive resource_exhausted error.

Expected Behavior

Well… I expect Cursor AI to work.

At the barest minimum, I expect some kind of error message, hopefully an informative one.

Operating System

Linux

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.2.43
VSCode Version: 1.105.1
Commit: 32cfbe848b35d9eb320980195985450f244b3030
Date: 2025-12-19T06:06:44.644Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Linux x64 6.14.0-37-generic

For AI issues: which model did you use?

ALL MODELS including Auto

For AI issues: add Request ID with privacy disabled

97379bad-6f59-4ed0-ad67-a8f76874e03d

Additional Information

Note: my chat prompt was “ping”.

Does this stop you from using Cursor

Yes, at least the AI aspects (which are the only reason to use Cursor instead of VS Code).

Hey, thanks for the report. I can see you’ve already tried all the standard solutions.

This looks like a server-side issue with your account. The resource_exhausted error shouldn’t happen if you have $20 in credits. I’ll pass Request ID 97379bad-6f59-4ed0-ad67-a8f76874e03d to the team to investigate.

As a temporary workaround, use your own OpenAI API key (as you’re already doing).

Thank you for your quick response. I really appreciate your help.

To be clear, I love Cursor… when it works :wink:


We need a bit more information.

Could you please share a screenshot of your Cursor dashboard/settings page where you see $20 in credits and $3.00 in usage? Specifically:

  1. Go to https://cursor.com/settings (make sure you’re logged in)
  2. Take a screenshot of the Billing/Usage section showing your credit balance and usage

This will help us understand the difference between what you’re seeing and what our system shows.

Here you go. I’m confused as to why it indicates 415k tokens on the 22nd… I haven’t been able to get a response from Cursor since before that time. Regardless, you’ll see that my usage is very far below my limit.

Ironically I can’t find that page I was referring to… it’s possible I conflated Cursor with Claude (as I use both), but you can see my usage is quite low and I should not have hit the free limit.

In addition, there should have been some kind of message. Even an incorrect “you’ve hit your limit” error in the chat window would have saved me a bunch of time (tracking down how to find logs, etc.). When I submitted anything into the chat window (new chat), it failed silently.

Sorry for repeat messages… I checked my chat history and I had indeed used a decent number of tokens on the 22nd… so that number is correct. My comment on getting some kind of error message still stands.

I also find it ironic that Cursor allots a certain number of chat completions based on a fairly complicated formula (from a user’s perspective), yet offers no way that I can find to easily see where I stand relative to those limits. The limits are expressed in chat completions… but the usage page shows tokens.

Is there something I’ve missed? Some dashboard that clearly shows tab completions, slow-model completions, and Pro-level completions with faster models?

Thanks for the extra info. It looks like you’ve hit your actual usage limit. If you need more usage, you’ll need to upgrade to Pro.

Any chance we can get the feedback to your team regarding the “silent failures” and the lack of clarity on usage? On that latter point, if I’ve missed some kind of dashboard that shows limits and usage please point me in the right direction. Having these two things will save you (and your support team) a lot of time, I imagine.

Thanks for the feedback. I shared your notes with the team about silent failures and the need to improve the UI for limit notifications. This is a known issue, and the team is working on improvements.