When I use OpenRouter as the provider in Cursor, requests start normally and the model responds for about 10–20 seconds. Then Cursor shows a “Provider Error” message and stops responding. This happens repeatedly, so I cannot complete longer prompts or multi-step tasks.
Steps to Reproduce
1. Open Cursor IDE
2. Configure the AI provider to OpenRouter
3. Start an AI Chat/Agent request (any prompt)
4. Wait 10–20 seconds while it is generating a response
5. Cursor shows "Provider Error" and the response stops
Expected Behavior
The response should continue until completion without disconnecting or failing mid-generation.
For AI issues: add Request ID with privacy disabled
Request ID: 8e9b9bc0-a944-4645-a5a6-d7127689ab5f
[resource_exhausted] Error
LTe: [resource_exhausted] Error
at kmf (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9095:38337)
at Cmf (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9095:37240)
at $mf (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9096:4395)
at ova.run (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9096:8170)
at async qyt.resume (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:34190:60450)
at async Wpc.streamFromAgentBackend (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:34239:7695)
at async Wpc.getAgentStreamResponse (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:34239:8436)
at async FTe.submitChatMaybeAbortCurrent (vscode-file://vscode-app/c:/Program%20Files/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9170:14575)
Additional Information
Issue is consistent and reproducible.
Happens across different prompts.
If there are logs I can provide (Cursor logs / console), tell me where to export them and I will attach them.
I'm having the exact same problem. And I'm not using OpenRouter, I'm using the Claude 4.5 that comes with Cursor.
I can't send a single message…
Version 2.4.21 worked better than the 2.4.22 update.
Provider Error
Request ID: a0141157-2bd5-4cb6-a472-633712489902
{"error":57,"details":{"title":"Provider Error","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","isRetryable":true,"additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":true}
[resource_exhausted] Error
LTe: [resource_exhausted] Error
at Gmf (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9095:38348)
at Hmf (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9095:37251)
at rpf (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9096:4395)
at fva.run (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9096:8170)
at async Hyt.runAgentLoop (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:34196:57047)
at async Zpc.streamFromAgentBackend (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:34245:7695)
at async Zpc.getAgentStreamResponse (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:34245:8436)
at async FTe.submitChatMaybeAbortCurrent (vscode-file://vscode-app/c:/Users/jello/AppData/Local/Programs/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:9170:14575)
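For what it's worth, the error payload itself advertises a retry path (`"isRetryable": true`). Until this is fixed, a client could honor that flag with exponential backoff. A minimal sketch, assuming only the field names visible in the payload above (the helper names are mine, not Cursor's):

```python
import json


def should_retry(raw: str) -> bool:
    """Return True when the provider-error payload marks itself retryable.

    Field names ("details", "isRetryable") are taken from the error
    payload Cursor surfaced above.
    """
    err = json.loads(raw)
    return bool(err.get("details", {}).get("isRetryable"))


def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0) -> list[float]:
    """Exponential backoff schedule in seconds: base, 2*base, 4*base, ... capped."""
    return [min(base * 2 ** i, cap) for i in range(attempts)]


payload = '{"error":57,"details":{"title":"Provider Error","isRetryable":true},"isExpected":true}'
print(should_retry(payload))   # True
print(backoff_delays(4))       # [1.0, 2.0, 4.0, 8.0]
```

This doesn't explain the `[resource_exhausted]` root cause, of course; it only shows what the `isRetryable` flag is inviting the client to do.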
We don’t have a public timeline for OpenRouter support. The team is aware of the requests, but for now we’re focused on official providers like OpenAI, Anthropic, Google AI, Azure OpenAI, and AWS Bedrock.
With respect, this is not a convincing stance for a product positioned as a "next-generation AI IDE."
OpenRouter is not a fringe or experimental service. It is a production-grade routing layer that provides immediate access to thousands of models, including the most up-to-date versions from OpenAI, Anthropic, Google, Meta, and others. Many developers already rely on it precisely because it abstracts provider volatility and accelerates adoption of new models.
Saying “just use official providers” fundamentally limits Cursor’s flexibility and future-proofing. The AI ecosystem is changing extremely fast. Today’s official provider list may be outdated in months, not years.
I am paying $20/month for Cursor because I expect choice, control, and speed of adaptation. Allowing a stable, officially supported custom Base URL or OpenRouter-style integration is not a risky experiment; it is a strategic necessity.
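To make concrete how small the ask is: OpenRouter exposes an OpenAI-compatible chat-completions endpoint, so a custom Base URL is essentially the whole integration. A minimal sketch of the request shape (the model slug is a placeholder, and no network call is made here):

```python
BASE_URL = "https://openrouter.ai/api/v1"  # OpenRouter's OpenAI-compatible endpoint


def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completions request body.

    The same shape works against OpenAI itself; only BASE_URL and the
    API key differ, which is why a configurable Base URL is enough.
    """
    return {
        "url": f"{BASE_URL}/chat/completions",
        "body": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }


req = build_chat_request("anthropic/claude-sonnet-4.5", "Hello")
print(req["url"])  # https://openrouter.ai/api/v1/chat/completions
```

Any client that can already talk to OpenAI can target this URL unchanged; that is the entire surface area being requested.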
If Cursor does not move in this direction, another IDE will. And the one that enables broad model access first will gain mindshare very quickly.
I’m not asking for special treatment, only for a clear commitment: official support for external API routers or a public roadmap toward it.
Please consider this feedback seriously. Many paying users depend on this flexibility.