“Override OpenAI Base URL” breaks requests when pointing to OpenRouter

Where does the bug appear (feature/product)?

Model Provider Integration → OpenAI API Override

Describe the Bug

When using Cursor’s “Override OpenAI Base URL (when using key)” option and pointing it at OpenRouter’s OpenAI-compatible endpoint, Cursor sends malformed requests to the provider.

The request reaches OpenRouter, but Cursor omits a required prompt or messages field, resulting in:

Provider was unable to process your request
Input required: specify “prompt” or “messages”

This occurs even though:

The OpenAI key works normally when tested directly at OpenRouter’s Playground

The same models work outside of Cursor

Disabling the Base URL override fixes the issue immediately

This indicates Cursor’s wrapper is not constructing a valid OpenAI-compatible request when the Base URL override is used.
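For reference, this is what a well-formed OpenAI-compatible chat-completion request to OpenRouter looks like — a minimal Python sketch using only the standard library (the model slug and key placeholder are illustrative, not what Cursor actually sends):

```python
import json
import urllib.request

BASE_URL = "https://openrouter.ai/api/v1"  # OpenRouter's OpenAI-compatible endpoint

payload = {
    "model": "openai/gpt-5.1",  # illustrative OpenRouter model slug
    "messages": [{"role": "user", "content": "hello"}],  # the field Cursor omits
}
req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_OPENROUTER_KEY",  # placeholder key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually send it; any client that builds
# this shape — model plus messages — gets a completion back, not a 400.
```

Nothing exotic is required of Cursor here: a POST to `/chat/completions` with `model` and `messages` is the entire contract.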

Steps to Reproduce

  1. Go to Settings → Models → OpenAI
  2. Add an OpenAI key
  3. Turn ON “Override OpenAI Base URL (when using key)”
  4. Set the Base URL to OpenRouter’s endpoint: https://openrouter.ai/api/v1
  5. Select any OpenRouter model in the chat sidebar
  6. Send any message (e.g. “hello”)

Expected Behavior

Cursor should send a standard OpenAI-compatible chat-completion request containing:

{
  "model": "",
  "messages": [ { "role": "user", "content": "hello" } ]
}

Actual Behavior

Cursor sends an incomplete payload without prompt or messages, causing OpenRouter to return a 400 error:

Input required: specify “prompt” or “messages”

Additional Notes

This is a known issue: routing OpenAI requests through a custom base URL works in other tools but fails in Cursor.

Disabling Override Base URL fixes the problem instantly, but it also gives up a major selling point and a table-stakes feature for a “multi-agent IDE.”

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.0.75 (Universal)
VSCode Version: 1.99.3
Commit: 9e7a27b76730ca7fe4aecaeafc58bac1e2c82120
Date: 2025-11-12T17:34:21.472Z (1 day ago)
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 24.6.0

For AI issues: which model did you use?

GPT-5.1 Fast

Does this stop you from using Cursor?

Yes - Cursor is unusable

:police_car_light: PSA for Anyone Trying to Use “Override OpenAI Base URL” with OpenRouter in Cursor

Posting this so the next poor soul doesn’t waste hours debugging a bug that isn’t theirs.

As an OpenAI GPT looking at this from the outside, here’s the blunt truth:

Cursor’s “Override OpenAI Base URL” + OpenRouter is a known breaking combo.
If Cursor wants to support “OpenAI-compatible endpoints,” they should actually follow the OpenAI request format.


1. What’s actually going wrong

Two unrelated things get tangled and create maximum confusion:

A. OpenRouter key-check noise

OpenRouter validates keys by pinging the model name “openai/text-embedding-3-small”.
This produces “model not found” or “opt-in required” warnings that don’t matter if you are only using chat models.
Your key is fine. The OpenRouter Playground proves it.

B. Cursor’s Base-URL override bug

Cursor errors such as:

Provider was unable to process your request
Input required: specify “prompt” or “messages”

…appear only when Cursor’s Override OpenAI Base URL toggle is pointed at a custom URL like OpenRouter.

Cursor then sends malformed requests missing both “prompt” and “messages”.
The provider rejects them, and Cursor blames the provider.

But the provider is not at fault.
The underlying request generated by Cursor is incomplete.


2. Quick Fix (save yourself hours)

If you just want Cursor + GPT-4.x/GPT-5.x to work reliably:

  1. Open Cursor Settings → Models
  2. Turn OFF: Override OpenAI Base URL (when using key)
  3. Leave the Base URL empty or set it back to the default OpenAI URL
  4. Paste your OpenAI key normally and click Verify
  5. Select any normal OpenAI model in the chat sidebar
  6. Type “hello”

Everything works because you’re no longer fighting the bug.


3. “But I actually want OpenRouter inside Cursor.”

Reasonable.
Unfortunately:

Cursor’s team has stated that custom OpenAI-compatible endpoints are not fully supported yet.
The OpenRouter combo is specifically known to break using the Base-URL override.

So at the moment your working options are:

• Use OpenAI directly in Cursor
• Use OpenRouter in your other tools (CLI, VSCode, n8n, agents, etc.)
• Revisit OpenRouter in Cursor once they fix their override implementation


4. TL;DR for other users landing here

• Your OpenAI key is good
• OpenRouter is good
• The “text-embedding-3-small” warnings are irrelevant
• The real issue is Cursor breaking the request format when the override is enabled
• Turning OFF “Override OpenAI Base URL” makes everything instantly work

Grif here. Wow, GPT got a bit snarky there but it’s probably because I’ve been venting at it about this for a while.

Thank you Cursor team for releasing 2.0 which appears to get a lot of things right.

Of course, as the Tao Te Ching (Chapter 64, in the Gia‑Fu Feng and Jane English translation) says about all great software products :slight_smile: :slight_smile:

“People usually fail when they are on the verge of success.
So give as much care to the end as to the beginning;
Then there will be no failure.”

We need support for ALL the foundation models. Please escalate.

Hey, thanks for the report. You’re absolutely right about this issue.

The “Override OpenAI Base URL” + OpenRouter combo is a known bug, and the team is already working on it.

Current workaround: as you found, disabling “Override OpenAI Base URL” and using OpenAI directly is the only reliable option right now.

Alternative: some users have had success using a LiteLLM proxy as an intermediary, but it requires running your own server.
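If anyone goes the LiteLLM route, the proxy config is small. A sketch under stated assumptions — the model names and environment variable are illustrative, so check LiteLLM’s own docs for your models:

```yaml
# litellm_config.yaml — start the proxy with: litellm --config litellm_config.yaml
model_list:
  - model_name: grok-4-fast                    # the name Cursor will see
    litellm_params:
      model: openrouter/x-ai/grok-4-fast       # illustrative OpenRouter slug
      api_key: os.environ/OPENROUTER_API_KEY   # LiteLLM reads the key from the env
```

Then point Cursor’s Base URL override at the proxy (LiteLLM listens on port 4000 by default), and LiteLLM translates the traffic into well-formed OpenAI-format requests.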

The team knows this is blocking users and is working on proper support for custom OpenAI‑compatible endpoints. I’ll also add your report to the existing ticket under review.

Hey team. THANKS(!) for the quick reply… I appreciate the fast response and confirmation that the malformed-request bug is being fixed. Thank you.

Just to be crystal clear (because this matters to a lot of us):

Fixing the override bug only restores the old fragile state. It still leaves Cursor locked to a single-provider architecture.

In late 2025, serious developers expect to use GPT-5.1, Grok-4 Fast, and Claude 3.7/Opus simultaneously in the same workspace. No restarts, no config swaps, no LiteLLM proxies, no hacks.

Every other major AI coding tool already ships this today:

  • Continue.dev: yes
  • Aider: yes
  • Windsurf / Roo Code: yes
  • Plain VS Code + Continue: yes

Cursor is now the only outlier still forcing us to choose one provider and break the others when we try to use more than one.

Grok-4 Fast is currently one of the strongest coding models on the planet (especially for large refactors and TypeScript/React). GPT-5.1 remains best-in-class for structured planning. Claude is still unmatched for documentation and careful review.

We don’t want to pick sides. We want the best tool for each task, in the same IDE, at the same time.

Please treat native, simultaneous multi-provider support as a Tier-0 priority. A dedicated xAI provider slot (or simply multiple independent provider configs) would immediately put Cursor back in the lead instead of watching power users drift to Continue + VS Code.

We’re giving this feedback because we love Cursor and want it to stay the best. Thank you again for listening. Looking forward to any roadmap signal on this.

Thanks again,
GdL

PS: Congrats to all the Cursor folks on yesterday’s (November 13, 2025) close of a $2.3 billion funding round (Nvidia, Google, Coatue) at a $29.3 billion valuation, tripling the valuation in just five months. I look forward to what comes next now that resources should no longer be an issue. :wink:


For now I have reverted to VS Code with Continue. It’s not perfect, but it does what it claims to do: it allows use of BOTH OpenAI and xAI. Until Cursor adds the same capability, this product is essentially misrepresented, as it does NOT support more than one foundation-model provider at a time.

I very much hope this is resolved, and I look forward to becoming a customer when that happens.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.