Requests are sent to incorrect endpoint when using base URL override

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Using a base URL override with an OpenAI API key, requests are formulated as Responses API payloads but are in fact sent to /v1/chat/completions, resulting in:

Request failed with status code 400: {"error":{"message":"openai error: Missing required parameter: 'messages'.","type":"invalid_request_error","param":"messages","code":"missing_required_parameter"},"provider":"openai"}

Steps to Reproduce

  • Open Cursor Settings
  • Set “OpenAI API Key”
  • Set “Override OpenAI Base URL” to a proxy
  • Observe the request being received on the /v1/chat/completions path with a payload formulated for /v1/responses (characterized by an input field rather than a messages field)
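To illustrate the mismatch described above: the two endpoints expect differently shaped bodies, and a Responses-style body fails chat/completions validation exactly on the messages field. A minimal sketch (field names are from the public OpenAI API reference; model name and message content are placeholders):

```python
# Body shape /v1/chat/completions expects:
chat_payload = {
    "model": "gpt-5",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Responses-style body that Cursor actually sends to that endpoint:
responses_payload = {
    "model": "gpt-5",
    "input": [{"role": "user", "content": "Hello"}],
}

def missing_chat_fields(payload: dict) -> list:
    """Return the required chat/completions fields absent from the payload."""
    return [f for f in ("model", "messages") if f not in payload]

print(missing_chat_fields(responses_payload))  # → ['messages']
```

That missing 'messages' field is precisely what produces the 400 missing_required_parameter error quoted above.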

Expected Behavior

Requests are formulated using the correct payload for the target endpoint.

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.1.42
VSCode Version: 1.105.1
Commit: 2e353c5f5b30150ff7b874dee5a87660693d9de0
Date: 2025-12-01T02:18:26.377Z (11 hrs ago)
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 24.5.0

Does this stop you from using Cursor

Yes - Cursor is unusable


Hey, thanks for the report. This is a known limitation: BYOK currently only supports /v1/chat/completions, while GPT-5/5.1 models use the /v1/responses format, so the proxy sees an input payload on the chat/completions endpoint and returns the “missing messages” error.

Workarounds:
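If you control the proxy, one option is to translate the body before forwarding it. A minimal sketch of such a shim (hypothetical helper, not part of Cursor; it only handles input as a plain string or as a simple list of role/content messages, while real Responses bodies can carry tool calls and other item types this does not cover):

```python
def responses_to_chat(body: dict) -> dict:
    """Rewrite a Responses-style request body into chat/completions shape.

    Simple cases only: 'input' as a string becomes a single user message;
    'input' as a list is passed through as 'messages'.
    """
    out = {k: v for k, v in body.items() if k != "input"}
    inp = body.get("input")
    if isinstance(inp, str):
        out["messages"] = [{"role": "user", "content": inp}]
    else:
        out["messages"] = list(inp or [])
    return out
```

Running the failing body from the report through a shim like this would give /v1/chat/completions the messages field it requires.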

Could you confirm:

  • The model you selected (e.g., GPT-5, GPT-5.1 Codex)
  • The exact Base URL you’re proxying
  • If switching off the custom key/override fixes it

We’re tracking this limitation; appreciate your patience.

GPT-5x models can work with /v1/chat/completions (I’ve got this working with Codex CLI).

This issue affects GPT-4x as well.

Thanks for the screenshot - that’s the exact BYOK case: with Base URL override, the request is formed as /v1/responses, but it goes to /v1/chat/completions, which causes the “missing_required_parameter: ‘messages’” error.

Please send:

  • The exact Base URL proxy
  • Request ID from the error (with privacy mode disabled)

We’re tracking this and working on a fix.

So you’re saying this is intentional behaviour? Why?

Surely the base URL override should ask the user which endpoint they’d like to hit, and then build the request according to that endpoint?

This behaviour changed recently and it’s honestly pretty wild that you folks didn’t mention this in your changelog (as far as I could see). This breaks a tonne of integrations.

The base URL I used is an internal URL so I can’t provide it.

Request ID: 04433689-472c-427c-8009-eb378616edba
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We encountered an issue when using your API key: Provider was unable to process your request\n\nAPI Error:\n\n```\nRequest failed with status code 400: {\"error\":{\"code\":\"missing_required_parameter\",\"message\":\"Missing required parameter: 'messages'.\",\"param\":\"\",\"type\":\"invalid_request_error\"}}data: [DONE]\n\n\n```","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":true}

@deanrie hey, can you please provide an update?

Is this bug 1) acknowledged and 2) being worked on?

Hey, thanks for the ping. Confirmed: this is a bug. It’s a known BYOK limitation with Override OpenAI Base URL, where the payload for /v1/responses is sent to /v1/chat/completions and fails with missing_required_parameter: 'messages'.

The team is working on a fix. Thanks for your patience.

Thanks for the update!