Cursor cannot connect to a custom URL compatible with the OpenAI protocol

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I developed an interface that is compatible with the OpenAI protocol, but after configuring it in Cursor as a custom base URL, I cannot connect.

Steps to Reproduce

I use curl to request the custom API endpoint and it works fine.
However, Cursor reports an error: “We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment.”
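For reference, the curl check can be mirrored with a short script. This is a minimal sketch, assuming a standard Chat Completions endpoint; the base URL, API key, and model name below are placeholders for the custom interface.

```python
import json
import urllib.request

# Placeholders: substitute your custom endpoint, key, and model name.
BASE_URL = "http://localhost:8000/v1"
API_KEY = "sk-placeholder"
MODEL = "my-model"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build the same Chat Completions request the curl test sends."""
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_chat_request("Hello")
print(req.full_url)
# Sending it would be: urllib.request.urlopen(req)
# -- this succeeds from the terminal but the same endpoint fails from Cursor.
```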

Screenshots / Screen Recordings

Operating System

macOS

Version Information

Version: 2.4.22 (Universal)
VSCode Version: 1.105.1
Commit: 618c607a249dd7fd2ffc662c6531143833bebd40
Date: 2026-01-26T22:51:47.692Z (1 mo ago)
Build Type: Stable
Release Track: Default
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Darwin arm64 24.2.0

Does this stop you from using Cursor?

Yes - Cursor is unusable

Hey, this is a known issue with the Override OpenAI Base URL feature. A few things are going on here:

  1. Responses API vs Chat Completions format
    When you use Agent mode with a custom base URL, Cursor currently sends requests in OpenAI’s Responses API format (input, flat tool format, etc.) instead of the standard Chat Completions format (messages, nested tool format). If your endpoint only supports /v1/chat/completions, it’ll fail. Related thread with more details: Cursor Agent sends Responses API format to /chat/completions endpoint.

Workaround: Try switching from Agent mode to Ask mode. It’s more likely to use the standard Chat Completions format.

  2. HTTP/2 compatibility
    Go to Cursor Settings > Network > HTTP Compatibility Mode and switch to HTTP/1.1. TLS and connection errors often happen when HTTP/2 doesn’t work well with custom endpoints.

  3. Version update
    You’re on 2.4.22 (from January). The latest stable is 2.5.x. It’s worth updating since there have been BYOK-related fixes since then.

A couple of questions:

  • Does your endpoint support the OpenAI Responses API format, or only the Chat Completions format?
  • Can you try Ask mode instead of Agent and let me know if that works?

The team is aware of the Responses API format issue with custom URLs. There’s no timeline yet, but reports like yours help with prioritization.


I have upgraded to 2.6.13 and set the network mode to HTTP/1.1, and I have verified that my endpoint supports the OpenAI Responses API format, but I still cannot connect to my URL in either Ask or Agent mode.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.