Unable to use custom OpenAI compatible API

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I have a custom OpenAI-compatible endpoint that doesn’t work with Cursor. I can confirm that when I access it via other tooling, such as Insomnia, I get a proper response.

Steps to Reproduce

It might be an issue on my end, but I would prefer at least a proper error message instead of a silent failure.

I can confirm that on Cursor’s official dashboard I receive a generic error message stating “Errored, no charge.”

Expected Behavior

The agent should stream responses as expected.

Screenshots / Screen Recordings

Operating System

macOS

Version Information

Version: 2.6.12
VSCode Version: 1.105.1
Commit: 1917e900a0c4b0111dc7975777cfff60853059d0
Date: 2026-03-04T21:41:18.914Z
Build Type: Stable
Release Track: Default
Electron: 39.6.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Darwin arm64 24.6.0

Does this stop you from using Cursor?

Yes - Cursor is unusable

Here’s a video on Insomnia:

Confirming that it’s failing with no error message.

If there’s anything I can provide or look for, such as debug logs, it would help me a lot.

Hey, this is a known issue with Override OpenAI Base URL.

What’s happening: in Agent mode, Cursor sends requests in the OpenAI Responses API format (input, flat tools format, etc.) instead of the standard Chat Completions format (messages, nested tools format). If your endpoint only supports /v1/chat/completions, the request will fail. That’s why you see “Errored, no charge” with no details.
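To make the difference concrete, here is a sketch of the two request shapes. Field names follow OpenAI’s public API docs; the exact payloads Cursor sends are an assumption on my part, and the model/tool names are placeholders:

```python
# Standard Chat Completions format: conversation lives under "messages",
# tool schemas are nested under a "function" key.
chat_completions = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "List the files."}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "list_files",
            "parameters": {"type": "object", "properties": {}},
        },
    }],
}

# Responses API format: conversation lives under "input", and each tool's
# fields are flattened to the top level of the tool entry.
responses_api = {
    "model": "my-model",
    "input": [{"role": "user", "content": "List the files."}],
    "tools": [{
        "type": "function",
        "name": "list_files",
        "parameters": {"type": "object", "properties": {}},
    }],
}

# A server that only parses "messages" will reject the second shape.
print("messages" in responses_api, "input" in chat_completions)  # → False False
```

An endpoint that only implements /v1/chat/completions typically returns a 4xx for the second shape, and Cursor surfaces that as the generic “Errored, no charge.”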

Things to try:

  1. Switch from Agent mode to Ask mode. Ask mode uses the standard Chat Completions format, so it’s more likely to work.
  2. Go to Cursor Settings > Network > HTTP Compatibility Mode and switch to HTTP/1.1. Custom endpoints often don’t work well with HTTP/2.

More details here: Cursor Agent sends Responses API format to /chat/completions endpoint.

A couple questions: does your endpoint only support Chat Completions, or does it also support the Responses API? And which model are you trying to use?

The team is aware of this issue. Let me know if Ask mode fixes it.

It does support the /v1/responses endpoint, and I did try Ask mode, but still nothing happened.

For some reason, the forum didn’t allow me to post two attachments, so please see above.

Also, I’m just asking for ways to debug this. Are there any logs I can search for? Again, this might be an issue on my end, and I don’t want to waste your time on it.

@deanrie Sorry for the mention, is there any place I can look for to debug this?

Hey, a few things you can try to debug this:

  1. Developer Tools Network tab: Open Help > Toggle Developer Tools, go to the Network tab, then try sending a message. Look for the request to your endpoint that fails. The status code and response body should show what’s wrong.

  2. Check the exact URL Cursor is hitting: In the same Network logs, confirm Cursor is sending the request to the right path (/v1/responses or /v1/chat/completions). Sometimes the base URL gets combined in an unexpected way.

  3. HTTP/1.1 mode: Try switching to HTTP/1.1. Go to Cursor Settings > Network > HTTP Compatibility Mode. This often helps with local endpoints.

  4. Localhost specifics: Your endpoint is on localhost:20128. Make sure Cursor can reach it. If Cursor is sandboxed or there are network restrictions, localhost might not resolve the way you expect.
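A quick way to take Cursor out of the loop entirely: probe the endpoint with a plain HTTP/1.1 POST from Python’s standard library (`http.client` only speaks HTTP/1.1, so this doubles as a sanity check for the HTTP/2 theory). The stub server below is just a stand-in so the snippet runs on its own; in practice, point the connection at localhost:20128 and your real path instead:

```python
import http.client
import http.server
import json
import threading

# Hypothetical stand-in for a local OpenAI-compatible endpoint; replace
# with your real server (e.g. localhost:20128) when testing for real.
class Stub(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        req = json.loads(body)
        # Accept only a Chat Completions-shaped request on the right path.
        ok = self.path == "/v1/chat/completions" and "messages" in req
        payload = json.dumps({"ok": ok}).encode()
        self.send_response(200 if ok else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Stub)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# http.client always uses HTTP/1.1, so a success here also tells you the
# endpoint works without HTTP/2.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request(
    "POST",
    "/v1/chat/completions",
    body=json.dumps({"model": "test",
                     "messages": [{"role": "user", "content": "ping"}]}),
    headers={"Content-Type": "application/json"},
)
resp = conn.getresponse()
status, body = resp.status, json.loads(resp.read())
print(status, body)  # → 200 {'ok': True}
server.shutdown()
```

If this kind of request works against your server but Cursor still fails, the Network tab should show what Cursor is sending differently.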

One question: what exactly are you entering in the “Override OpenAI Base URL” field, the full URL including /v1, or just http://localhost:20128?
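On the base URL question: path joining is a common tripwire. Whether Cursor resolves paths this way is an assumption, but `urllib.parse.urljoin` shows how easily the `/v1` segment can get dropped depending on trailing and leading slashes:

```python
from urllib.parse import urljoin

base_no_slash = "http://localhost:20128/v1"
base_slash = "http://localhost:20128/v1/"

# Without a trailing slash, "v1" is treated as a file and replaced.
a = urljoin(base_no_slash, "chat/completions")
print(a)  # → http://localhost:20128/chat/completions

# With a trailing slash, the path resolves under /v1/ as intended.
b = urljoin(base_slash, "chat/completions")
print(b)  # → http://localhost:20128/v1/chat/completions

# A leading slash on the relative path resets to the host root either way.
c = urljoin(base_slash, "/chat/completions")
print(c)  # → http://localhost:20128/chat/completions
```

That is why the Network tab is the ground truth: it shows the final URL after whatever joining Cursor does with your override value.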

The Network tab logs will be the most helpful. If you can share the error and status code you see there, we can narrow this down fast.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.