Connecting local AI server to Cursor does not work

Hey, there are actually two issues here.

  1. Localhost isn’t supported directly. All BYOK requests are routed through Cursor’s servers for prompt building, so `localhost` or local-network addresses fail because those servers can’t reach your machine. You’ll need to expose your Ollama instance as a public HTTPS endpoint using something like ngrok or Cloudflare Tunnel, then use that public URL in Cursor Settings > Models > Override OpenAI Base URL.

Related thread with the same setup question: How can I use a local LLM on my desktop/AI computer?
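If you want to try the tunnel route, here’s a minimal sketch of either option, assuming Ollama is listening on its default port 11434 (adjust if you’ve changed it):

```shell
# Option A: ngrok — exposes the local Ollama API over a public HTTPS URL
ngrok http 11434

# Option B: Cloudflare quick tunnel — same idea, no account config needed
cloudflared tunnel --url http://localhost:11434
```

Either tool prints a public `https://` URL; that’s what goes into the Override OpenAI Base URL field (you may need to append `/v1`, since Ollama serves its OpenAI-compatible API under that path).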

  2. Model name validation bug. In your logs, `gpt-oss:20b` gets rewritten to `gptoss20bplain` because special characters like `-` and `:` are stripped out. As a result, the request is routed to Cursor’s model registry instead of your local server.

Right now, even after setting up a tunnel, you may still hit this model name validation issue. As a workaround, try using a model name without special characters (if Ollama supports aliases).
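One way to get such an alias, assuming Ollama’s `cp` subcommand (which copies an existing model under a new name):

```shell
# Copy the model to a name with no '-' or ':' so Cursor's
# name validation doesn't mangle it
ollama cp gpt-oss:20b gptoss20b
```

Then use `gptoss20b` as the model name in Cursor.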

Let me know if you’ve got any other questions.