I have read a lot of posts saying Ollama could once be used for local (cheaper and faster) LLMs in Cursor, but it appears that functionality has been disabled.
Does anyone have any idea how we can fix this?
Hey, thanks for the question.
Right now, Cursor doesn’t support direct connections to local models like Ollama running on localhost. The “Override OpenAI Base URL” option needs a publicly accessible HTTPS endpoint because all requests go through Cursor’s servers to build prompts.
There is a workaround: use a tunneling tool such as ngrok or Cloudflare Tunnel to expose your local Ollama instance as a public HTTPS endpoint, then enter that URL in Cursor under Settings > Models > Override OpenAI Base URL.
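As a rough sketch, assuming Ollama is listening on its default port 11434 (and that you have ngrok or cloudflared installed and authenticated), the tunnel setup might look like this:

```shell
# Start Ollama locally; it serves an OpenAI-compatible API
# on http://localhost:11434 by default
ollama serve

# In another terminal, expose that port over a public HTTPS tunnel.
# Option A: ngrok (prints a forwarding URL like https://<id>.ngrok-free.app)
ngrok http 11434

# Option B: Cloudflare Tunnel (quick tunnel, prints a trycloudflare.com URL)
cloudflared tunnel --url http://localhost:11434
```

Then paste the generated HTTPS URL, with `/v1` appended for Ollama's OpenAI-compatible path, into Settings > Models > Override OpenAI Base URL. The exact forwarding URL is assigned by the tunneling service, so treat the examples above as placeholders.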
Related discussion: Setup Ollama (local model) in Cursor
The team is aware of requests for native support for local models, without the need for tunneling.