"OpenAI API Key" verified, but shows "Unable to reach 127.0.0.1"

I am setting up a local model with an OpenAI-compatible API. I was able to get the “OpenAI API Key” verified successfully. However, when I select the local model in “Chat”, it shows the following error message:

Unable to reach 127.0.0.1
We encountered an issue when using your 127.0.0.1 API key. Please check your API key settings and try again.
request id: 37df8db8-e534-4975-b089-5b1482b8c1c7

Notes:

  • In fact, my local model server never received any request when I tried it in “Chat” mode. However, during the “Verify” process, the local server was called and passed verification.
  • I also tried using OpenAI’s API key without overriding the Base URL, and it worked! So I have no clue what happened when using my local model.
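For anyone debugging a similar setup, a quick sanity check is to hit the local server directly with the same kind of OpenAI-compatible chat request Cursor would issue, to confirm the endpoint itself is healthy. A minimal sketch, assuming a server listening on `127.0.0.1:11434` (the default ollama port) and a model named `llama3` — both are assumptions, so adjust for your setup:

```python
import json
import urllib.error
import urllib.request

# Assumed local endpoint; change the port/path to match your server.
BASE_URL = "http://127.0.0.1:11434/v1"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a standard OpenAI-compatible /chat/completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            # Most local servers accept any bearer token.
            "Authorization": "Bearer local-key",
        },
        method="POST",
    )

req = build_chat_request("llama3", "Say hello")
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        reply = json.loads(resp.read())
        print(reply["choices"][0]["message"]["content"])
except urllib.error.URLError as exc:
    # No server running (or wrong port) lands here.
    print(f"Local server not reachable: {exc}")
```

If this script gets a reply but Cursor’s “Chat” still fails, the problem is not your local server — which matches the observation above that no request ever arrived during “Chat”.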

See the snapshot below:


Hey, local models don’t work in Cursor because all requests go through our servers.

You have verified the OpenAI API, so please change the model to gpt-4o.

Is this by design?

I have a Pro membership, but I need to test some local LLMs in certain situations due to data restrictions. I believe Cursor supported using local models served by ollama in the past. There are also plenty of online articles about how Cursor supports local models, such as How to Use Cursor with Local LLM.

I’d really appreciate it if Cursor could allow using a local model served at http://127.0.0.1/.

Hey, as @deanrie said, this isn’t something supported by Cursor right now, as it isn’t your client that makes the final request to the LLM, but our backend servers.

Therefore, our backend cannot execute any request where the LLM is locally hosted. While you may be able to publicly expose a locally hosted LLM (e.g., with ngrok), this isn’t something we support or recommend, due to the security issues that could come with it!
