"Problem reaching OpenAI" error on a local model with overriden base URL

I’m using a local model (on localhost) and have overridden the base URL in the settings. The Verify API key button works (I can see that request hit my model on localhost), so everything seems fine, and the OpenAI API Key button is activated and green.

But when I try to use the chat, I get the error message “Problem reaching OpenAI”.

Also, inspecting the traffic with mitmproxy reveals that the request is sent to api2.cursor.sh, not to localhost.
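For anyone else debugging this, it helps to first confirm that the local server speaks the OpenAI chat completions API at all, independently of Cursor. A minimal sketch: the base URL, port, API key, and model name below are placeholders for my setup, not anything Cursor-specific, so adjust them to yours:

```python
# Send an OpenAI-style chat completion request straight to the local
# server, bypassing Cursor entirely. If this fails, the problem is
# the local server, not Cursor's routing.
import requests

BASE_URL = "http://localhost:8080/v1"  # hypothetical local endpoint

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": "Bearer sk-local"},  # placeholder key
    json={
        "model": "local-model",  # hypothetical model name
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

In my case this works, which is how I knew the chat failure had to be on Cursor's side of the connection.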

The overridden base URL must be publicly accessible, as the request first goes to our backend servers for preprocessing.
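One way to satisfy that requirement is to tunnel the local server to a public URL. A minimal sketch using pyngrok, assuming the local model serves an OpenAI-compatible API on port 8080 (the port and the choice of ngrok are my assumptions, not something Cursor specifically requires):

```python
# Expose the local OpenAI-compatible server on a public URL so that
# Cursor's backend servers can reach it.
from pyngrok import ngrok

# Open an HTTP tunnel to the local server. The returned public URL
# (e.g. https://<random>.ngrok.io) is what goes into Cursor's base
# URL override, with /v1 appended if your server expects it.
tunnel = ngrok.connect(8080, "http")
print("Set Cursor's base URL override to:", tunnel.public_url + "/v1")
```

Note that this still requires internet access and exposes your local endpoint publicly, so put an API key or other auth in front of it if that matters to you.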

Ok. Thanks.

I think that’s worth mentioning in the UI. I spent almost an hour trying to debug this.

Does this mean there is no way to run against a local model without internet access? I was not aware that every request is routed through a Cursor server. That seems like something that should be mentioned in the UI when configuring which models to use…

I have OpenAI disabled, yet I am still getting the “Problem reaching OpenAI” error.

Looks like it. I was also trying to figure out how to route Cursor requests to a local LLM, but there’s no option as of now. I hope they will support this in the future.
