I’m using a local model (on localhost) and I have overridden the base URL in the settings. The “Verify API key” button works (I can see that request being sent to my model on localhost), so everything seems fine, and the OpenAI API Key button is activated and green.
But when I try to use the chat, I get an error message “Problem reaching OpenAI”.
Also, inspecting the traffic with Mitmproxy reveals that the chat request is sent to api2.cursor.sh, not to localhost.
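For anyone else debugging this, here’s a minimal sketch of how to confirm the local endpoint itself responds when called directly, outside of Cursor. The port, path, and model name below are just placeholders for an OpenAI-compatible server — adjust them for your setup:

```python
import requests

# Placeholder values: change the port and model name to match your local server.
BASE_URL = "http://localhost:1234/v1"   # OpenAI-compatible base URL
MODEL = "local-model"                   # whatever model name your server exposes

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": "Bearer not-needed"},  # most local servers ignore the key
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say hello"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this works but the chat in Cursor still fails, it confirms the chat traffic isn’t actually reaching the local endpoint.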
Does this mean there is no way to run against a local model without internet access? I was not aware that every request is routed through a Cursor server. Seems like something that should be mentioned in the UI when configuring which models to use…
I have OpenAI disabled, yet I am still getting the “Problem reaching OpenAI” error.
Looks like it. I was also trying to figure out how to route Cursor requests to a local LLM, but there’s no option for that as of now. I hope they will support this in the future.