Unable to use LM Studio with override

The override should work, since LM Studio is designed to be integrated with the OpenAI Python library via a base URL override. But I believe it does not support the empty query that is sent to check the API key.


This is what the endpoint errors out on when Cursor tries to set the API key. Cursor then says the API key is invalid and cannot continue.

One solution I can think of is to override this check and proceed anyway, but ultimately it is up to the devs how they want to proceed. Alternatively, LM Studio could try to fix this endpoint issue on its side.
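
For reference, here is a rough way to probe the local server yourself, assuming LM Studio's default port (1234) and that the key check amounts to a simple authorized request such as listing models; the exact request Cursor sends may differ.

```python
# Hypothetical probe of LM Studio's OpenAI-compatible server (default port 1234).
# If a key-validation request like this fails, you would see the "API key invalid"
# behaviour described above.
import requests

resp = requests.get(
    "http://localhost:1234/v1/models",
    headers={"Authorization": "Bearer not-needed"},  # LM Studio ignores the key
    timeout=10,
)
print(resp.status_code)
print(resp.json())
```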

The same thing happens to me. LM Studio is not working. Is there anything wrong here? Recording 2024-02-25 140411.mp4 - Google Drive

Local setups do not work, whether it's LM Studio or Ollama, because Cursor requires an API key that they do not provide. However, I managed to run popular LLMs through openrouter.ai; it's not free, but quite cheap.
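
For anyone trying that route, this is roughly how it looks with the OpenAI Python client; the base URL is OpenRouter's OpenAI-compatible endpoint, and the model id here is only an example from their catalog.

```python
# Sketch of the openrouter.ai setup mentioned above, via the OpenAI Python client.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # OpenRouter's OpenAI-compatible endpoint
    api_key="YOUR_OPENROUTER_API_KEY",
)

response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",  # example model id; pick one from OpenRouter's catalog
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```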


The issue is not whether the service provides an API key. The LM Studio API server mimics the OpenAI API exactly, so you can use the OpenAI Python library with a custom base URL and it just works. You set the API key to whatever you want (I use the string “not-needed”). The Cursor developers just need to make sure their service behaves the way the OpenAI Python library does.
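
A minimal sketch of that setup, assuming LM Studio's local server is running on its default port (1234); the model name is a placeholder, since LM Studio serves whatever model is currently loaded.

```python
# Point the standard OpenAI Python client at LM Studio's local server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's OpenAI-compatible server
    api_key="not-needed",                 # LM Studio ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio uses the loaded model regardless
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```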


Hello! Inference happens through our backend, which cannot access servers running locally on your computer. You’ll need to provide a publicly accessible URL.

So we can’t use a local LLM, right? Then we either need to use OpenAI or another hosted LLM service such as openrouter.ai.

Thank you for the transparency. I understand how to resolve this issue now.


If you understand how to resolve it, then please help others by describing the way. I think you are talking about adding 127.0.0.1 example.com to the hosts file at C:\Windows\System32\drivers\etc\hosts.

No, I think you need to host your own server publicly and then set up an API key so that only the Cursor backend can access it. It takes a bit of technical know-how. I might upload a GitHub repository that does this with Ollama if I have some time.
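
Something along these lines is what I have in mind: a small proxy that checks a shared secret and forwards to Ollama's OpenAI-compatible endpoint. The names, ports, and key handling are all illustrative, not anything Cursor or Ollama prescribes, and you would still need to expose it over HTTPS (reverse proxy, tunnel, etc.).

```python
# Hypothetical minimal proxy: exposes a local Ollama server publicly,
# but only to callers presenting the expected API key.
import os

import requests
from flask import Flask, Response, abort, request

app = Flask(__name__)

OLLAMA_BASE = "http://localhost:11434/v1"  # Ollama's OpenAI-compatible endpoint
PROXY_API_KEY = os.environ.get("PROXY_API_KEY", "change-me")  # the key you give to Cursor


@app.route("/v1/<path:subpath>", methods=["GET", "POST"])
def proxy(subpath):
    # Reject anything that doesn't carry the expected bearer token.
    auth = request.headers.get("Authorization", "")
    if auth != f"Bearer {PROXY_API_KEY}":
        abort(401)

    # Forward the request to the local Ollama server and relay the response.
    upstream = requests.request(
        method=request.method,
        url=f"{OLLAMA_BASE}/{subpath}",
        json=request.get_json(silent=True),
        timeout=600,
    )
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/json"),
    )


if __name__ == "__main__":
    # Listen on all interfaces; put HTTPS in front before sharing the URL.
    app.run(host="0.0.0.0", port=8000)
```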


Inference happens through our backend which cannot access servers running locally on your computer

Will this always be the case? I can understand the reasoning and this may be obvious, but for some employers that means Cursor can’t be used.

Also, setting the URL to http://127.0.0.1 does get an HTTP response back, probably from the web server that is sending the request to the OpenAI URL, so you may want to look into blocking that.


Is vLLM an option?

Or did you use Gaia / ngrok?