Support local LLMs

Hi There,

I'm wondering if there are plans to support local LLMs within Cursor in the future? While today you support GPT-3.5 & GPT-4, it would be great if we could point Cursor at a local LLM on the machine that has been specifically tuned on a particular codebase (or codebases).

9 Likes

Agree this would be great, and also useful when flying. For the time being I use Continue with codellama, which is pretty impressive for offline/local use.

1 Like

Does it yield better results compared to GPT-4? By the way, OpenAI is an investor in Cursor; I kind of hope they don't get "vendor locked" because of that.

2 Likes

No, it's noticeably worse, but good enough for syntax questions, "what does this error message mean", "how do these pieces of the web app stack work", etc. Definitely worth playing around with via Ollama if you have a Mac.

Can you share some URLs for the tools you're mentioning? I'd like to check them out.

1 Like

https://ollama.ai/ - via the CLI tool I've found mistral and codellama most useful, but they have others. I have a 16 GB M2 and they run pretty well.
https://continue.dev/ is a VS Code extension (it works in Cursor as well) that lets you use Ollama + codellama in a similar way to Cursor - I think they are just going to get eaten by Copilot X/Cursor, being just an extension.
I had a period of a few weeks where I was frequently without internet and found these very useful.
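
If it helps anyone, here's a minimal sketch of what talking to a local Ollama model looks like from a script, assuming the default port (11434) and that you've already pulled codellama; the prompt is just an example:

```python
# Minimal sketch: query a locally running Ollama server from Python.
# Assumes the Ollama daemon is running and you've done `ollama pull codellama`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",   # or "mistral", etc.
        "prompt": "What does this error mean: TypeError: 'NoneType' object is not iterable",
        "stream": False,        # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

As far as I know, Continue's Ollama provider points at this same local endpoint, so if this works, the extension should too.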

3 Likes

This was discussed in a thread on our Discord server.

Regarding using other AI models:

Regarding Localhost:
(screenshot)

3 Likes

Fair enough! I assumed it would be difficult because the features are tuned around the capabilities of 3.5/4, so a drop-in replacement with a lesser model would be a poor experience.

Hi, thanks for your amazing work with Cursor!

Have you reconsidered this feature? Mistral models are getting pretty good at code, and using LM Studio, for example, could be a really amazing alternative solution.
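
(For context: LM Studio can run a local server that mimics the OpenAI API, so in principle any OpenAI-compatible client can point at it. A rough sketch, assuming the default port 1234; the model name below is just a placeholder for whatever you've loaded in LM Studio:)

```python
# Rough sketch: talk to LM Studio's local OpenAI-compatible server.
# Assumes the LM Studio server is started on its default port; the key is ignored locally.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

reply = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whichever model you've loaded
    messages=[{"role": "user", "content": "Explain Python list comprehensions briefly."}],
)
print(reply.choices[0].message.content)
```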

Thank you.

3 Likes

Someone recently posted on Discord that their Cursor was using a local LLM. Is this possible now? What about other non-local LLMs such as Gemini?

1 Like

Maybe if there were support for setting together.ai or openrouter.ai as the provider, we could try other models such as CodeLlama or DeepSeek Coder.

There is support for that now. You can change the base URL in the settings!

(We don't guarantee that our prompts will work well there, though. You should experiment and tell us!)
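
If you want to sanity-check a provider before pointing Cursor at it, here's a rough sketch of the same base-URL override using the openai Python client; the Together base URL is real, but the model id is an assumption, so check your provider's model list:

```python
# Rough sketch: reuse the OpenAI client against an OpenAI-compatible provider
# by overriding the base URL (Together shown here; OpenRouter works the same way).
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",   # or "https://openrouter.ai/api/v1"
    api_key="YOUR_PROVIDER_KEY",
)

completion = client.chat.completions.create(
    model="deepseek-ai/deepseek-coder-33b-instruct",  # assumed model id; check the provider's catalog
    messages=[{"role": "user", "content": "Write a function that reverses a linked list."}],
)
print(completion.choices[0].message.content)
```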

2 Likes

How? I can't find a setting that changes the base URL.

Thanks, I know. However, it doesn’t work when I try to use openrouter.ai and together.ai. Could I be doing something wrong?

Here:

What are you setting as the base URL/model? And could you send the failing request IDs?

It's strange, but everything started working today, although yesterday nothing was working. Here's a screenshot:

2 Likes

For the future, it would be nice if it were possible to add base URLs, not just modify the existing one. For instance, I sometimes switch back to using my OpenAI key, and then I have to remove the custom base URL and re-enter the key. If possible, please add this.

5 Likes

I agree! It's a small thing, but it would make switching between different work contexts/modes/tasks a lot more pleasant.

2 Likes

Whoa, crazy - so wait, can I just throw the Anthropic API link in here and test Claude 3 Opus? How do I set the API key?

(If this isn't possible, please consider this a feature request for both using our own key and a custom API endpoint.)
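
(For anyone else wondering: Anthropic's native API uses a different request format from OpenAI's, so pasting its URL straight into the base-URL field probably won't work. A hedged sketch of the usual workaround, going through OpenRouter's OpenAI-compatible endpoint; the model slug is my assumption, so check their catalog:)

```python
# Hedged sketch: reach Claude 3 Opus through OpenRouter's OpenAI-compatible endpoint.
# The model slug below is an assumption; check OpenRouter's model catalog.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",
)

reply = client.chat.completions.create(
    model="anthropic/claude-3-opus",  # assumed OpenRouter slug for Claude 3 Opus
    messages=[{"role": "user", "content": "Summarize this diff in one sentence."}],
)
print(reply.choices[0].message.content)
```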