I’ve seen some discussion that local LLMs ought to be supported. Apparently the queries have to be routed through Cursor’s servers for some special handling, but those servers don’t appear to be doing anything a modern Mac Studio couldn’t handle.
I was going to work on a PR for local LLM support, but where’d all the source code go? The GitHub repo is completely wiped of any code. Isn’t it a fork of VS Code?
I mean, I was just playing around with a Blender plugin, and my local LLM was writing scripts for interacting with 3D models. It’s really not that difficult.
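The whole loop is roughly this (a rough sketch in Python; the port and model name are placeholders for whatever OpenAI-compatible local server you run, e.g. Ollama or llama.cpp):

    # Sketch only: assumes an OpenAI-compatible local server
    # (e.g. Ollama on its default port); endpoint/model are placeholders.
    import requests

    def generate_blender_script(prompt: str) -> str:
        resp = requests.post(
            "http://localhost:11434/v1/chat/completions",  # local endpoint (assumption)
            json={
                "model": "llama3",  # whatever model you happen to run locally
                "messages": [
                    {"role": "system",
                     "content": "Write a Blender Python (bpy) script. Reply with code only."},
                    {"role": "user", "content": prompt},
                ],
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # Inside Blender, run whatever comes back (obviously, trust your model):
    script = generate_blender_script("Add a subdivided cube and a sun lamp.")
    exec(compile(script, "<llm>", "exec"))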
Yeah, it’s not as simple as you make it sound.
Especially since a lot of the handling needs to be adjusted to each model’s capabilities. And how would they support all the different models people can run locally when they already have to adapt to each model behind the supported OpenAI/Anthropic/… APIs?
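To make that concrete, this is the kind of per-model branching you end up maintaining (every name and number below is made up purely for illustration):

    # Hypothetical sketch: backends differ in context window, tool calling,
    # vision support, prompt format, etc., and the client has to care.
    from dataclasses import dataclass

    @dataclass
    class ModelProfile:
        context_tokens: int
        supports_tools: bool
        supports_vision: bool

    # Illustrative values only; real capabilities vary by model and quantization.
    PROFILES = {
        "hosted-frontier-model": ModelProfile(200_000, True, True),
        "local-8b-model":        ModelProfile(8_192, False, False),
    }

    def plan_request(model: str, needed_tokens: int, needs_tools: bool) -> str:
        p = PROFILES[model]
        if needed_tokens > p.context_tokens:
            return "chunk or summarize the codebase context first"
        if needs_tools and not p.supports_tools:
            return "emulate tool calls with plain-text conventions"
        return "send as-is"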
Dude, just let me play around with it. Maybe it really is way harder than I’m making it out to be; what do you care? It’s a fork of VS Code, and it already offers the ability to use a custom API endpoint. There are other offerings that have source code and don’t require a round trip to the Internet, but Cursor seems to be the consensus right now, so I think it’s reasonable to want to poke around at adapting it and doing things like playing with the new DeepSeek. Get it?
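And the custom-endpoint part really is mundane: any OpenAI-compatible client can be pointed at localhost. A minimal sketch, assuming the openai Python package and an Ollama server on its default port (the model tag is just an example):

    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # local server instead of api.openai.com
        api_key="ollama",  # Ollama ignores the key, but the client requires one
    )

    reply = client.chat.completions.create(
        model="deepseek-r1",  # e.g. a locally pulled DeepSeek model
        messages=[{"role": "user", "content": "Explain this diff in one sentence."}],
    )
    print(reply.choices[0].message.content)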