Anyone in the Federal space probably can’t use cloud-based services unless they are FedRAMP certified. If you handle any CUI, ITAR, or EAR data, you will need a private LLM. I was hoping to try Cursor with a locally hosted LLM myself; I spec’d my new Mac mini specifically so I could run a reasonable LLM locally. I’m horrified at the thought of anyone in my company going down this road, ignorant of the fact that their private or proprietary data is flying across public cloud spaces. Scientists are often not the most informed about such matters.
I had an acquaintance visit today who works at Lawrence Livermore National Laboratory. He says they have the same issue with Cursor: they can’t use it as long as it depends on any public cloud.
Implementing a fully self-hosted Cursor environment and LLMs is the final requirement for our company to adopt and roll out Cursor for our developers.
Our servers do not have internet access and are only accessible via work laptops. Relying on Cursor’s website or backend would be a dealbreaker under our SLA.
From a business perspective, I can only imagine what Cursor might charge for this. Considering that other major vendors demand six-figure sums for database support, it would be logical for Cursor to adopt a similar pricing strategy.
It’s really frustrating that Cursor doesn’t prioritize this more. I use Cursor at home but can’t use it at work because a lot of our code was created in partnership with another company like half a century ago, and it’s too much of a mess for our legal department to get permission to share it outside the company. It’s not that we’re hypersensitive about leaks, we just literally can’t do it.
And locally hosted models keep getting better, and so does the hardware. In May next year, for example, NVIDIA will release its $3k “DIGITS” AI computer with 128GB of unified memory on a top-end Blackwell chip, so we’re talking about the ability to run truly massive coding models. Any company could buy one of those and serve dozens, if not hundreds, of programmers.
A lot of the magic of Cursor happens on our servers, which process your requests before sending them off to an LLM. Therefore, locally hosted models running on your own machine won’t work in the current setup, because our servers can’t see them!
If you are happy to make your self-hosted LLM public-facing (secured with an API key), you should be able to override the OpenAI base URL with your own and get this to work.
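For anyone trying the override above, here is a minimal sketch of the kind of request Cursor would send to an OpenAI-compatible endpoint once the base URL is swapped. The address, model name, and helper function are illustrative assumptions (e.g. an Ollama or llama.cpp server at `http://localhost:11434/v1`), not anything Cursor itself documents; it’s just a quick way to sanity-check that your endpoint speaks the `/chat/completions` protocol before pointing Cursor at it.

```python
import json
import urllib.request

# Assumed local OpenAI-compatible server (substitute your own host/port).
BASE_URL = "http://localhost:11434/v1"
API_KEY = "any-string"  # many local servers accept any bearer token

def build_chat_request(base_url, api_key, model, prompt):
    """Build the POST an OpenAI-style client sends to /chat/completions."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_chat_request(BASE_URL, API_KEY, "llama3", "Say hello.")
# Uncomment to actually hit your server and print the raw JSON reply:
# print(urllib.request.urlopen(req).read().decode())
```

If that request returns a normal chat completion, the same base URL should be usable in Cursor’s OpenAI override settings.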
I’ve disabled all other models via the checkboxes, as someone else suggested. My local model does not require an API key (it will accept any), though I’ve also tried supplying one. Is this bugged, or am I doing something wrong?
Found the fix, in case anyone else hits the same blocker: apparently you have to give it a valid OpenAI key first. It’ll then show the toggle, and after that you can put in whatever key and base URL you want.