Hello!
Are all models cursor uses just APIs? Or do they host any of their own?
I ask to clarify the “Auto” choice and also help my teams understand security implications/options for our usage.
(Disclaimer: this could all be completely wrong.)
My understanding is that Cursor built its own model specifically for applying the code edits that the conversational model returns.
For the conversational models, I would assume they host them through cloud providers (AWS for Claude and DeepSeek, Azure for OpenAI, GCP for Gemini) rather than calling those providers' APIs directly.
If security is a concern (it probably should be), you can configure Cursor fairly easily to connect to Claude models hosted in your own AWS account. That said, I'm pretty sure your data still goes through Cursor's servers in that setup.
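To make the "Claude in your own AWS account" idea concrete: in practice this means invoking Claude through Amazon Bedrock, so inference runs inside your account rather than against Anthropic's public API. Below is a minimal sketch using `boto3`'s `bedrock-runtime` client; the model ID and region are illustrative assumptions, and this shows the general Bedrock pattern, not Cursor's actual internal mechanism.

```python
import json


def build_bedrock_request(prompt: str, max_tokens: int = 512) -> dict:
    # Anthropic's messages-format payload that Bedrock expects for Claude models
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def invoke_claude(prompt: str, region: str = "us-east-1") -> str:
    # boto3 is imported lazily so the payload helper above works without AWS set up
    import boto3

    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.invoke_model(
        # Example Claude model ID on Bedrock; check your account for available models
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        body=json.dumps(build_bedrock_request(prompt)),
    )
    # Response body is a stream of JSON bytes; the text lives under "content"
    return json.loads(resp["body"].read())["content"][0]["text"]
```

The key point for security discussions is that with Bedrock the request terminates at an AWS endpoint in your own account, which is exactly what makes it attractive for teams with data-handling requirements; whether a given tool routes traffic through its own servers first is a separate question.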
Cursor uses OpenAI, Anthropic, Google, xAI, and Fireworks as model providers. Cursor-small is the only model they host themselves, I believe.
You can see this in the docs and on the security pages:
An AI request generally includes context such as your recently viewed files, your conversation history, and relevant pieces of code based on language server information. This code data is sent to our infrastructure on AWS, and then to the appropriate language model inference provider (Fireworks/OpenAI/Anthropic/Google). Note that the requests always hit our infrastructure on AWS even if you have configured your own API key for OpenAI in the settings.