Hi Cursor team, thank you for the great product. It can definitely boost our productivity.
Currently, I am using Cursor only for my personal open-source projects, since I don’t fully understand the security and compliance risks for business use cases. I assume many other developers have the same concerns, so it would be great to have an official document answering security and compliance questions for each pricing tier. Since we can use our own OpenAI API key even on the free tier, I would also like to know best practices for using Cursor securely in our workloads.
Ultimately, we want to know how businesses can use Cursor securely and compliantly on each pricing tier. For instance:
What data is sent to Cursor if we use our own OpenAI key on the free tier?
How can we manage authentication for members who use Cursor at our company (e.g., via Google SSO)?
Re Question 1: If you’re maximally security-focused, the two settings to toggle are “Local/Privacy Mode” (hit the gear in the top right) and “Indexing new repos by default” (More tab on the right bar, then click into settings). Details on each:
When turned on, Privacy Mode will ensure that absolutely no code data is stored on our servers.
If you turn off indexing by default, we won’t compute embeddings over new codebases you open. (The embedding step uploads your code to our server piece by piece in chunks; nothing persists past the life of the request, but we understand that some folks might not even want this. A rough sketch of this chunk-and-upload step follows the summary below.)
In summary, with Privacy Mode on and default indexing off, we’ll only send 100-300 lines of code to our server when you invoke the AI, and none of this code will be stored anywhere at rest.
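For anyone who wants a concrete mental model, here is a minimal sketch of what that indexing step could look like on the client side. The endpoint URL, chunk size, and payload shape are illustrative assumptions, not Cursor’s actual API:

```typescript
// Hypothetical sketch of the indexing step described above: split each
// file into line-based chunks and upload them one at a time to an
// embedding endpoint. All names below are assumptions for illustration.
const EMBEDDING_ENDPOINT = "https://example.invalid/embed"; // assumed, not a real Cursor URL
const LINES_PER_CHUNK = 50; // chunk size is illustrative only

function chunkByLines(source: string, linesPerChunk: number): string[] {
  const lines = source.split("\n");
  const chunks: string[] = [];
  for (let i = 0; i < lines.length; i += linesPerChunk) {
    chunks.push(lines.slice(i, i + linesPerChunk).join("\n"));
  }
  return chunks;
}

async function indexFile(path: string, source: string): Promise<void> {
  for (const chunk of chunkByLines(source, LINES_PER_CHUNK)) {
    // Per the answer above, each chunk is sent for embedding but is not
    // persisted server-side after the request completes.
    await fetch(EMBEDDING_ENDPOINT, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ path, chunk }),
    });
  }
}
```

Turning off “Indexing new repos by default” simply means a flow like this never runs for new codebases.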
Thanks for the answer here. It’s useful to know that these options are available. But as we go through the procurement process, I can’t point our legal team to a forum answer. Do you have a document, along with your terms of service, that we can send to our security and compliance teams?
Thank you for answering my question; that’s good to know. I noticed the privacy policy describes what you explained, which would be good material for many Cursor users to convince their security and compliance teams. As mzanchi said, it would also help to provide a detailed document about Cursor’s behavior with each setting and with our own OpenAI API key.
TLDR (a rough sketch of this flow follows the list):
Every time you use the AI features, we start by finding the most relevant parts of your code locally (~10-300 lines of code).
We then beam this chunk of code up to our (Cursor’s) servers and insert it into a prompt that is sent to an OpenAI model.
We have a signed agreement with OpenAI that says they will not train on these prompts and will persist them for only 30 days, for trust and safety monitoring.
By default, we (Cursor) persist this prompt on our servers to improve Cursor. To turn this off, simply click the settings gear in the top right of Cursor, then Advanced, and turn on Privacy Mode.
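To make the flow above concrete, here is a minimal client-side sketch of a single AI invocation. The endpoint, payload shape, and relevance heuristic are illustrative assumptions, not Cursor’s real implementation (the actual local relevance step is more sophisticated than a window around the cursor):

```typescript
// Hypothetical sketch of one AI invocation as described in the TLDR:
// select a small, relevant snippet locally, send only that snippet to
// the server, and get back the model's answer. All names are assumed.
const COMPLETION_ENDPOINT = "https://example.invalid/complete"; // assumed

// Naive stand-in for the local relevance step: take a window of lines
// around the cursor, capped to the ~10-300 line range mentioned above.
function selectContext(source: string, cursorLine: number, maxLines = 300): string {
  const lines = source.split("\n");
  const start = Math.max(0, cursorLine - Math.floor(maxLines / 2));
  return lines.slice(start, start + maxLines).join("\n");
}

async function invokeAI(source: string, cursorLine: number, question: string): Promise<string> {
  const context = selectContext(source, cursorLine);
  // Only the selected snippet leaves the machine; the server builds the
  // actual prompt and forwards it to an OpenAI model.
  const res = await fetch(COMPLETION_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ context, question }),
  });
  const { completion } = (await res.json()) as { completion: string };
  return completion;
}
```

Whether the snippet (wrapped into a prompt) is persisted on our side afterwards is exactly what the Privacy Mode toggle above controls.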
@truell20 Sorry, one more thing: it would be great to offer an agreement such as a data protection agreement (DPA) on the commercial tier, as many SaaS companies do. Then customers could use Cursor with no concerns.