Concerns about Privacy Mode and Data Storage

Just adding the latest TL;DR version of the privacy policy, from:

https://www.cursor.com/privacy

TL;DR

  • If you enable “Privacy Mode” in Cursor’s settings, none of your code will ever be stored by us or any third party, except for OpenAI and Anthropic, which retain the prompts we send to them for 30 days for trust and safety. (Business plan users’ data will not be retained at all by OpenAI or Anthropic.)

  • If you choose to keep Privacy Mode off, we may save prompts and collect telemetry data to improve the product. If you use autocomplete, Fireworks (our inference provider) may also collect prompts to improve inference speed.

Other notes

  • Even if you use your API key, your requests will still go through our backend! That’s where we do our final prompt building.

  • If you choose to index your codebase, Cursor uploads it in small chunks to our server to compute embeddings, but all plaintext code ceases to exist once the request completes. The embeddings and metadata about your codebase (hashes, file names) may be stored in our database, but none of your code is. (A rough sketch of this flow follows below.)
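
To make that indexing flow concrete, here is a minimal TypeScript sketch of how such a pipeline could work. Everything in it is an assumption for illustration: the `embed` function, the 512-character chunk size, and the SHA-256 hash are placeholders I chose, not Cursor’s published implementation. The point is simply that only the embedding and non-reversible metadata get persisted, while the plaintext chunk is transient.

```ts
import { createHash } from "node:crypto";

// Hypothetical embedding call; stands in for whatever model the
// server actually uses. Returns a placeholder vector here.
async function embed(chunk: string): Promise<number[]> {
  return Array.from({ length: 8 }, () => Math.random());
}

interface StoredChunk {
  fileName: string;   // metadata that may be persisted
  chunkHash: string;  // hash of the plaintext, not the plaintext itself
  embedding: number[];
}

// Split a file into fixed-size chunks, embed each, and keep only the
// embedding plus non-reversible metadata. The plaintext chunk goes
// out of scope once the request completes.
async function indexFile(
  fileName: string,
  source: string,
  chunkSize = 512,
): Promise<StoredChunk[]> {
  const stored: StoredChunk[] = [];
  for (let i = 0; i < source.length; i += chunkSize) {
    const chunk = source.slice(i, i + chunkSize); // transient plaintext
    stored.push({
      fileName,
      chunkHash: createHash("sha256").update(chunk).digest("hex"),
      embedding: await embed(chunk),
    });
  }
  return stored; // no plaintext survives in what gets persisted
}
```

If something like this is what happens server-side, the design trade-off is that embeddings and hashes let the server match queries against your codebase without retaining readable source, though the exact details would need to come from the Cursor team.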

I don’t get the sense that the Cursor team are ‘bad guys’; they always seem open and transparent about what they are building, how they are doing it, and how their services work. But for their benefit, and that of users, I’d love to see a published checklist of business security ‘must-haves’ to facilitate adoption and onboarding, and to save everyone’s time in these discussions.

To be honest, I find that even the big players aren’t very good at explaining their security practices precisely. I’ve already had too many discussions that were essentially speculation about what, exactly, is occurring, because the docs aren’t clear enough.