Has anybody had to try to clean up after having Privacy Mode disabled?

So, I naively used Cursor “out of the box” with Privacy Mode disabled and the agent set to “auto”. I am on the Pro plan. I used Cursor to update TypeScript types throughout my codebase, and I am now trying to understand how much of my codebase is “out there” and what can be done about it.

Specifically, I am trying to understand:

  • What services processed the codebase?
  • What parts of our codebase, or information about it, are stored, and where?
  • What can we do to remove any stored information or code related to our codebase?

The following is what I got from Cursor’s AI email response bot. Is there somebody at Cursor I can speak with to better understand the response? Is my codebase just out there and visible for others to see at this point? It sounds like it is not just floating around in plain text, and perhaps only limited information was even sent to third-party providers. I am hoping to get confirmation of this and direction on next steps, either from somebody at Cursor or from somebody who has dealt with this.

Can somebody at Cursor determine how much of my codebase was even sent to them in the first place? This would be very helpful to know!

Obviously, I have learned my lesson and now know Privacy Mode exists!

Here is the response I got from the auto-email, but it still doesn’t really explain the actual impact in simple terms.

Services that processed the code:

  • Cursor’s servers (AWS in the US)

  • Third-party AI providers (OpenAI, Anthropic, Google, and/or xAI via “auto” model)

  • Turbopuffer (Google Cloud in the US) for embeddings storage

What’s stored and where:

  • Mathematical embeddings (vector representations) stored in Turbopuffer

  • Obfuscated file paths and line numbers

  • Plaintext code “ceases to exist after the life of the request” - only embeddings remain

  • Prompts and limited telemetry may have been shared with model providers

Data removal options:

  1. Automatic: Indexed codebases are deleted after 6 weeks of inactivity

  2. Immediate: Delete the Cursor account (complete removal within 30 days)

Third-party providers: You don’t need to contact them separately - they handle API requests per their standard data retention policies.

Hey, thanks for the question - I understand your concern.

The auto-reply info is accurate. Let me confirm the key points from our docs:

What was sent:

  • For indexing: code was temporarily processed to create embeddings (vector representations), then deleted
  • For AI requests (in “auto” mode): prompts and code context were sent to model providers (OpenAI, Anthropic, Google, xAI)

What we store:

  • Only encrypted embeddings (math vectors that can’t be turned back into source code)
  • Obfuscated file paths and line numbers (see the sketch after this list)
  • Plaintext code is NOT stored on our servers
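
To make “obfuscated file paths” more concrete, here is a rough TypeScript sketch of the general idea (illustrative only, not our exact implementation): each path segment is replaced by a one-way keyed hash, and the key stays on your machine, so the stored identifiers can’t be mapped back to your real file or folder names.

```typescript
import { createHmac, randomBytes } from "node:crypto";

// Illustration only: a generic way to obfuscate a file path so a server
// never sees real directory or file names. The secret stays on the
// developer's machine, so the stored identifiers can't be reversed.
const clientSecret = randomBytes(32); // never leaves the client

function obfuscatePath(path: string): string {
  return path
    .split("/")
    .map((segment) =>
      createHmac("sha256", clientSecret).update(segment).digest("hex").slice(0, 12)
    )
    .join("/");
}

// "src/types/user.ts" becomes something like "3f9c.../a71b.../90de..."
console.log(obfuscatePath("src/types/user.ts"));
```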

Your code isn’t “publicly accessible” - embeddings are encrypted and can’t be converted back to source code. Model providers handle requests per their standard policies (without Privacy Mode we can’t guarantee Zero Data Retention, but they don’t make your code publicly available).
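
To illustrate why, here is a deliberately simplified toy in TypeScript (a stand-in for the real embedding models, which are far more sophisticated): the only thing persisted is a fixed-length array of numbers, and the mapping is lossy, so there is no function that turns the array back into your source code.

```typescript
// Toy stand-in for a real embedding model, just to show the shape of what
// gets stored: a fixed-length vector of numbers, not the text itself.
type Embedding = number[];

function toyEmbed(chunk: string, dims = 8): Embedding {
  const vector: Embedding = new Array(dims).fill(0);
  for (let i = 0; i < chunk.length; i++) {
    // Fold character codes into a small numeric summary (lossy by design).
    vector[i % dims] += chunk.charCodeAt(i) / chunk.length;
  }
  return vector.map((v) => Number(v.toFixed(3)));
}

// An embeddings store holds only this kind of array of numbers; many
// different inputs map to similar vectors, so the original source text
// cannot be recovered from it.
console.log(toyEmbed("export type User = { id: string; name: string };"));
```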

Deletion:

  • Automatically after 6 weeks of inactivity
  • Immediately via account deletion (full removal within 30 days)

Docs: Privacy and Data Governance | Cursor Docs

Let me know if you have more questions.

Hello @deanrie,

Thank you for your response. I do have a couple more questions:

  1. On the Cursor security page, with “Share Data” selected for Anthropic, Vertex, and xAI, it says “We have a zero data retention agreement with Anthropic.” Would those agreements in fact apply to my situation, as the Security page indicates?

  2. Even if they do not apply in this situation, it looks like the only third-party provider that would store the data anyway is OpenAI. Is that correct?

  3. Is it possible to access the Cursor logs to see what data, and how much of it, was sent during that time?