Copilot++ prompt privacy

From what I can tell, if:

  • Privacy mode is enabled
  • Codebase index is turned off
  • I make sure not to have sensitive files active or @mentioned

then no sensitive data is sent to the Cursor server.

I’ve tested asking the chat for the secrets in sensitive files, and no secret was leaked as long as I didn’t have the file open or explicitly @mention it.

BUT:
Sometimes, when I typed “secret” or “env”, Copilot++ suggested the secret variable, with the correct sensitive value.

My question: does this happen entirely locally, or do the Copilot++ suggestions come from the Cursor server?
If they come from the server, does that mean my sensitive data made a round trip to the Cursor server, with the risk of being captured in a log file or retained as part of a prompt?

Regards

You can check here: Secrets and Credentials

There is no AI model that runs locally; everything runs either on OpenAI’s servers or, for Cursor’s own models (like Copilot++), on their own servers. With privacy mode they don’t store the prompt, but that doesn’t mean it never reaches their servers to be processed.

That’s one reason people would like some of those models to run locally.

I understand and share that concern. But putting secrets in a codebase is bad practice anyway; a .env file is fine for local dev, but I would not use that in production.
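
To illustrate what I mean, here is a minimal sketch (Python, with a hypothetical `MY_SERVICE_API_KEY` name): the secret is read from the process environment at runtime, so the value never has to live in a tracked source file that an editor or AI assistant might pick up.

```python
import os

def get_api_key() -> str:
    # Read the secret from the process environment at runtime.
    # In production the value would be injected by the deployment
    # platform (container env, secret manager, CI variable, etc.),
    # never committed to the repository.
    api_key = os.environ.get("MY_SERVICE_API_KEY")  # hypothetical name
    if not api_key:
        raise RuntimeError(
            "MY_SERVICE_API_KEY is not set; configure it in the "
            "environment instead of a tracked source file."
        )
    return api_key

if __name__ == "__main__":
    print("API key loaded:", bool(get_api_key()))
```

For local dev, the same code path works if the shell (or a tool like python-dotenv) loads a git-ignored .env file into the environment, so nothing sensitive is hardcoded.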