Block access to credential files (*.env, *.env.local) to prevent AI exposure

Summary

To enhance security and prevent accidental exposure of sensitive credentials to AI systems, Cursor should block AI access to environment files containing credentials by default.

Details

  • Block access to *.env files
  • Block access to *.env.local files
  • Block any other files that typically store credentials or secrets
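As an interim mitigation, such patterns can be listed in a project's .cursorignore, which follows .gitignore-style syntax. A minimal sketch (the patterns beyond the .env variants are illustrative examples, not part of the original request):

```
# .cursorignore — .gitignore-style patterns (sketch)
.env
.env.*
*.pem
*.key
credentials.json
```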

Security Impact

  • Prevents accidental leakage of credentials to AI systems
  • Reduces risk of unauthorized access to sensitive information
  • Aligns with security best practices for handling environment variables

Related Items

  • Consider adding similar protection for other credential storage files
  • Document the blocked file patterns in security documentation
27 Likes

You could just add it to .cursorignore, but yeah, it’d be better if they didn’t even index it by default, so we can trust it.

3 Likes

The .cursorignore file defines which files are excluded from codebase indexing for reference purposes. However, even if files are listed there, the AI can still read them. It’s crucial to prevent the AI from accessing credentials and confidential information.

3 Likes

100% agreed. This is absolutely critical for being able to use Cursor to build real production software.

.cursorignore does not seem to work very well. I’ve been very diligent about adding .env there, but I’ve noticed multiple instances of the Cursor agent writing to .env and even adding .env to the context via the context file picker.

3 Likes

Here is an example of a Project Rule (denying access to .env, .env.local, etc.) for when you want this to be the behavior by default but have to configure it manually.
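The attached rule is not reproduced in this export; as a sketch, a rule along these lines (file name and exact frontmatter are assumptions, following Cursor's `.cursor/rules/*.mdc` convention) could read:

```
---
description: Never read or write credential files
alwaysApply: true
---

- Do NOT read, open, or include .env, .env.local, .env.*, or any other
  credential/secret file in context.
- Do NOT write to these files. If a change is needed, describe it and
  let the user apply it manually.
```

Note that, as later replies point out, an instruction like this only helps if the model obeys it; it does not by itself stop the file from being transmitted.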

1 Like

@kinopee What will happen is that this will be sent to OpenAI (or whatever provider):

my_secret=123

***DO NOT READ!***
...

Just because the AI says it is not allowed to read it does not mean that the file has not already been sent.

I find the handling of .env an absolute deal breaker:

Related. .cursorignore etc. do nothing for me: CRITICAL .env files are ingested and send to servers - Security Breach

2 Likes

Custom instructions are sent before the agent begins processing, so they are an effective safeguard as long as the model follows them. However, this should be the default behavior.

Theory and practice differ. I could easily check the traffic using a proxy. It is sent in plaintext to the count-token endpoint (and probably in binary form in several other requests, but it seems to use protobuf or something similar for communication).

Here is me triggering a Chat (which automatically added the .env file as context)

User Rule:

User Rule + Your suggested project rule


2 Likes

Thanks for checking!

Thank you Cursor team for the improvement on this issue!

Hello, I’d like to ask about this wording. So it actually means that there’s still no 100% guarantee that excluded files won’t be sent, right?

1 Like


As far as I have verified, in version 0.47, files listed in .cursorignore are invisible to AI, so they should not be transmitted.

1 Like

An invisible icon is also attached to files in Explorer.

But only with a specific .cursorignore, NOT by default (which it should be!).

1 Like

As a corollary, there should be a secure, entirely client-side alternative for defining and injecting environment variables or secrets into the Cursor agent. This would maintain security while still allowing the agent to function effectively in agent mode when executing commands.
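One way such client-side injection could look today, as a plain POSIX shell sketch (the helper name is made up for illustration):

```shell
# run_with_env: load KEY=value pairs from a local .env into the
# environment for a single command only, so the secrets are consumed
# at run time and never need to enter the editor or agent context.
run_with_env() {
  set -a        # auto-export every variable defined while sourcing
  . ./.env      # .env must contain plain KEY=value lines
  set +a
  "$@"          # run the requested command with the variables set
}
```

Usage would be e.g. `run_with_env npm start` (the command is an arbitrary example).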

1 Like

I had the same issue with .env files here: Cursor reading .env files. And it is still doing that.

Environment files like .env and .env.local contain sensitive information such as API keys and database credentials. Allowing AI tools to access these files by default poses significant security risks, including potential unauthorized access and credential leakage. To mitigate these risks, it’s essential that Cursor’s default settings prevent AI from reading such files. Relying solely on users to configure .cursorignore files isn’t sufficient, as this approach can lead to accidental exposure of confidential data. Implementing a default block on access to these credential files aligns with security best practices and enhances overall trust in the platform.

2 Likes

I want to expand on problematic scenarios that could lead to secrets being leaked, as long as those files aren’t blocked by default:

  1. A user accidentally opens the wrong folder or one that already contains sensitive information.
  2. A Git repository contains sensitive data that the user was unaware of (e.g., a private company-internal repo created by a colleague).
  3. A Git repository unexpectedly includes relative symlinks to other critical directories (imagine a compromised library creating a symlink to ~/.ssh).
  4. A script, an attacker, or even the user (intentionally or accidentally) runs `cursor someDir` in the shell.
  5. The user is simply unaware of this behavior or hasn’t been properly informed.
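Scenario 3 in particular can be defended against mechanically: resolve symlinks before reading a file and refuse any path whose real location escapes the project root. A minimal sketch of such a check (a hypothetical helper, not Cursor's actual code; requires Python 3.9+ for `is_relative_to`):

```python
from pathlib import Path

def is_safe_to_read(path: str, project_root: str) -> bool:
    """Resolve symlinks and reject any file whose real location lies
    outside the project root (e.g. a symlink pointing into ~/.ssh)."""
    real = Path(path).resolve()          # follows symlinks
    root = Path(project_root).resolve()
    return real.is_relative_to(root)
```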

I strongly believe this isn’t a matter of perspective—people make mistakes, and good security concepts should account for that by making it difficult to make such mistakes in the first place.

Ultimately, all it takes is a brief moment of failing to manually ignore these files for them to be transmitted, at which point they should be considered compromised.

I also want to highlight that while Cursor states these files are not stored in private mode, the fact that they are transmitted at all is already a security risk. Even if one trusts Cursor to handle these files responsibly, other attack vectors, such as a man-in-the-middle attack, still exist.

Lastly, the benefits of processing these files for the overall experience are negligible. Most frameworks and languages already reference .env keys in their codebase through constants, making this feature largely unnecessary.
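To make the point about constants concrete: typical application code references only the key *names*, which are already visible in the codebase, while the values stay in the runtime environment. A generic illustration (names and defaults are made up):

```python
import os

# The source reveals only the names of the secrets; the values are
# resolved from the process environment at runtime, so an assistant
# reading this file learns nothing confidential.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
API_KEY = os.environ.get("API_KEY", "")  # empty outside deployment
```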

2 Likes