Hi Cursor Team,
I am exploring Cursor Hooks to implement enterprise-level logging and sensitive data protection (DLP) within our organization. While the current hooks are promising, I’ve identified a few critical gaps regarding security and data privacy that I’d like to share.
1. Visibility into Local Tool Operations
Currently, Cursor utilizes various local tools (e.g., readFile, grep, listDir) to gather context. However, these operations seem to bypass the current hook system.
- The Concern: From an enterprise security perspective, we cannot log or audit when, or which, specific files are being accessed by these internal tools.
- Question: Is the team aware of this “blind spot” in logging, and are there plans to extend hooks to cover these local filesystem operations?
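To make the ask concrete, here is a minimal sketch of the kind of audit hook we would want to attach to these local tool calls. Everything here is hypothetical: the event field names (`tool`, `path`, `workspace`, `ts`) and the idea that Cursor would deliver one JSON event per tool invocation on stdin are assumptions about a future API, not the current one.

```python
import json
import sys
import time

def audit_entry(event: dict) -> dict:
    """Flatten a hypothetical tool-invocation event into an audit record."""
    return {
        "ts": event.get("ts", int(time.time())),
        "tool": event.get("tool"),          # e.g. "readFile", "grep", "listDir"
        "path": event.get("path"),          # file or directory the tool touched
        "workspace": event.get("workspace"),
    }

def main() -> None:
    # Assumed contract: one JSON event arrives on stdin per tool call;
    # we append a flat record to an append-only JSONL audit log.
    event = json.load(sys.stdin)
    with open("/var/log/cursor-audit.jsonl", "a") as log:
        log.write(json.dumps(audit_entry(event)) + "\n")

# main() would be the entry point when wired into a hooks configuration;
# it is not invoked here.
```

Even this small shape would let our SIEM answer "which files did the agent read, and when" without any vendor-side changes beyond emitting the events.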
2. Intercepting and Redacting Sensitive Data Before LLM Submission
I am particularly interested in the afterShellExecution event (and similar tool-output events). In many cases, a shell command might return sensitive information (PII, credentials, or proprietary data).
- The Requirement: We need a way to inspect the output of a shell execution before it is sent to the LLM context. If sensitive data is detected, we want the ability to redact or block that specific content from being uploaded to the model.
- Current Limitation: As far as I can tell, the current API does not support intercepting and modifying or blocking data before it reaches the LLM.
- Question: Are you considering adding a “pre-LLM submission” hook that allows for programmatic data filtering or blocking?
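For reference, this is roughly what such a redaction hook could look like on our side. This is a sketch under stated assumptions, not a description of the current API: the `output` field name, and the premise that a hook's stdout can replace the payload before LLM submission, are exactly the capabilities being requested. The patterns are illustrative; a real deployment would call a proper DLP engine.

```python
import json
import re
import sys

# Illustrative patterns only -- real DLP needs far broader coverage.
PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED:AWS_KEY]"),       # AWS access key IDs
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED:EMAIL]"),  # email addresses
]

def redact(text: str) -> str:
    """Replace every match of each sensitive-data pattern with a placeholder."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

def main() -> None:
    # Assumed contract: the afterShellExecution event arrives as JSON on
    # stdin, and whatever we print to stdout replaces the payload that is
    # sent to the model. Neither half of that contract exists today.
    event = json.load(sys.stdin)
    event["output"] = redact(event.get("output", ""))
    json.dump(event, sys.stdout)

# main() would be the entry point when registered as a hook script;
# it is not invoked here.
```

The key property we need from the platform is the second half of that contract: that the hook's output is authoritative, so redacted or blocked content never leaves the machine.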
We love the productivity Cursor provides, but these security features are essential for broader enterprise adoption. I would love to hear your thoughts, and whether these items are on your current roadmap.
Best regards,