The Cursor agent performed an action that was explicitly prohibited by a User/Global rule. The agent performed the action, then acknowledged that it was incorrect and referenced the rule.
The agent's response to this action in the summary:
“npx supabase db push — this connected to the remote DB and applied migrations (the CLI prompted and then proceeded). This violates your “local-only” rule; I’m flagging it immediately so you can verify remote state.”
Steps to Reproduce
I prompted the agent with a plan, looked it over, and then hit code/implement.
Used GPT 5.2 thinking for the plan and GPT 5.2 codex extra high for coding.
Expected Behavior
The agent is supposed to read the Cursor rules and abide by them. The agent here clearly did not review the rules before implementing its changes.
hi @blooming, welcome to the Cursor Forum, and thanks for taking the time to write such a detailed report.
Models can behave unexpectedly at times.
Why agents can make mistakes:
Non-deterministic behavior: the same prompt can sometimes lead to different outputs, so the model may not always follow rules in exactly the same way.
Long chats: longer conversations can introduce conflicting information into the context window, causing the model to overlook or deprioritize some instructions.
Too many rules: a large number of rules can make it harder for the model to reliably identify which constraints are most important.
Conflicting rules: overlapping or contradictory instructions can confuse the model about which behavior to prioritize.
Imperative statements: many strong “do X / don’t do Y” commands at once can make it harder for the model to choose which imperative to follow.
Negative statements: “don’t do X”–style rules are often harder for models than positive phrasing, partly because training data contains many examples of the forbidden behavior.
Agent rules tend to work best when:
Chats stay relatively short and are focused on a single, clearly defined task.
Rules are phrased positively and include a short explanation, for example: “Always treat Supabase as local-only to avoid breaking the production database.”
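For instance, you could put that guidance in a dedicated rule file. The sketch below assumes a project rule at `.cursor/rules/supabase-local-only.mdc` with frontmatter fields as I understand the rules format; verify the exact fields against your Cursor version:

```
---
description: Keep all Supabase work local-only to protect production
alwaysApply: true
---
Always treat Supabase as local-only to avoid breaking the production database.
Apply migrations with `npx supabase migration up` against the local stack;
`npx supabase db push` targets the linked remote project and must not be run.
```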
Check with your framework provider what they recommend for local development with AI and how they prevent agents from running destructive commands. I know that some frameworks do this, so it's worth asking!
Additionally, I suggest:
Add a DB planning skill to your skills that reinforces local-only database changes.
Do not give agents access to production databases; if you genuinely need production data, restrict the agent to a read-only user.
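If you do go the read-only route, a Postgres role sketch could look like the following (role, password, database, and schema names are placeholders; adjust the grants to your actual schema):

```sql
-- Sketch: a read-only role for agent access (names are placeholders)
CREATE ROLE agent_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE app TO agent_readonly;
GRANT USAGE ON SCHEMA public TO agent_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO agent_readonly;
-- Ensure tables created later are readable too
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO agent_readonly;
```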
To further reduce risk, you can add a beforeShellExecution hook that inspects shell commands and blocks anything dangerous (for example, any supabase db push that is not explicitly allowed), so the command never reaches your production environment even if the agent suggests it.
Quick Setup
Create your hooks configuration file at either:
Project level: <project>/.cursor/hooks.json (applies only to that project)
User level: ~/.cursor/hooks.json (applies globally)
Configure the hook so that every shell command the agent wants to run is passed to a checker script:
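As a sketch, the hooks.json could look like this (field names follow the Cursor hooks docs as I understand them, and the script path is an assumption; verify both against your version):

```json
{
  "version": 1,
  "hooks": {
    "beforeShellExecution": [
      { "command": "./.cursor/hooks/block-command.sh" }
    ]
  }
}
```

Then create the script it points to, for example block-command.sh: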
#!/bin/bash
# block-command.sh - prevents specific commands from running

# Read the hook's JSON input from stdin
input=$(cat)

# Parse the command string out of the JSON (requires jq)
command=$(echo "$input" | jq -r '.command // empty')

# Check for dangerous patterns; a substring match also catches a direct
# `supabase db push` without the `npx` prefix
if [[ "$command" == *"supabase db push"* ]]; then
  # Block the command
  cat << EOF
{
  "permission": "deny",
  "user_message": "This command has been blocked for safety",
  "agent_message": "The command '$command' has been blocked because it matches a restricted pattern."
}
EOF
else
  # Allow the command
  cat << EOF
{
  "permission": "allow"
}
EOF
fi
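Before wiring the hook into Cursor, you can sanity-check the decision logic by exercising the same parse-and-match steps on sample hook input. The `check` helper below exists only for this demo; it mirrors the script's jq parse and pattern test (requires jq):

```shell
# Mirror of the hook script's decision logic, for local testing (requires jq)
check() {
  local cmd
  # Same extraction the hook performs on its stdin JSON
  cmd=$(printf '%s' "$1" | jq -r '.command // empty')
  if [[ "$cmd" == *"supabase db push"* ]]; then
    echo deny
  else
    echo allow
  fi
}

check '{"command":"npx supabase db push"}'   # deny
check '{"command":"git status"}'             # allow
```

If both lines print as expected, the pattern is doing its job and you can point hooks.json at the real script.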