Mandatory Pre-Action Rules Gate

Feature request for product/service

Cursor IDE

Describe the request

Feature Request: Mandatory Pre-Action Rules Gate

Summary

Add a mechanism to enforce that the AI reads and acknowledges specific rules before executing any tool call. Currently, rule files are included in the context, but nothing enforces that the AI actually follows them.

The Problem

Current Behavior

  • .cursor/rules/*.mdc files with alwaysApply: true are included in context
  • The AI sees these rules but can (and does) ignore them
  • There’s no enforcement, acknowledgment, or verification step

Real-World Impact (From Today’s Session)

In a single debugging session, the AI:

  1. Made a hasty fix without diagnosing root cause
  2. That fix broke something else
  3. Made another hasty fix
  4. Created infinite recursion in database policies
  5. Created function overload conflicts
  6. Required 6 migrations to fix what started as 1 problem

All of this happened despite having rules that said “inventory all errors before fixing.”

The rules were in context. The AI just didn’t follow them.

Proposed Solution

Option A: Pre-Action Acknowledgment Gate

Add a new rule type that blocks tool execution until the AI confirms it has followed the checklist:

# .cursor/rules/safety-gate.mdc
---
type: pre-action-gate
enforceBeforeTools: [write, search_replace, run_terminal_cmd]
---

Before taking action, confirm:
- [ ] I have inventoried ALL errors
- [ ] I have identified the root cause
- [ ] I have considered what this change might break
- [ ] I am making ONE targeted fix, not a sweeping change

When the AI tries to use write, search_replace, or run_terminal_cmd, Cursor would (see the sketch after this list):

  1. Pause the tool execution
  2. Display the gate checklist
  3. Require the AI to output acknowledgment
  4. Only then allow the tool call
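
To make the flow concrete, here is a rough TypeScript sketch of that interception loop. This is an illustration of the proposal, not Cursor's actual architecture; every name in it (GateRule, ToolCall, executeWithGates, askModel, runTool) is invented for the example.

// Hypothetical sketch of Option A's pre-action gate. None of these types
// or functions exist in Cursor today; they only illustrate the proposed flow.
interface GateRule {
  file: string;                 // e.g. ".cursor/rules/safety-gate.mdc"
  enforceBeforeTools: string[]; // tool names this gate intercepts
  checklist: string[];          // items the model must acknowledge
}

interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

async function executeWithGates(
  call: ToolCall,
  gates: GateRule[],
  askModel: (prompt: string) => Promise<string>, // one extra model turn
  runTool: (call: ToolCall) => Promise<unknown>, // the real tool executor
): Promise<unknown> {
  for (const gate of gates) {
    if (!gate.enforceBeforeTools.includes(call.tool)) continue;

    // Steps 1-2: pause execution and surface the checklist to the model.
    const prompt =
      `Before calling ${call.tool}, acknowledge each item from ${gate.file}:\n` +
      gate.checklist.map((item) => `- [ ] ${item}`).join("\n");
    const reply = await askModel(prompt);

    // Step 3: require an explicit acknowledgment of every checklist item.
    const acknowledgedAll = gate.checklist.every((item) => reply.includes(item));
    if (!acknowledgedAll) {
      throw new Error(`${call.tool} blocked: safety checklist not acknowledged`);
    }
  }
  // Step 4: only now is the tool call allowed through.
  return runTool(call);
}

The important property is that the gate sits in the tool-execution path rather than in the prompt, so compliance is verified by the IDE instead of being left to the model's attention.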

Option B: Mandatory Context Injection

Add a setting that injects specific text at the start of every AI turn, not just in the initial context:

{
  "mandatoryPrefix": "Before responding, review .cursor/rules/safety.md"
}

This would appear in every turn, making it impossible to “forget” mid-conversation.
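
A minimal sketch of what that injection could look like, assuming a hypothetical message-assembly step inside Cursor (ChatMessage and withMandatoryPrefix are invented names; only the "mandatoryPrefix" key mirrors the setting above):

// Hypothetical sketch of Option B. The function re-injects the configured
// prefix at the start of EVERY turn, so it cannot scroll out of attention.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function withMandatoryPrefix(
  turn: ChatMessage[],
  settings: { mandatoryPrefix?: string },
): ChatMessage[] {
  if (!settings.mandatoryPrefix) return turn;
  return [{ role: "system", content: settings.mandatoryPrefix }, ...turn];
}

// Usage: applied when assembling each turn, not just the initial context.
const conversationSoFar: ChatMessage[] = [
  { role: "user", content: "Fix the failing migration" },
];
const messages = withMandatoryPrefix(conversationSoFar, {
  mandatoryPrefix: "Before responding, review .cursor/rules/safety.md",
});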

Option C: Tool Call Validation

Add a validation layer that can reject tool calls based on rules:

# .cursor/rules/migration-safety.mdc
---
validateToolCalls:
  - tool: run_terminal_cmd
    pattern: "supabase db push"
    require: "The AI must first list what policies will be affected"
---
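
As a sketch of how such a layer could evaluate the rule above (ToolCallValidator and validateToolCall are invented for illustration; only the rule fields mirror the .mdc example):

// Hypothetical sketch of Option C's validation layer. Nothing here is an
// existing Cursor API; the fields mirror the proposed .mdc rule above.
interface ToolCallValidator {
  tool: string;    // e.g. "run_terminal_cmd"
  pattern: string; // regex the tool input must match for the rule to apply
  require: string; // precondition the AI must have satisfied first
}

function validateToolCall(
  tool: string,
  input: string,
  satisfied: (requirement: string) => boolean, // e.g. inspects prior output
  validators: ToolCallValidator[],
): { allowed: boolean; reason?: string } {
  for (const v of validators) {
    if (v.tool !== tool) continue;
    if (!new RegExp(v.pattern).test(input)) continue;
    if (!satisfied(v.require)) {
      return { allowed: false, reason: v.require };
    }
  }
  return { allowed: true };
}

// Example: "supabase db push" is rejected until the precondition is met.
const verdict = validateToolCall(
  "run_terminal_cmd",
  "supabase db push",
  () => false, // stand-in: the AI has not yet listed the affected policies
  [{
    tool: "run_terminal_cmd",
    pattern: "supabase db push",
    require: "The AI must first list what policies will be affected",
  }],
);
// verdict is { allowed: false, reason: "The AI must first list ..." }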

Why This Matters

  1. Database migrations are destructive - A bad migration can corrupt production data
  2. AI has short-term memory issues - Even with rules in context, the AI can get “tunnel vision”
  3. Users shouldn’t have to enforce rules manually - Saying “remember to follow the rules” defeats the purpose of having rules
  4. Trust requires guardrails - Users can’t trust an AI assistant that ignores its own safety rules

Current Workarounds (Inadequate)

  • Put rules at top of file - AI still ignores them when focused on a problem
  • Use /command prompts - User has to remember to invoke them
  • Add to multiple files - More context ≠ more compliance
  • Bold/emoji warnings - Aesthetic, not enforceable

Implementation Suggestion

The simplest implementation:

  1. New rule property: enforceAcknowledgment: true
  2. When this rule is in context and the AI attempts a tool call:
    • Cursor injects: “Before proceeding, you must acknowledge the safety rules in [file]”
    • AI must output the acknowledgment in its response
    • Only then is the tool call executed

This adds minimal friction while ensuring critical rules can’t be silently ignored.
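
A sketch of that minimal check, again with invented names (only enforceAcknowledgment is the proposed property):

// Hypothetical sketch of the minimal enforceAcknowledgment flow described
// above. The Rule shape and both helper functions are invented for illustration.
interface Rule {
  file: string;
  enforceAcknowledgment?: boolean;
}

function acknowledgmentPrompt(rule: Rule): string {
  return `Before proceeding, you must acknowledge the safety rules in ${rule.file}`;
}

function mayExecuteToolCall(rule: Rule, modelResponse: string): boolean {
  if (!rule.enforceAcknowledgment) return true;
  // The tool call runs only if the model's response contains an explicit
  // acknowledgment; otherwise Cursor would re-prompt and hold the call.
  return modelResponse.includes(
    `I acknowledge the safety rules in ${rule.file}`,
  );
}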

User Impact

  • Developers with complex projects - Can trust that safety rules are followed
  • Teams - Can enforce coding standards through mandatory gates
  • Solo developers - Can protect themselves from AI mistakes

Submitted by: [Your name]
Date: December 31, 2024
Cursor Version: [Your version]
Project Context: Multi-tenant SaaS with 180+ database migrations

Operating System (if it applies)

Windows 10/11

Hey, thanks for the detailed feature request.

This is a really interesting idea. The issue of the AI ignoring rules is known, and there are a few similar reports on the forum.

Right now, there is no mechanism to force the model to confirm rules before making tool calls. The options you suggested (pre-action gate, mandatory context injection, tool call validation) would be new functionality that does not exist yet.

A few current approaches that might help (even if they are not perfect):

  • Use more direct imperatives at the start of your rules (“STOP. Before any action…”)
  • Break complex tasks into smaller steps and ask for an explicit confirmation at each step
  • For critical operations (like DB migrations), use manual confirmation for every tool call

Your feature request is well argued and includes clear implementation ideas.

Thanks for the thoughtful response. As you note, this is a significant gap with AI agents, and whoever solves it makes a strong case for their platform. There are also considerable positive ramifications for such a capability beyond IDE environments.
