Native Agent Compliance Verification (Auto-Critique Loops)

Feature request for product/service

Cursor IDE

Describe the request

Summary
Introduce a native “Compliance Verification” phase where the Agent automatically critiques and self-corrects its own output against a set of strict project rules before presenting the response to the user.


The Problem: “Compliance Drift”

We maintain a strict AGENTS.md (or .cursorrules) file defining architectural patterns (e.g., “No state in SwiftUI Views,” “Always use our wrapper for Logging”).
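
To make this concrete, here is an illustrative excerpt of what such a rules file might contain (the wording, and the AppLogger wrapper name, are hypothetical examples rather than our actual file):

```markdown
## Architecture Rules

- No state in SwiftUI Views: Views must not declare @State or
  @StateObject; all mutable state lives in the feature's Reducer.
- Always use our wrapper for Logging: call AppLogger.log(...)
  (hypothetical name) instead of print or os_log directly.
```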

While Cursor is excellent at reading these rules initially, “Compliance Drift” occurs frequently:

  1. Context Saturation: As the chat session grows, the Agent tends to prioritize recent conversation over the foundational rules.
  2. Laziness: The Agent often defaults to standard boilerplate (which might violate our specific patterns) instead of strictly adhering to our custom architecture.
  3. The “Human Linter” Fatigue: Users currently have to act as linters, constantly reminding the Agent: “You forgot rule #3 again.”

Current Workaround (The Manual Loop)

To solve this, we developed a manual workflow that is effective but tedious:

  1. We run a local script: python3 scripts/verify_agent_compliance.py.
  2. This script parses our AGENTS.md, checks the git diff, and generates a Checklist Prompt (a rough sketch of the script follows this list).
  3. We paste this prompt into Cursor: “Verify your last changes against these specific rules…”
  4. Result: The Agent immediately catches its own mistakes (e.g., “Ah, I used Image instead of ImageView. Fixing now…”).
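
For reference, a minimal sketch of what that script can look like. This is illustrative only; the real parsing is more involved, and the one-rule-per-bullet format of AGENTS.md is an assumption here:

```python
#!/usr/bin/env python3
"""Illustrative sketch of scripts/verify_agent_compliance.py.

Assumes AGENTS.md lists one rule per top-level markdown bullet ("- ...").
"""
import re
import subprocess
from pathlib import Path


def load_rules(path: str = "AGENTS.md") -> list[str]:
    # Treat every top-level markdown bullet as one rule.
    text = Path(path).read_text(encoding="utf-8")
    return re.findall(r"^- (.+)$", text, flags=re.MULTILINE)


def changed_files() -> list[str]:
    # Files touched in the working tree, via `git diff --name-only`.
    result = subprocess.run(
        ["git", "diff", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [f for f in result.stdout.splitlines() if f]


def build_checklist_prompt(rules: list[str], files: list[str]) -> str:
    # The Checklist Prompt we paste back into Cursor.
    lines = ["Verify your last changes against these specific rules:", ""]
    lines += [f"{i}. {rule}" for i, rule in enumerate(rules, 1)]
    lines += ["", "Changed files:"] + [f"- {f}" for f in files]
    lines += ["", "For each rule, answer PASS or FAIL and fix every FAIL."]
    return "\n".join(lines)


if __name__ == "__main__":
    print(build_checklist_prompt(load_rules(), changed_files()))
```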

The insight is that the Agent IS capable of writing compliant code, but it often needs a second “Review/Critique” pass to enforce compliance strictly.

The Solution: Native Verification Hooks

We propose a native feature that automates this “Draft → Verify → Fix” loop.

How it could work (a code sketch follows the list):

  1. Define Verification Rules: Allow tagging specific rules in .cursorrules with a [verify] marker that flags them as strict constraints.
  2. The “Think” Phase:
    • User Prompt: “Create a new Login View.”
    • Agent: Generates Draft 1 internally.
  3. The “Compliance” Phase (Invisible to User):
    • The System runs Draft 1 against the defined Verification Rules.
    • System Prompt to Agent: “Does your code in LoginView.swift violate the rule ‘No state in Views’?”
    • Agent Internal Thought: “Yes, I used @State. I must refactor to use the Reducer.”
  4. Final Output:
    • The Agent presents the corrected, compliant code to the user.
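
In code terms, the loop we are asking for is roughly the sketch below. Everything here is hypothetical: Cursor exposes no such API today, and generate_draft / violates are stand-ins for internal Agent-runtime calls.

```python
"""Hypothetical sketch of the proposed Draft -> Verify -> Fix loop."""
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    text: str     # e.g. "No state in SwiftUI Views"
    strict: bool  # True if tagged [verify] in .cursorrules


def run_with_verification(
    prompt: str,
    rules: list[Rule],
    generate_draft: Callable[..., str],     # stand-in: model call
    violates: Callable[[str, Rule], bool],  # stand-in: self-critique check
    max_passes: int = 3,
) -> str:
    """Draft, then iterate Compliance checks until all strict rules pass."""
    draft = generate_draft(prompt)  # the "Think" phase: Draft 1
    for _ in range(max_passes):
        # The "Compliance" phase, invisible to the user: critique the
        # draft against every rule tagged [verify].
        violated = [r for r in rules if r.strict and violates(draft, r)]
        if not violated:
            break  # Final output: the draft is compliant
        # e.g. "Yes, I used @State. I must refactor to use the Reducer."
        draft = generate_draft(prompt, previous=draft, fix=violated)
    return draft
```

A cap like max_passes matters here: a rule the model cannot satisfy should not loop forever, and the last draft could then ship with a visible warning instead.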

Why this is better than “Custom Instructions”

Custom instructions (System Prompts) are passive: they compete for attention in the context window. A Verification Phase is active: it forces an iterative check after generation but before the response reaches the user.

Impact

  • Trust: Users can trust the Agent to handle complex, strict architectures without constant supervision.
  • Velocity: Removes the “Generate → Review → Reject → Prompt Again” cycle.
  • Enterprise Adoption: Critical for teams with strict coding standards (security, legal, architecture) who are currently hesitant to let AI generate code unchecked.