Dynamic Guardrails in Plan Mode for Task-Specific AI Control

Feature request for product/service

Chat

Describe the request

The Problem: Lack of Fine-Grained, Task-Specific Constraints

Cursor’s “plan” mode is an excellent feature for scoping a complex task and breaking it down into manageable steps for the AI agent. It provides a crucial checkpoint for the user to validate the AI’s high-level approach.

However, a significant difficulty remains: while the plan (the “what”) might be correct, the implementation (the “how”) often deviates from the user’s specific instructions for that task, whether those instructions are explicit or implicit.

Currently, we rely on general, static rules to govern the AI’s behavior globally. These are great for broad safety, but they are insufficient for task-specific needs. A user might issue an instruction like:

“Refactor this service, but do not alter the existing public method signatures.”

“Optimize this function, but you must use a for-loop instead of Array.map for performance reasons.”

“Add a new feature, but do not introduce any new third-party dependencies.”

The AI agent, in its effort to provide the “best” solution, may override these instructions. It might decide a small API change is cleaner, that map is more idiomatic, or that a new library is the most efficient solution. This violates the user’s stated constraints and forces a frustrating cycle of re-prompting and manual correction. The core issue is that the user’s intent and constraints get “lost in translation” between the planning and implementation phases.

Proposed Solution: Temporary Guardrails within the Plan

I propose enhancing the “plan” mode to include a new, optional section for Temporary Guardrails & Rules.

These guardrails would be defined as part of the plan itself and would be active only for the duration of that specific task. They would be generated by the LLM based on the user’s initial prompt and instructions, and the user could review and edit them alongside the plan steps.

For example, the prompt “Refactor UserService, but do not change its public API” would generate not just the plan steps, but also a set of temporary rules:

Plan

  1. Analyze UserService and identify areas for refactoring.
  2. Refactor internal logic for method_A.
  3. Improve error handling in method_B.

Temporary Guardrails

  • DO NOT modify the method signatures or visibility of any public methods in UserService.
  • DO NOT add new public methods to UserService.
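To make the idea concrete, here is a rough sketch of how a plan with task-scoped guardrails might be represented. All type and field names below are hypothetical, invented purely for illustration; they do not reflect Cursor’s actual internals or schema:

```typescript
// Hypothetical shape for a plan that carries task-scoped guardrails.
// None of these names are Cursor's real types; this is only a sketch.
interface TemporaryGuardrail {
  id: string;
  description: string; // e.g. "DO NOT modify public method signatures in UserService"
  scope: "task";       // guardrails expire when the task completes
}

interface PlanStep {
  order: number;
  description: string;
}

interface TaskPlan {
  goal: string;
  steps: PlanStep[];
  guardrails: TemporaryGuardrail[]; // reviewed and editable alongside the steps
}

// Example instance for the UserService refactor described above.
const refactorPlan: TaskPlan = {
  goal: "Refactor UserService without changing its public API",
  steps: [
    { order: 1, description: "Analyze UserService and identify areas for refactoring." },
    { order: 2, description: "Refactor internal logic for method_A." },
    { order: 3, description: "Improve error handling in method_B." },
  ],
  guardrails: [
    {
      id: "g1",
      description: "DO NOT modify the method signatures or visibility of any public methods in UserService.",
      scope: "task",
    },
    {
      id: "g2",
      description: "DO NOT add new public methods to UserService.",
      scope: "task",
    },
  ],
};
```

The key point is that the guardrails live inside the plan object itself, so they are approved together with the steps and expire with the task.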

How This Solves the Problem

Aligns Implementation with Intent: Making the user’s constraints an explicit, machine-readable part of the task gives the AI agent a clear set of “dos and don’ts” to follow during implementation. This ensures the final output respects the stated constraints and matches the user’s expectations (one possible way to feed these guardrails to the agent is sketched below).

Reduces AI “Over-Creativity”: It provides the necessary boundaries for the AI, preventing it from making “helpful” changes that actually violate the user’s specific requirements for that task.

Improves Reliability: The user can approve both the steps and the rules before execution, giving them much higher confidence that the AI will perform the task correctly the first time.

Separates “Goals” from “Rules”: It creates a clear distinction between the objective (the plan) and the constraints (the guardrails), leading to a more robust and predictable AI-driven development process.
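As referenced above, one possible way the approved guardrails could be applied is to fold them into the instructions the agent receives for every implementation step, so the constraints travel with the task rather than living only in the original prompt. This is purely an assumption about how it could be implemented, not a claim about how Cursor works today; the helper below is hypothetical:

```typescript
// Hypothetical helper (not Cursor's real API): prepend the approved
// guardrails to the instructions sent to the agent for each step.
interface StepContext {
  goal: string;
  stepDescription: string;
  guardrails: string[]; // the approved "DO NOT ..." rules
}

function buildStepInstructions(ctx: StepContext): string {
  const rules = ctx.guardrails.map((g, i) => `${i + 1}. ${g}`).join("\n");
  return [
    `Task goal: ${ctx.goal}`,
    `Current step: ${ctx.stepDescription}`,
    `Active guardrails (must not be violated):`,
    rules,
  ].join("\n");
}

// Example: step 2 of the UserService refactor from the example above.
console.log(
  buildStepInstructions({
    goal: "Refactor UserService without changing its public API",
    stepDescription: "Refactor internal logic for method_A.",
    guardrails: [
      "DO NOT modify the method signatures or visibility of any public methods in UserService.",
      "DO NOT add new public methods to UserService.",
    ],
  })
);
```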

This feature would make the planning mode significantly more powerful by ensuring the AI’s execution remains tethered to the user’s precise instructions for each individual task.

Operating System (if it applies)

macOS