Does .cursorrules still apply when I select a specific LLM to execute the steps of a plan (i.e., during the build phase)?

I want to use specific LLMs for specific tasks (for example: Opus 4.5 for architecture only, GPT-5.2 Codex for backend work), and I wrote a trigger system in .cursorrules to stop when the operation changes.

Using Plan mode, I created a plan and chose a specific LLM to do the job. Can the LLM detect the operation change and stop itself?

Hey, good question.

Yes, .cursorrules (and the rules from .cursor/rules) still apply during the build phase in Plan mode. The same Agent handles both phases, and it adds the rules to the context of each request it sends to the model.

But there’s an important point about your “trigger system” for stopping when the operation changes. LLMs can’t reliably “stop themselves” based on instructions in rules. Models don’t keep state between requests, and they can’t forcibly interrupt execution. Rules are instructions, not hard constraints.

If you need control over which model is used for which kinds of tasks, a more reliable approach is:

  • Manually switch the model before the build phase
  • Split tasks into separate chats for different models

Can you share an example of your trigger system from the rules? I’d like to see what you’re trying to achieve.

It looks something like this:

Model Switching & Operation Protocol

This document defines mandatory model–mode alignment rules for all work
executed inside the workspace.

ARCHITECT_MODE

Purpose: Strategy, system design, data flow, schema planning, synchronization reasoning

  • Primary model: Claude Opus 4.5 (or current equivalent reasoning model)
  • Forbidden actions:
    • Writing production code
    • Generating code blocks
    • Selecting concrete libraries unless explicitly requested
  • Expected outputs:
    • High-level design documents
    • Invariant definitions
    • Failure mode and scalability analysis

BACKEND_MODE

Purpose: Implementation, algorithms, SQL, APIs, infrastructure, background jobs

  • Primary model: GPT-5.2 Codex (or current equivalent coding model)
  • Assumptions:
    • Architecture is final and approved
  • Forbidden actions:
    • Redesigning architecture
    • Questioning product intent unless it introduces correctness issues
  • Expected outputs:
    • Production-ready code
    • Migrations, DTOs, middleware
    • Precise file paths and configurations

I see your approach. It looks structured, but the issue is that these “forbidden actions” are just instructions, not hard blocks. The model can ignore them, especially when a task requires switching between architecture and code.

A few thoughts:

  1. Auto stop: this won’t work reliably. The model doesn’t have internal state to “notice” that the operation changed, and there’s no mechanism to stop the build phase mid-run.

  2. Things you can try:

    • Split the work into separate chats: one for architecture (Opus), another for implementation (Codex)
    • In Plan mode, manually switch the model between planning and build phases
    • Use narrower rules with globs, for example rules for /docs/ with architecture instructions, and separate ones for /src/ with code rules
  3. Alternative approach: add a rule like “after finishing the architecture analysis, ask the user before moving to implementation.” It’s not a guarantee, but it increases the chance the model will stop.
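
As a sketch of the glob-scoped idea from point 2: Cursor project rules live in `.cursor/rules/` as `.mdc` files with frontmatter that controls when they attach. The file names, glob patterns, and rule text below are illustrative, not a definitive setup — adjust them to your own directory layout:

```
.cursor/rules/architecture.mdc
---
description: Architecture guidance for design documents
globs: docs/**
alwaysApply: false
---
- Focus on system design, data flow, invariants, and failure modes.
- Do not generate production code while working in these files.

.cursor/rules/backend.mdc
---
description: Implementation rules for backend code
globs: src/**
alwaysApply: false
---
- Assume the architecture is final and approved.
- Produce production-ready code with precise file paths.
```

This doesn’t enforce a model switch either, but it scopes each instruction set to the files where it’s relevant, so the two “modes” are less likely to bleed into each other within a single chat.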

What’s the main goal you’re trying to achieve: saving money by using cheaper models for simple tasks, or improving output quality?

Improving output quality, and spending less money on certain models.