Agents refusing to follow instructions

Got it, thanks for clarifying about “stop conditions”, that’s a different issue.

The fact that the agent in Agent mode runs through the whole plan without stopping is essentially by design: it's autonomous and does whatever it judges necessary to complete the task. If you say "do phase 1" and it does phases 1-2-3, that's because it believes the later phases are part of completing the task.

About rules: there are a few similar reports of this on the forum. It's a known issue that rules aren't always followed by the agent, especially when they directly conflict with the model's default behavior.

You can try:

  • More explicit prompts. Instead of “do phase 1”, use something like “ONLY do phase 1. Stop after phase 1 is complete. Do NOT proceed to phase 2.”
  • Trying a different model. Can you check if grok-code behaves the same as Claude Opus 4.5 or GPT-5 in this scenario?
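If it helps, here's a rough sketch of what a "stop after the named phase" rule could look like as a file in `.cursor/rules`. The frontmatter fields shown (`description`, `alwaysApply`) are illustrative, not guaranteed to match your setup exactly, and the wording of the rule itself is just an example:

```markdown
---
description: Enforce phase-by-phase execution
alwaysApply: true
---

- Execute ONLY the phase the user names in their prompt.
- After finishing that phase, STOP and report what was done.
- NEVER start the next phase without an explicit new instruction.
```

Keeping the rule short, imperative, and unambiguous (ONLY / STOP / NEVER) tends to work better than long explanatory text, though as noted above, no rule is guaranteed to override the model's default behavior.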

For debugging, I’ll need:

  • An example rule from .cursor/rules that’s being ignored
  • A screenshot of the exact prompt where you say “do X” and it does X+Y+Z
  • A test with another model, noting whether this happens only with grok-code or with all models

A Request ID with privacy disabled would also help. The one you shared (5bcab806…) had privacy enabled.