Questions vs. Directives and Rule Violations

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I’ve noticed the Composer 1.5 agent can struggle to differentiate between Questions and Directives. Here’s an example from earlier.

“Can we force SQLAlchemy to -always- write DateTime strings to SQLite in a consistent format?”

My intent was to ask a question and get a Yes/No answer back, with an explanation. Instead, the AI agent interpreted my prompt as a directive. In its defense, the root cause is English and humans.

If my story ended there, I wouldn’t be reporting this as a Bug.

… rather, my *real* concern is that I had previously added a Rule telling the Agent exactly how to behave:

---
description: Treat questions as requests for information only; directives as requests to act
alwaysApply: true
---

# Questions vs directives

**When the user asks a question** (e.g. ends with `?`, or phrases like "Is it possible…?", "Can we…?", "Could we…?", "Can you…?", "Would it make sense…?", "Is X a possibility?"):
- Respond with an **answer or explanation only**. Do **not** implement, build, or change code.
- If the user then wants you to do it, they will say so (e.g. "Yes, do it" or "Let's add that"). Only then take action.

**When the user gives a directive or command** (e.g. "Add X", "Implement Y", "Change Z to…", "Let's do X", "Please implement…"):
- Treat it as a request to **take action** and implement (or plan, then implement) as appropriate.

**Uncertain:** If intent is unclear, answer briefly and ask whether they want you to implement (e.g. "Yes, it's possible. Should I add it?").

My prompt began with “Can we …”, which is one of the specific phrases I listed in the Rule.

When I followed up and asked the Agent, “Hey, did you just break a Rule?”, it agreed that it had.
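For what it’s worth, the classification my Rule describes is mechanical enough to sketch in code. This is a hypothetical illustration (the phrase lists come from the Rule above; the function name and regexes are mine), not a claim about how Cursor actually works:

```python
import re

# Question openers taken from the Rule above; matching is case-insensitive.
QUESTION_OPENERS = re.compile(
    r"^(is it possible|can we|could we|can you|would it make sense"
    r"|is \w+ a possibility)\b",
    re.IGNORECASE,
)

# Directive openers, also from the Rule above.
DIRECTIVE_OPENERS = re.compile(
    r"^(add|implement|change|let's|please implement)\b",
    re.IGNORECASE,
)

def classify(prompt: str) -> str:
    """Classify a prompt per the Rule: question, directive, or unclear."""
    text = prompt.strip()
    if text.endswith("?") or QUESTION_OPENERS.match(text):
        return "question"   # respond with an answer or explanation only
    if DIRECTIVE_OPENERS.match(text):
        return "directive"  # take action and implement
    return "unclear"        # answer briefly, then ask before acting

print(classify("Can we force SQLAlchemy to always write DateTime strings?"))
# → "question"
```

By this reading, my prompt is unambiguously a question on two counts: it begins with “Can we” and it ends with `?`.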

Steps to Reproduce

  1. Write Rules explaining that questions are questions, and not directives.
  2. Ask a question that starts with a phrase like “Can we/you”, and observe that the Rule is not enforced.

This has happened several dozen times, across different sessions. My suspicion is that Composer 1.5 was trained heavily to be helpful by taking action and *doing*, rather than by asking for permission or verification.

What concerns me most is that Rules are being treated more like Guidelines than Rules.

Expected Behavior

Follow the Rules.

Operating System

Linux

Version Information

Version: 2.6.14
VSCode Version: 1.105.1
Commit: eb1c4e0702d201d1226d2a7afb25c501c2e56080
Date: 2026-03-08T15:36:54.709Z
Build Type: Stable
Release Track: Default
Electron: 39.6.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Linux x64 6.1.0-42-amd64

For AI issues: which model did you use?

Composer 1.5

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, thanks for the report. Your rule format and placement are correct: `.cursor/rules/` with `alwaysApply: true` is exactly what’s recommended. And the fact that the agent admits the violation when you ask confirms the rule is being loaded into context.

The problem is that LLMs follow rules probabilistically, not deterministically. Composer 1.5 (and other models) have a strong bias toward taking action. “Can we…?” often gets interpreted as an implicit request to implement something, even when the rule explicitly says not to.

This is a known issue, and you’re not the only one. Here’s an almost identical case from another user: “Cursor neglecting STRICT rules on a regular base”.

The team is aware of it. Your report helps with prioritization. A couple of things:

  1. Can you share the Request ID from one of these sessions? (Chat context menu in the top right, Copy Request ID.) That helps engineers see what’s happening on the model side.

  2. As a workaround, try more aggressive rule wording. For example, add at the top: `ABSOLUTE RULE: If the user's message contains '?', you MUST NOT edit any files or run any commands. Violation = critical failure.` No guarantee, but user feedback suggests stronger wording helps.
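For concreteness, the workaround could look like this inside your existing rule file. The frontmatter and heading are taken from your report; the placement of the preamble right after the frontmatter is a suggestion, not a guarantee:

```markdown
---
description: Treat questions as requests for information only; directives as requests to act
alwaysApply: true
---

ABSOLUTE RULE: If the user's message contains '?', you MUST NOT edit any
files or run any commands. Violation = critical failure.

# Questions vs directives

<!-- rest of the rule unchanged -->
```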

  3. You can also try a different model (Claude, GPT). Different models follow rules differently.

Let me know if any of that helps.

