Rules in settings are often ignored — need better enforcement or clearer limits

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I set rules in Cursor (e.g. “don’t write documentation, just explain in chat”). The AI still wrote and edited a markdown doc instead of only explaining. When I asked why, it said it would follow the rules next time, but promises aren’t enough; I need the product to actually respect the rules. If rules are frequently ignored and there is no way to make the AI follow them more reliably, having rules at all feels pointless and misleading. There also doesn’t seem to be any built-in way for the AI to get better at applying my rules over time. I’d like Cursor to either enforce rule adherence more strictly or clearly document the limits of rules, so users aren’t left frustrated.

Steps to Reproduce

1. Open Cursor and go to Settings → Rules (or .cursor/rules).
2. Add a rule, e.g. “Do not create or edit .md documentation files. Only explain things in chat.” (An example rule file is sketched after these steps.)
3. In chat, ask the AI to fix something or investigate an issue (e.g. payment verification).
4. Observe that the AI creates or edits a markdown doc (e.g. PAYMENT_DEV_TROUBLESHOOTING.md) instead of only explaining in chat.
5. Remind the AI of the rule; it may say it will follow it next time.
6. In a later task, repeat steps 3–4; the same rule violation can occur again.
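
For reference, the rule from step 2 stored as a project rule would be a small file under .cursor/rules/. This is a minimal sketch assuming the current .mdc rule format; the filename no-docs.mdc and the description text are illustrative, not taken from the original report:

```
---
description: Never write documentation files; explain in chat instead
globs:
alwaysApply: true
---

Do not create or edit .md documentation files.
Only explain things in chat.
```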

Operating System

macOS

Version Information

Version: 2.4.37
VSCode Version: 1.105.1
Commit: 7b9c34466f5c119e93c3e654bb80fe9306b6cc70
Date: 2026-02-12T23:15:35.107Z
Build Type: Stable
Release Track: Default
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Darwin arm64 25.2.0

For AI issues: which model did you use?

Composer 1.5

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the detailed report.

This is a known limitation of how rules work right now. Rules are passed to the model as instructions in the system prompt; they are not hard constraints, and models are non-deterministic, so they can ignore them, especially negative ones like “don’t do X”. The team is aware of the issue.

The most practical workaround is to rewrite rules in a positive way and explain the reasoning behind them. Models follow positive instructions with context much better than bare prohibitions.

Instead of:

“Don’t write documentation, just explain in chat”

Try something like:

“Your role is to explain solutions directly in chat conversation. This is more helpful because the user can ask follow-up questions immediately. When you need to share code changes, propose edits to source files. Never create or modify .md documentation files, always explain findings and solutions as chat messages.”
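
Packaged as a project rule, that positive version could look something like the sketch below (again assuming the .cursor/rules/*.mdc format, with alwaysApply: true so it is attached to every request; the filename explain-in-chat.mdc and the description are illustrative):

```
---
description: Explain findings in chat rather than in doc files
globs:
alwaysApply: true
---

Your role is to explain solutions directly in the chat conversation.
This is more helpful because the user can ask follow-up questions
immediately.

- When you need to share code changes, propose edits to source files.
- Never create or modify .md documentation files.
- Always present findings and solutions as chat messages.
```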

A couple more tips:

  • Shorter chats are more reliable; in long sessions, rules get “forgotten” more often.
  • Concrete, step-by-step instructions work better than abstract bans.

Other threads on this forum cover similar cases, with more detailed guidance from the team.

Let me know if rewriting the rules doesn’t improve things.
