Hey, this is a known limitation. Rules in .mdc files are loaded into the model context, but LLMs can’t guarantee 100% compliance. They work probabilistically, so instructions like “always use PowerShell” or “always write comments in English” will be followed most of the time, but not always.
A few things that can help improve rule compliance:
- Keep rules short and direct. The shorter and more specific the instruction, the more likely the model is to follow it. Long rule files often get “diluted” in context.
- Use strong wording in the rule. Phrases like `ALWAYS`, `NEVER`, `CRITICAL` at the start of the rule help.
- Make sure `alwaysApply: true` is set in the frontmatter if you want the rule applied to every chat (see the sketch after this list).
- Reinforce it in the request. For critical rules, a short reminder in the prompt (“remember, no grep, use PowerShell equivalents”) helps a lot.
- Split rules up. Instead of one big file with multiple instructions, create focused rules, e.g. one for OS commands and another for comment language.
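For reference, here's a minimal sketch of what a short, focused rule file combining these tips could look like. The path and the exact wording are just examples, not an official template:

```
---
description: Enforce PowerShell-compatible terminal commands
alwaysApply: true
---

# OS commands

- ALWAYS use PowerShell syntax for terminal commands.
- NEVER use Unix-only tools like grep; use Select-String instead.
```

A second file, say `comment-language.mdc`, could then hold just the “comments in English” rule, so each file stays short enough not to get diluted in context.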
Related topic with extra context: Cursor Agent not following RULES
The team is aware of the compliance gap. Your report helps with prioritization, but there isn’t a guaranteed fix since this comes from how LLMs process instructions. Still, the tips above should noticeably reduce how often the rules get broken.