Cursor often forgets .mdc instructions

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Through .mdc files I gave Cursor instructions to never use Linux commands (e.g., grep) and to assume it's running on Windows. Still, very often it just happily creates a command containing && and grep. The same goes for comments: I write prompts in another language but clearly instructed Cursor to always write all comments and code in English. Yet it happily writes comments in that other language.

How are .mdc files supposed to work if I can't trust them to always be followed?

Steps to Reproduce

  1. Tell Cursor (in an .mdc file) to always write comments in English
  2. Prompt in a different language
  3. Eventually the AI will write code comments in that other language

or

  1. Tell Cursor (in an .mdc file) to never use grep, but a PowerShell equivalent instead
  2. Eventually it'll use grep again
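For reference, these are common PowerShell equivalents for the Linux commands the model keeps reaching for (standard built-in cmdlets; the paths and patterns are just examples):

```powershell
# grep equivalent: search files for a pattern
Select-String -Path .\src\*.ts -Pattern "TODO"

# head equivalent: first 10 lines of a file
Get-Content .\log.txt -TotalCount 10

# find equivalent: list matching files recursively
Get-ChildItem -Recurse -Filter *.ts
```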

Expected Behavior

Respect the information in the .mdc files

Operating System

Windows 10/11

Version Information

Version: 2.4.31 (user setup)
VSCode Version: 1.105.1
Commit: 3578107fdf149b00059ddad37048220e41681000
Date: 2026-02-08T07:42:24.999Z
Build Type: Stable
Release Track: Default
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Windows_NT x64 10.0.26100

For AI issues: which model did you use?

Auto, Opus 4.5

Does this stop you from using Cursor

No - Cursor works, but with this issue


Hey, this is a known limitation. Rules in .mdc files are loaded into the model context, but LLMs can’t guarantee 100% compliance. They work probabilistically, so instructions like “always use PowerShell” or “always write comments in English” will be followed most of the time, but not always.

A few things that can help improve rule compliance:

  1. Keep rules short and direct. The shorter and more specific the instruction, the more likely the model is to follow it. Long rule files often get “diluted” in context.
  2. Use strong wording in the rule. Phrases like ALWAYS, NEVER, CRITICAL at the start of the rule help.
  3. Make sure alwaysApply: true is set in the frontmatter if you want the rule applied to every chat.
  4. Reinforce it in the request. For critical rules, a short reminder in the prompt (“remember, no grep, use PowerShell equivalents”) helps a lot.
  5. Split rules up. Instead of one big file with multiple instructions, create focused rules, like one for OS commands and another for language or comments.
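To illustrate points 2, 3, and 5, a minimal always-applied rule file might look like this (the file name and description text are just examples):

```
---
description: OS command conventions
alwaysApply: true
---

- NEVER use Linux commands such as `grep`, `head`, or `find`.
- ALWAYS use PowerShell equivalents (`Select-String`, `Get-Content -TotalCount`, `Get-ChildItem`).
```

Keeping each file this short and single-purpose is what makes the "split rules up" advice work: the instruction stays prominent in context instead of being diluted by unrelated rules.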

Related topic about extra context: Cursor Agent not following RULES

The team is aware of the compliance gap. Your report helps with prioritization, but there isn’t a guaranteed fix since this comes from how LLMs process instructions. Still, the tips above should noticeably reduce how often the rules get broken.


Yes, @deanrie's suggestions are spot on. One addition: don't set alwaysApply: true on every rule, because each conversation turn with the agent will include every always-on rule. That means it not only fills your context window up faster, it also increases the token cost.

I tend to reinforce rules from the SKILLS, hooks, commands, etc as needed.


yeah this is probably the most common frustration with rules right now. deanrie’s tips above are solid, i just wanted to add one thing from my own testing.

the rules that work most reliably are ones the model can “verify” against its own output. like “never use grep, use Get-ChildItem” works better than “always use PowerShell commands” because the model can check whether the specific word grep appears in what it’s writing.

for the language thing, i’d try making the rule super explicit. instead of “write comments in English” try something like “all code comments, variable names, and docstrings must be written in English regardless of the prompt language.” i tested a similar rule with British English spellings and it stuck, but i think the key was being really specific about what “English” applied to.

also, are you using alwaysApply: true in the frontmatter? and are these in separate .mdc files or one big file? splitting them up helped a lot in my experience.


Thanks for this! I'll try it out. I also noticed significant differences between models. Opus 4.5 behaves reasonably decently, although it keeps trying grep and head. But Auto is terrible: it constantly writes my comments in the prompt language. I am using alwaysApply: true and multiple .mdc files, each focusing on a specific domain (e.g., CLI, layout, coding style).


Just a thought… I noticed that the model always corrects the English comments after I ask it to. So would it be possible to have specific .mdc rules that are applied after each response, just before it's finalized? It could be a final evaluation step: "hey, did you write all comments in English?" I'm pretty sure that would solve the issue.


interesting that Auto is the worst offender. that tracks with what others have reported in the plan mode threads too. Auto seems to pick models that are less reliable at following constraints.

your post-generation evaluation idea is actually really cool. like a “lint pass” for rule compliance before the response gets finalized. right now the closest thing you can do is add a rule like “before finalizing your response, verify that all comments are in English” but it’s still the same model checking its own work in the same pass, so it’s not as reliable as a separate step would be.

that would be a solid feature request honestly. a post-generation rule hook that runs after the model’s response but before it’s shown to the user.

Yes indeed. I've tried creating an MCP for this purpose. It had a simple "at_end" tool, and I had a super simple .mdc file: "MUST call MCP tool: at_end". But no guarantees there either. It sometimes called the tool, but more often than not it didn't. Especially when opening a new chat, the model simply forgets about it, not seeming to remember to load those .mdc files. Meaning this solution suffers from the exact problem we're trying to solve.

The only solution to make this iron clad, would be for Cursor to implement some actual hooks (pre/post response). Adding those would open up a whole new level of control.
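To make the hook idea concrete, a post-response hook could be a plain script that Cursor itself (not the model) runs over the generated text, so it can never be "forgotten". Nothing like this exists in Cursor today; the function names and banned-command list below are invented purely for illustration:

```python
import re

# Hypothetical rule checks a post-response hook could run.
BANNED_COMMANDS = {"grep", "head", "awk", "sed"}

def find_banned_commands(response: str) -> list[str]:
    """Return banned Unix commands that appear as whole words in the response."""
    words = set(re.findall(r"\b\w+\b", response))
    return sorted(words & BANNED_COMMANDS)

def find_non_ascii_comments(code: str) -> list[str]:
    """Return code comments containing non-ASCII characters, a rough
    proxy for 'comment not written in English'."""
    comments = re.findall(r"(?://|#).*", code)
    return [c for c in comments if not c.isascii()]
```

Because the editor runs these checks deterministically after generation, a violation could trigger an automatic "fix this" follow-up to the model, rather than relying on the model to police itself in the same pass.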


that's a really clever experiment. the fact that even an explicit "MUST call this tool" rule gets ignored kind of proves the point: it's the same forgetting problem, just with extra steps.

i agree hooks would be the real fix here. something that runs at the Cursor level, not the model level, so it can't be "forgotten." like how linters run after you save a file regardless of what the editor thinks. the model shouldn't be the one responsible for remembering to check its own work.

have you filed this as a feature request? i feel like your MCP test results would be a strong argument for it.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.