Cursor ignores even the simplest rules during iterative development

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Even the simplest rules are being ignored; they don't seem to make it into the agent's prompt at all.

Steps to Reproduce

After the 3rd or 4th iteration of debugging something - for example, building a Go application with a toolchain installed in a specific folder - Cursor tries to build it with the general go command, even though the rules file contains only a few lines and the first one is: NEVER USE the go command - use /path/to/go instead.

It follows the rule when pointed to it in the prompt, or for the first 1 to 2 prompts, then it starts to ignore it entirely.

Expected Behavior

Rules set to Always Apply should be applied on every turn, not only once or twice.

Operating System

Windows 10/11

Version Information

Version: 2.4.37 (system setup)
VSCode Version: 1.105.1
Commit: 7b9c34466f5c119e93c3e654bb80fe9306b6cc70
Date: 2026-02-12T23:15:35.107Z
Build Type: Stable
Release Track: Default
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Windows_NT x64 10.0.26100

For AI issues: which model did you use?

All models (Opus, GPT, Auto)

Does this stop you from using Cursor?

Sometimes - I can sometimes use Cursor


I had the same thing happen with a “do not create new files” rule. It would follow it for the first couple of prompts and then just forget about it. I switched to framing it as “ONLY create new files when explicitly asked to” instead of the negative, and that stuck way better, at least with Claude. I think the negative framing falls out of context faster.

Might be worth trying “ALWAYS use /path/to/go for all Go commands” instead of the NEVER version and seeing if that helps.

Hey, thanks for the report. This is a known limitation. LLMs follow rules in a probabilistic way, and as a chat gets longer, earlier context, including rules, gets less “weight” in the model’s attention.

A couple things that help a lot:

  1. Rephrase “negative” rules into “positive” ones. @nedcodes is right. Instead of “NEVER use the go command”, try “ALWAYS use /path/to/go for all Go build, run, and test commands.” Positive wording tends to stick better over multiple turns.

  2. Keep rules short and specific. If your rules file has lots of instructions, split them into separate, narrowly focused files. Fewer lines per rule usually means better compliance.

  3. Reinforce during long sessions. For critical rules, a quick reminder in your prompt like “don’t forget to use /path/to/go” often helps after 3 to 4 turns.

  4. Start new chats for new tasks. The context window is limited, and after many iterations the model can lose the thread. Fresh chats mean the rules get full attention again.
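Putting tips 1 and 2 together, a critical rule file might look something like this. This is a sketch: the frontmatter fields follow Cursor's .mdc rule-file format, and /path/to/go is the placeholder path from the original report, not a real path.

```
---
description: Pinned Go toolchain
alwaysApply: true
---

ALWAYS use /path/to/go for all Go build, run, and test commands.
```

Keeping the file this small, one rule and nothing else, gives it the best chance of surviving in context over many turns.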

Related thread with more tips: Cursor often forgets .mdc instructions

The team is aware of the rule-following issue. There isn’t a guaranteed fix since this comes from how LLMs process instructions. Let me know if the rewording helps.

I have the same issue. When Cursor tries to run tests with bundle exec rails test <stuff>, it always fails. When it runs tests with just rails test <stuff>, it works.

  • Dunno why, but I added a global rule: “always run tests with the rails test <path-to-file> command”.
  • Cursor still always tries to use bundle exec, the tests fail to even run, and Cursor gives up.
  • I tried different iterations of the rule, e.g. “don’t ever use bundle exec when running tests, simply use rails test”. Cursor still tries to use bundle exec.
  • The rule lives in my global.mdc, which is set to always apply.

No matter what I’ve tried, Cursor does this the wrong way nine times out of ten. Unless I specifically prompt it in chat with how to run tests, it won’t adhere to the rule.

I’m very confused as to why, because this is the only specific rule I’ve noticed this problem with. Everything else in my global rules file gets adhered to (most of the time, anyway; sometimes LLMs gonna LLM).

Also in my case, it doesn’t matter how long the chat is. This will happen on a brand new chat using less than 50k context.


Same experience. “Auto” especially is notoriously good at just ignoring whatever you write in the .mdc files.

In the thread above I proposed adding hooks (pre/post response) in which we could ask things like “did you write all comments in English?”, or, after detecting many changes, “run the linter”. Currently it is all down to chance; a hook could give us more control over steering the LLM in the right direction.
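Cursor doesn't expose such hooks in this version, but the idea can be sketched. Below is a hypothetical post-response check (the function name and rule list are invented for illustration) that reads the agent's proposed shell commands and flags the ones that violate a hard rule, using the two rules from this thread as examples:

```shell
#!/bin/sh
# Hypothetical application-level rule check: read proposed shell
# commands on stdin, one per line, and print BLOCKED for each one
# that violates a hard rule, OK otherwise. A real hook would run
# this on every agent turn and refuse blocked commands, instead of
# hoping the model remembers an .mdc instruction.
check_commands() {
  while IFS= read -r cmd; do
    case "$cmd" in
      go|"go "*)
        echo "BLOCKED: $cmd (use /path/to/go instead of bare go)" ;;
      "bundle exec rails test"*)
        echo "BLOCKED: $cmd (use rails test, not bundle exec)" ;;
      *)
        echo "OK: $cmd" ;;
    esac
  done
}
```

Unlike a rule in an .mdc file, a check like this fires deterministically on every turn, which is exactly the property posters in this thread are asking for.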


@troehrkasse That’s interesting that it happens even on fresh chats; it rules out the context-window explanation. I wonder if bundle exec is something the model has a strong default for from training data, so baked in that the rule can’t override it. Have you tried putting it in a separate .mdc file with alwaysApply: true? Might be worth trying a narrower file instead of one big global one, though I haven’t tested that specifically.

@Marti I’ve seen others report Auto ignoring rules, but my limited testing didn’t reproduce it; it might depend on the rule type. Hooks would solve a lot of this if they could validate output before it gets committed.


@Marti Agreed on hooks. Your MCP experiment from the other thread kind of proved that model-level enforcement has a ceiling; something at the application level is the only way to guarantee it.


In the experiment Ned is referring to, I created a simple MCP tool called at_end. The only thing it does is print “hello world”. Next I created one simple rule in an .mdc file: ALWAYS run the at_end MCP tool after each response.

I opened a new chat and asked: what is 1+1? It answered 2, but never ran at_end. That’s when I realized I can never trust those .mdc rules if the model fails to follow even a basic instruction like that. When I asked the LLM, “shouldn’t you call an MCP tool?”, it always understood and apologized for not running at_end. Hence the idea to enforce these follow-ups using hooks.


Thank you for your reply and the explanation.

This still feels odd, but I get the point and hope it will be fixed. The pre/post hook solution sounds like it could help; at least it would eliminate the need to keep reminding the model to look at and follow the rules files.

Fingers crossed.


This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.