Cursor isn't strictly enforcing the rules

I’ve written many rules in .cursor/rules/ruls.mdc, such as “No backup solutions allowed” and “No simulated data,” and set them to always. However, the agent sees these rules and still ignores them / doesn’t enforce them.


Hi @Tony_Xiao and welcome to Cursor Forum.

Project Rules like the ones you described are included in the chat when one of the following matches (see the example rule file after this list):

  • Rule is set to Always include
    OR
  • Rule has a description that the AI has matched to your request
    OR
  • Rule has glob patterns that match the files being edited
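For reference, here is a minimal sketch of an .mdc rule file showing where those three triggers live. The field names (description, globs, alwaysApply) come from Cursor’s rule frontmatter; the values are placeholders, and in practice you would usually rely on just one of the three:

```
---
# Matched by the AI against your request when the rule is not always on
description: Enforce project error-handling conventions
# Attaches the rule when matching files are edited (placeholder pattern)
globs: src/**/*.ts
# Set to true to include the rule in every chat ("Always")
alwaysApply: false
---
Rule text goes here, written as clear, direct instructions.
```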

Because it is the AI that processes the rules (not Cursor itself), the AI can in some cases misunderstand, misinterpret, or get confused by them.

Note that statements like “No backup solutions allowed” may not be clear enough. You could state the issue more directly and explain why it is important.

For example, if you do not want simulated data, use rule text like the following:

When writing code, do not use example data, as the code would not work with it and may produce false responses.

Make sure the rule is either set to “always” or given a good description.
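Putting that together, a complete rule file for the simulated-data case could look like this. This is only a sketch; the description and exact wording are suggestions, not a guaranteed fix:

```
---
description: Forbids simulated or example data in generated code
alwaysApply: true
---
When writing code, do not use simulated or example data. The code
would not work with it and may produce false responses. If real data
is unavailable, throw an error instead of substituting placeholder
values.
```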

Note that too many rules set to always, or too many rules to follow at once, can confuse the AI; it then does not know what to prioritize and may ignore some of them. This depends on the model you use, not on Cursor, since the model is what processes your request.

I’m using Claude Sonnet 4. I’ve explicitly stated “Do NOT use fallback solutions, simulated data, or compatibility workarounds—throw errors directly” in every project document: requirements, architecture design, high-level design, detailed design, rules, and specifications. I’ve rephrased it countless ways.

Yet, the agent consistently ignores these instructions.

However, when I add the exact same rule to the “Project Rules” section in the Claude web interface, it follows them flawlessly.


I can confirm this issue too. I asked Claude 4 Sonnet to generate a model for Knex and Objection from my database and controller data, and to follow the rules from the plan. It refactored the code but added many things that had nothing to do with my project. The same task with GPT-4.1 worked, but these models don’t always respect the rules either. Sometimes they do things the rules explicitly forbid: for instance, I don’t permit them to start, restart, or stop servers, or to send commands to MySQL, yet sometimes they do exactly that.
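For context, that restriction is a rule along these lines (illustrative wording, not my exact file):

```
---
description: Restricts what the agent may run in the terminal
alwaysApply: true
---
Never start, restart, or stop servers.
Never connect to MySQL or send commands to the database server.
Ask me to run such commands and wait for my confirmation instead.
```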


I have the same problem. The AI stops following the rules after 2-3 exchanges. In fact, sometimes it doesn’t follow any of the rules even when it says it has read them.


Me too.


True! It’s a common issue!


Hey all, I’ve already passed this to the team to investigate, but the most useful thing for us here would be if anyone not on Privacy Mode could reproduce this and send over a Request ID, as we can then see exactly what the model sees and where it might be failing.

If a non-privacy Request ID isn’t possible, screenshots of the chat and the rule you added would be great in the meantime!