Cursor AI reliably fails to follow the rules under “User Rules”. The assistant routinely responds with content such as:
The fact is, I’m not following the system instructions consistently, and I should be. I don’t have a good answer for why this keeps happening despite clear rules.
And all of my “memories” are now some form of “Follow the rules.” I’ve got eight memories like that; what is the point?
Basic rules, such as asking the agent to use a conda env for Python, or to obtain the current date before assuming it’s 2024, or any other rudimentary instruction, are both (a) visible to the AI and (b) ignored at an unacceptable rate.
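The date rule, for instance, asks for nothing more than a one-line lookup before the model reasons about time-sensitive facts. A minimal Python sketch of what that lookup amounts to:

```python
from datetime import date

# Query the environment for today's date instead of assuming a year.
today = date.today()
print(f"Current date: {today.isoformat()}")
```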
Steps to Reproduce
Use Cursor AI on Automatic with a rule or rules that yield clearly distinct outputs when followed versus when they are not. Choose rules that require the LLM to behave in atypical ways. Enter 10 different prompts, each of which tests Cursor’s rule-following. Each result must be unambiguous: either Cursor followed the rules, or it didn’t. When complete, evaluate the results.
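The final tally can be sketched in a few lines. The boolean values below are hypothetical placeholders, not measured data:

```python
# Hypothetical tally for the 10-prompt reproduction above.
# Each boolean records whether Cursor followed the rule on that prompt
# (these values are illustrative placeholders, not real measurements).
results = [True, False, True, True, False, True, False, True, True, False]

def rule_following_rate(outcomes):
    """Fraction of unambiguous trials in which the rule was followed."""
    return sum(outcomes) / len(outcomes)

print(f"Rule followed in {rule_following_rate(results):.0%} of trials")
# prints "Rule followed in 60% of trials" for the placeholder data above
```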
Expected Behavior
Rules are followed 100% of the time, and are not skipped or ignored arbitrarily.
Hi Dean,
It always happens: new chat, old chat, every day. Although I noted the “Auto” model in my ticket, this screen just appeared while using claude-4-sonnet, which in my experience is probably the best-behaved model overall.
That is an extremely weak rule. It should be split into at least two parts: one specifying that the user can override emoji usage, and one clearly stating never to use emoji, with strong wording, grammar, decorations, and so on.
You should probably ask the Agent to improve the rule in the direction you want. It will do these things automatically.
This is happening to me - on every model
I have the same rules in five different places: Cursor settings, Agents.md, Cursor Rules, INSTRUCTIONS.md, and my PRD. It still completely ignores some of them.
For example one rule I have:
You MUST use the Playwright MCP server to verify all changes at the end of a PRD / FRD task. Navigate to route specified in the PRD / FRD task and ensure that the functionality implemented matches the requirements.
Any errors discovered while verifying with Playwright MUST be fixed immediately, and verification MUST occur again.
This rule is everywhere, and while Claude is better than most and will follow it most of the time (roughly 80%), other models completely ignore it.
Do not write rules yourself. Do not copy rules from the internet or from other existing projects. Always prompt the model you will use to analyze and translate the rules you need into a format and context that fit your project (which you have defined with framework setups, environment configs, and feature description files). Sometimes it will break the format, ignore whole parts of the rules, or generate a larger output; this points to problems integrating new rules into the existing context.
It’s probably not a great idea to assume that the rules being ignored weren’t written by an LLM. All of my rules, for instance, are the result of several passes through Sonnet 4 and o4 to meet prompt-engineering best practices.
The “Automatic” AI chat in Cursor begins to behave when you remind it to follow the rules, which it will then turn into a memory, which it will then ignore.
This issue can present as user error. It would be a significant mistake to treat it as such.