Where does the bug appear (feature/product)?
Cursor IDE
Describe the Bug
I have developed a fairly extensive library of rules. Some of them are very large, but necessarily so: the specificity and detail in them have REALLY helped the agent perform certain classes of tasks, corralled bad behavior, stabilized command-line usage, etc.
There are some issues, however, getting rules to apply. First, it seems the LLMs can rather arbitrarily choose whether to follow rules. Sonnet explained that its "base nature" is to "see problem → fix problem" and that this will often override rules that try to rein that behavior in. I am not sure if some tuning of the API calls that Cursor makes could help on that front, though.
However, I do not believe that is the only reason rules fail to apply. I have noticed that in the 1.4.x and 1.5.x versions of Cursor, there is now a little context indicator in the prompt editor that shows the rules currently accounted for. These are mostly "Always Apply" rules and "Apply to Specific Files" rules. I never see any "Apply Intelligently" rules in here, however…
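For reference, the three rule types are distinguished by the frontmatter of each `.mdc` rule file, roughly as sketched below (this is my understanding of the current format; the exact attachment behavior is what this report is questioning):

```
---
# "Always Apply": attached to every request
description:
globs:
alwaysApply: true
---

---
# "Apply to Specific Files": attached when matching files are in context
description:
globs: src/**/*.ts
alwaysApply: false
---

---
# "Apply Intelligently": supposedly attached when the description matches the task
description: Standards for turning plans into Linear epics and stories.
globs:
alwaysApply: false
---
```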
I started to wonder about this, so I put together prompts to test whether "Apply Intelligently" rules EVER actually apply. I know that at times in the past I was pretty sure they did, because I could see aspects of those rules reflected in how the agent behaved and in the output it produced. However, when I explicitly probed whether these kinds of rules were in fact being applied, the agent USUALLY said NO!
One of the tests I performed was to create a plan for some upcoming work. As part of the prompt, I asked it to tell me which rules it used to help create that plan. It did not list any of the "Apply Intelligently" rules… I then prompted it to create a Linear epic and stories from the plan details we had just come up with, and at the end to tell me which rules it had used to do so. It again did not list any of the "Apply Intelligently" rules.
I then tried more of a mock scenario, to see if I could force it to bring in rules I felt should apply. Again, it did not list those rules. The only rules it listed as being used were the "Always Apply" and "Apply to Specific Files" rules.
I was also paying close attention to the context indicator for attached rules AS I crafted my prompts. The list of attached rules never changes. Given that "Apply Intelligently" rules require a description, I figured that the words, phrases, and terms in that description are probably a key part of how the rules are applied intelligently. So I started tweaking the descriptions. I eventually had a normal-ish description followed by a bunch of comma-separated terms that I felt should match. Nothing.
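For illustration, the kind of description I ended up with looked roughly like this (the description text and keywords here are hypothetical):

```
---
description: "Process for turning plans into Linear epics and stories. Keywords: plan, planning, epic, story, Linear, backlog, sprint."
globs:
alwaysApply: false
---
```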
I believe that "Apply Intelligently" is broken, or else it works in a far more arcane manner than I can figure out, and perhaps only AFTER the prompt has been issued. I wonder if some refinement here could help. A description is OK, but there is no real documented knowledge of how these rules work, what makes them apply, when, or any tips or tricks to help get them to apply consistently.
Right now, it seems the only signal Cursor uses to decide whether to attach one of these rules is the description? That feels limited and very arbitrary. I wonder if "Apply Intelligently" rules need more: say, a list of terms that should trigger attachment to context, especially WHILE the user is crafting their prompt. Being able to SEE that "Apply Intelligently" rules are attached AS I formulate my prompt would be IMMENSELY helpful. Beyond arbitrary terms, I also wonder if it would be useful to list one or more MCPs as attachment triggers. If I mention certain words (say, story, epic, Linear, etc.) that would result in the agent using an MCP for that prompt, being able to ensure that usage of that MCP resulted in attachment of the rule would be extremely helpful.
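To make that concrete, a hypothetical frontmatter extension might look like the sketch below (neither `triggerTerms` nor `triggerMcps` exists today; the field names are made up):

```
---
description: "Standards for turning plans into Linear epics and stories."
# Hypothetical: attach this rule while the prompt is being typed if it
# mentions any of these terms
triggerTerms: [plan, epic, story, Linear, backlog]
# Hypothetical: attach this rule whenever the agent invokes this MCP server
triggerMcps: [linear]
alwaysApply: false
---
```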
In any case, the main reason I am raising this is that it seems all too frequent that agents ignore rules… and I'm wondering if that is often because the rules are never attached to the context in the first place, or because dynamic attachment later (once the agent is already working on the issued prompt) is too arbitrary and ill-defined.

I have a lot of rules; some are fairly simple, most are moderately complex, some are very complex and large. I experimented with switching many of the "Apply Intelligently" rules to "Always Apply," and the number of rules attached by default shot up to about 20, but my context usage also shot up to 60% or more on a consistent basis. Moving most of those rules back to "Apply Intelligently," the default rule load dropped to about 7-9 and context usage was 23-27%.

I think it would really help if there were better support for "Apply Intelligently" rules, with more real-time, dynamic identification of which such rules should apply, as much as possible WHILE the user is typing their prompt (before it is issued), so the user can see which rules are actually going to be concretely applied. Further, if there is just no way to know from the prompt while it is being authored, and "Apply Intelligently" rules then require additional contextual cues after the prompt is issued and the agent and LLM are working on it, it would be very helpful to see, in the chat, when an "Apply Intelligently" rule gets triggered and added to context.
Something, say, more akin to how the Docs contexts work. When attaching docs (which, as far as I can tell, only seem to be used by Claude at this point), you can see when the agent starts reading documentation, and a bunch of badges are dropped into the chat for each part of the docs the agent and LLM reviewed. Something similar, whenever rules are reviewed or attached to the chat, would be very helpful in at least letting us know that YES, my rules are indeed, actually, really, truly being factored in.
Steps to Reproduce
Create some rules governing how you want the agent & LLM to do things and set them to "Apply Intelligently" (see the example rule after step 3).
Craft a prompt you think should trigger the usage of said rules.
Issue the prompt, then ask the agent which rules it applied…
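For example, a rule file such as `.cursor/rules/linear-planning.mdc` (the file name, description, and rule body here are hypothetical):

```
---
description: "How to structure Linear epics and stories created from plans. Keywords: plan, epic, story, Linear."
globs:
alwaysApply: false
---
When creating Linear epics and stories, always include acceptance criteria
and link each story back to its parent epic.
```

Then issue a prompt like "Create a Linear epic and stories from the plan we just discussed, and tell me which rules you used." In my testing, a rule like this never appears in the attached-rules indicator, and the agent does not list it as applied.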
Expected Behavior
It should be clearer when rules are used, applied, and factored into the work the agent and LLM are doing. "Apply Intelligently" specifically, I think, needs more ways to trigger attachment. IF the prompt the user is writing can be used to find potential "Apply Intelligently" rules to attach, those attachments should be made visible in the attached-rules context indicator in the prompt editor.
Operating System
Windows 10/11
macOS
Current Cursor Version (Menu → About Cursor → Copy)
Version: 1.5.9 (Universal)
VSCode Version: 1.99.3
Commit: de327274300c6f38ec9f4240d11e82c3b0660b20
Date: 2025-08-30T21:02:27.236Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.5.0
Does this stop you from using Cursor
No - Cursor works, but with this issue