“Apply Intelligently” rules do not seem to apply intelligently

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I have developed a fairly extensive library of rules. Some of them are very large, but necessarily so: the specificity and detail in them have REALLY helped the agent perform certain classes of tasks, corralled bad behavior, stabilized command-line usage, etc.

There are some issues, however, getting rules to apply. First, it seems the LLMs can rather arbitrarily choose to follow rules or not. Sonnet explained that its “base nature” is to “see problem → fix problem,” and that will often override rules that try to rein that behavior in. I am not sure whether some tuning of the API calls that Cursor makes could help on that front, though.

However, I do not believe that is the only reason rules fail to apply. I have noticed that in the 1.4.x and 1.5.x versions of Cursor, there is now a little context indicator in the prompt editor that shows the rules currently accounted for. These are mostly “Always Apply” rules and “Apply to Specific Files” rules. I never see any “Apply Intelligently” rules in there, however…

I started to wonder about this, so I put together prompts to test whether “Apply Intelligently” rules EVER actually apply. I know that at times in the past they were, or at least I was pretty sure, because I could see aspects of those rules being followed in how the agent behaved and the output it produced. However, when I explicitly probed whether these kinds of rules were in fact being applied, the agent USUALLY said NO!

One of the tests I performed was to create a plan for some upcoming work. As part of the prompt, I asked it to tell me which rules it had used to help create that plan. It did not list any of the “Apply Intelligently” rules… I then prompted it to create a Linear epic and stories from the plan details we had just come up with, and at the end to tell me which rules it had used. Again, it did not list any of the “Apply Intelligently” rules.

I then tried more of a mock scenario, to see if I could force it to bring in rules I felt should apply. Again, it did not list those rules. The only rules it listed as being used were the “Always Apply” and “Apply to Specific Files” rules.

I was also paying close attention to the context marker for attached rules AS I crafted my prompts. The list of attached rules never changes. Given that “Apply Intelligently” rules require a description, I figured that the words, phrases, and terms in that description are probably a key part of how the rules are applied intelligently. So I started tweaking the descriptions. I eventually had a normal-ish description followed by a bunch of comma-separated terms that I felt should be used for matching. Nothing.
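For reference, the kind of rule I ended up experimenting with looked roughly like this (the contents are simplified here for illustration, but the frontmatter fields match the .mdc rule format as I understand it):

```
---
description: Guidelines for planning upcoming work before any implementation. Terms: plan, planning, roadmap, epic, story, breakdown, phases, Linear.
globs:
alwaysApply: false
---

# Planning Guidelines

- Work with me iteratively to refine the plan; do NOT write stories yet.
- Preserve the full level of detail from our discussion; never summarize it away.
- Capture libraries, file locations, and sequencing decisions explicitly.
```

Even with the extra comma-separated terms tacked onto the description, the rule never showed up in the attached-rules indicator.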

I believe that “Apply Intelligently” is broken, or else it works in a far more arcane manner than I can figure out, and maybe only AFTER the prompt has been issued. I wonder if some refinement here could help. A description is OK, but there is no real documentation on how these rules work, what makes them apply, or when, nor any tips or tricks for getting them to apply consistently.

Right now, it seems the only real signal Cursor uses to decide whether to attach one of these rules is the description? That feels rather limited and very arbitrary. I wonder if “Apply Intelligently” rules need more: say, a list of terms that should trigger attachment to context, especially WHILE the user is crafting their prompt. Being able to SEE that Apply Intelligently rules are attached AS I formulate my prompt would be IMMENSELY helpful. Beyond arbitrary terms, I also wonder if it would be useful to be able to list one or more MCPs as attachment triggers. If I mention certain words that would result in MCP usage by the agent (say, story, epic, Linear, etc.), being able to make sure that usage of that MCP resulted in attachment of the rule would be extremely helpful.
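To make that concrete, here is a purely hypothetical sketch of what I mean: none of the trigger fields below exist in Cursor today, they are just the shape of frontmatter I am imagining.

```
---
description: How to turn a refined plan into a Linear epic and stories.
alwaysApply: false
# Hypothetical fields (NOT real Cursor syntax), shown only to illustrate the idea:
triggerTerms: [linear, epic, story, stories, backlog, phase]
triggerMCPs: [linear]
---
```

The idea being: if a trigger term appears in the prompt as I type it, or the agent invokes the listed MCP mid-task, the rule gets attached and the attachment is visible in the context marker.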

In any case, the main reason I am asking is that it seems all too frequent that agents ignore rules… and I’m wondering if that is often because the rules are never attached to the context in the first place, or because dynamic attachment later (once the agent is already working on the issued prompt) is too arbitrary and ill-defined.

I have a lot of rules: some are fairly simple, most are moderately complex, and some are very complex and large. I started switching many of the “Apply Intelligently” rules to “Always Apply,” and the number of rules attached by default shot up to about 20, but my context usage also shot up to 60% or more on a consistent basis. Moving most of those rules back to “Apply Intelligently,” the default rule load dropped to about 7-9 and context usage was 23-27%.

I think it would really help if there were better support for “Apply Intelligently” rules, with more real-time, dynamic identification of which such rules should apply, as much as possible WHILE the user is typing their prompt (before it is issued), so the user can see which rules are actually going to be concretely applied. Further, if there is just no way to know from the prompt while it is being authored, and Apply Intelligently rules instead require additional contextual cues after the prompt is issued and the agent and LLM are working on it, it would be very helpful to see, in the chat, when an Apply Intelligently rule gets triggered and added to context.

Something, say, more akin to how the Docs contexts work. When attaching docs (which, as far as I can tell, only seem to be used by Claude at this point), you can see when the agent starts reading documentation, and a bunch of badges are dropped into the chat for each part of the docs the agent and LLM reviewed. Something similar whenever rules are reviewed or attached to the chat would be very helpful, in at least letting us know that YES, my rules are indeed, actually, really, truly being factored in.

Steps to Reproduce

Create some rules governing how you want the agent and LLM to do things, and set them to “Apply Intelligently.”
Craft a prompt you think should trigger the usage of said rules.
Issue the prompt…

Expected Behavior

The usage, application, and factoring of rules into the work the agent and LLM are doing should be clearer. “Apply Intelligently” specifically, I think, needs more ways to trigger attachment. IF the prompt the user is writing can be used to find potential Apply Intelligently rules to attach, those attachments should be made visible in the Attached Rules context marker in the prompt editor.

Operating System

Windows 10/11
MacOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.5.9 (Universal)
VSCode Version: 1.99.3
Commit: de327274300c6f38ec9f4240d11e82c3b0660b20
Date: 2025-08-30T21:02:27.236Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.5.0

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, thanks for such a detailed bug report on your experience with rules not applying when you’d want them to!

To start by clarifying the behaviour here: any rules set to ‘Always Apply’ are, as you’d expect, always present in the prompt sent to the model. For ‘Apply Intelligently’, the model is sent the descriptions of all such rules, and it can choose which rules to read depending on whether the description matches the situation - think of each description as a mini prompt.

There are two factors that may be at play here.

First is the issue of the model not picking the rules when you would want it to. You’ve obviously identified this yourself, but because there is a step where the model has to read your rule description and decide if it’s relevant, the model can unfortunately fail to read or follow the rules when you’d expect it to.

Secondly, especially in situations where you give the model a lot of instructions, whether spread across many separate rules or packed into a few much larger ones, the prompt can get heavily bloated and the model begins to not follow the rules exactly as written. Think of this a bit like a human - if you give them too many instructions in one go, they’ll undoubtedly miss or forget something.

We’ve thought about this a lot internally, and we want to bring more visibility to which rules are being applied at any given time. It’s hard to predict the ‘Apply Intelligently’ rules without submitting your prompt to the LLM first, which would cost some usage each time. However, this doesn’t feel like an impossible problem to solve.

For now, if you are finding significant issues with this, I’d recommend relying on short, precise rules that are always applied. At the very least, you may have to tune the descriptions of rules set to ‘Apply Intelligently’ to get the model to pick them up more reliably when you want it to!
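As a rough illustration of what I mean by a short, precise ‘Always Apply’ rule (the contents here are only an example, not something from your setup):

```
---
description: Terminal and dev server conventions
globs:
alwaysApply: true
---

- Prefer the built-in search tools over ad-hoc shell commands for finding code.
- Never start or stop dev servers; assume they are already running.
```

Keeping each always-applied rule this small means the constant context cost stays low, while the behaviours you care most about are guaranteed to be in every prompt.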


Thanks for the info.

I agree that, generally, the more precisely the rules match the prompt being issued, the better. I would rather have half a dozen very well matched rules than 20 that are somewhat matched. I have also read that larger contexts lead to higher hallucination levels, so between a 200k context window and a 1M context window, it makes sense for Cursor to mostly stick with 200k.

I think some of the challenge, though, is mapping rules to any given prompt. The best matcher that rules have right now is globs. Glob is a long-standing file-matching convention, and it works very well… for files.

The thing I have the greatest challenge with is getting rules to attach that are unrelated to files. Part of the reason I have more rules now, and more detailed rules at that, is that they have come out of my attempts to corral, manage, guide, and control the agent for various tasks it performs OUTSIDE the realm of code. Matching rules for coding tasks is actually easy, because we have globs. The challenge is helping the agent learn how to run unit tests properly, or how (and how not) to use the terminal and various CLIs in general, or perhaps to not have the agent run dev servers (I usually have them running, and they auto-start… although I think the agents/LLMs might be getting better at managing servers on their own, so I may change my tactic here).
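To illustrate the contrast, here are two simplified frontmatter sketches (names and patterns are just examples, not my actual rules). The first attaches reliably because the glob matches the files being edited:

```
---
description: Conventions for React components
globs: src/components/**/*.tsx
alwaysApply: false
---
```

The second governs terminal and test-runner behavior, so there is no file pattern to hook onto and the description is the only thing Cursor has to go on:

```
---
description: How to run unit tests and use the terminal. Terms: test, unit test, CLI, shell, dev server.
globs:
alwaysApply: false
---
```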

Some agents like to use the terminal much more than others. Grok seems to prefer using the terminal for everything, which is honestly rather annoying. It is far better for Grok to use the built-in find, grep, and other search tools than to try running terminal commands to do the same thing, as terminal commands don’t have Cursor’s built-in context access.

Other areas where I have rules that are difficult to get attached automatically are things like planning (a process where I do NOT want stories written) and the creation of actual epics and stories in Linear (which takes a refined plan, breaks it out into a high-level epic, and then into a logical, progressive set of phases captured by stories/tasks). Getting the agent and models to understand how I want them to work with me to plan, and then to take a plan and put it into Linear as stories, took some non-trivial rules. However, they never apply automatically, and if I forget to reference them, things always go amok.

For one, ALL the models REALLY like to SUMMARIZE! Drives me crazy. I spend all that time planning out what needs to be done, how, with what libraries, where, etc., and then the model condenses it into 1/10th of the full-detail plan and creates 8-10 stories that are no more than 15-20 lines of text each (when the original plan held extensive and explicit detail). Getting enough rules wrapped around THAT whole process took a while, and now the rules exist and work if attached… it’s just a matter of making sure they are always attached. (Perhaps that could be helped if I could hit Enter without issuing the prompt… I write larger prompts, and having to hit Shift+Enter every time I need a new line means I’m preemptively issuing my prompts CONSTANTLY.)

Anyway, I think improvements that help rules dynamically attach based on the prompt itself (in addition to being factored in after the prompt is issued) could help manage the number of rules applied to the chat overall.