I guess my issues are deeper than “Apply Intelligently” and extend to all rules. I generally see only three rules attached to my context when I write prompts. They never include any of the rules marked as “Always Apply”, and as I type the prompt out, the set of referenced rules never changes… which, to me, indicates that “Apply Intelligently” isn’t working.
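For context, these rules use the standard front matter. My @commit-messages.mdc starts with something like this (description abbreviated):

```
---
description: How to format and create git commit messages
globs:
alwaysApply: true
---
```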
I have several rules about committing, unit testing, etc. that I have ultimately marked as “Always Apply”… however, in actual practice, they NEVER apply unless I manually mention them inline in my prompts. If I ask the agent to commit without explicitly referencing my @commit-messages.mdc rule file, the rule never gets applied… and multiple models will actually fumble around and fail to even generate a valid commit command, let alone format the message according to my preferences. The LLMs will often issue outright incorrect git commands and take a totally non-deterministic approach to how the commit message is formatted. For example:

- `git commit -m "feat: first line message" -F "/tmp/commit-msg.txt"`, which errors out because `-m` and `-F` cannot be used together.
- Sometimes they generate a file and use `-F`; other times they put `\n` in a single-line `-m` message, but in a way where the literal `\n` characters appear in the committed message in git.
- Sometimes they use `printf` to try to generate the message, which often fails for the same reason (`\n` and `\t` show up in the commit message once committed).
- I’ve also had them pass `-m` multiple times, once per line, which always produces a message with an empty blank line between each line, doubling the vertical height of the message in the terminal (which is terrible).
- Even when the LLM DOES write the message to a file (the most reliable way, IME), it will often do it two different ways in two tool invocations issued right after each other!

The non-deterministic nature of models is extremely apparent in such cases.
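For reference, the only approach I’ve found that works deterministically is writing the full message to a file and committing with `-F`, something like this (the temp-file path is just illustrative):

```sh
# Write the complete, multi-line message to a file first...
cat > /tmp/commit-msg.txt <<'EOF'
feat: short summary line

Longer body explaining the change, formatted exactly the way I want,
with real newlines instead of literal \n escapes.
EOF

# ...then commit from the file (never combined with -m) and clean up.
git commit -F /tmp/commit-msg.txt
rm /tmp/commit-msg.txt
```

This is the kind of procedure my rule file spells out, yet it only happens consistently when I reference the rule by name.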
So, I’ve created rule files to account for these issues. And yet, a lot of the time, they just don’t seem to be applied. Or maybe they are attached to the context, but not fully applied? Sometimes it seems like parts of a rule are being applied but not others. Perhaps that’s just coincidence, as when I query the agent and model about these issues, they usually say that the rules may be attached but not necessarily applied, and that it depends a lot on the point of tool invocation and whether the necessary… cues, I guess, are present for the rule to actually apply.
It would be nice to know when a rule is actually being applied vs. not, as right now it seems very ambiguous.