I have never been impressed with Skills. Before they were announced, I was already creating documents, scripts, checklists, etc., for specific tasks, so once I saw what Skills actually were, I never understood what all the fuss was about. It seems like the real lesson is progressive disclosure, not “skills,” which in hindsight feels like common sense.
Really agree with “In 56% of eval cases, the skill was never invoked.” It’s also hard to predict when a skill will fire, or whether the current context will actually trigger it. That’s different from AGENTS.md, which is always attached.
Yea, I noticed the same thing with most rules files: the agent will often just ignore them. The only ones I have found reliable are the global rules in AntiGravity, though the individual project rules in AG have the same problem. I just attach most stuff directly. It works.
ran a quick test on this. compared AGENTS.md (all instructions in one file) vs individual .cursor/rules/ files (separate .mdc per rule, each with alwaysApply: true). tested with JSDoc and underscore-prefix rules, 2 runs each.
result: both approaches got 100% compliance across all runs. no observable difference in how well the agent followed the instructions.
so for cursor specifically, the advantage of AGENTS.md isn’t about reliability over individual rules. it’s more about what @neverinfamous said, progressive disclosure. one file is easier to maintain and reason about than a folder of .mdc files. but if you want different activation patterns (some rules always, some only for certain file types) then individual rules with globs give you more control.
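for anyone who hasn’t used them, a `.mdc` rule file in `.cursor/rules/` is just markdown with YAML frontmatter controlling when it activates. a minimal sketch (the rule text and glob path are made-up examples; the `description` / `globs` / `alwaysApply` fields are Cursor’s rule format):

```markdown
---
description: Require JSDoc on exported functions
globs: ["src/**/*.ts"]   # only activates for files matching these globs
alwaysApply: false       # set true to attach this rule on every request
---

- Every exported function must have a JSDoc block.
- Prefix private helper functions with an underscore.
```

flipping `alwaysApply: true` is what makes a rule behave like AGENTS.md content (always in context), while leaving it false and relying on `globs` or the description gives you the conditional activation discussed above.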
the vercel finding about skills only firing 56% of the time tracks with what i’ve seen too. agent-decided activation is hit or miss. alwaysApply or AGENTS.md both avoid that problem entirely.
Yea, I don’t think it’s a problem with skills or rules per se. The models just have a limited attention span and a limited ability to apply the attention they do have. You might call it a lack of wisdom, or more generally a lack of common sense. I’ve caught models ignoring AG’s global rules now, too. They work most of the time, but I don’t trust any of it. I either attach or link the relevant files directly. It’s less convenient in the short run but far more convenient than having things missed. Even a single prompt that is too complex can produce the same kind of failures, or at least it used to. This is why I often break up multi-step operations, even relatively simple ones. The less the model has to concentrate on, go search for, or figure out, the more focus it has for the actual task. Is this context management, attention management, or something else?
yeah, “attention management” is a good way to put it. it’s basically the same problem humans have with long task lists: the more you pile on, the more things get dropped.
the approach you’re describing (attach/link directly, break up multi-step operations) maps to what i’ve found too. explicit > implicit. if the model has to go discover a rule on its own, there’s always a chance it just… doesn’t. attaching it directly removes that variable.
the tradeoff is ergonomics. alwaysApply rules and AGENTS.md at least automate the “attach it every time” part, even if the model occasionally ignores what’s in front of it. better than hoping it goes looking for a skill file on its own.
