Rules, Commands, Subagents, Skills and other Taxonomy Headaches

Rules are meant to be globally applicable guidance — they define general behavior that should apply consistently. Commands, on the other hand, are explicit actions I trigger intentionally, and I expect them to be fully covered: if I run a command, everything defined inside that command should be executed, not just parts of it.

Skills work differently. A skill is more like a bundle of capabilities, not a strict step-by-step instruction. When a skill is triggered, the agent may only apply the abilities that are relevant in that moment — not necessarily every single thing listed in the skill definition. And importantly: skills are triggered by the agent itself, not by me.

Because of that, I don’t understand why Cursor tries to migrate commands into skills — they are fundamentally different concepts. Commands are deterministic and user-invoked; skills are selective and agent-invoked.

Then there are Subagents. I’m expected to define them in Markdown files, similar to skills, which feels inconsistent. What exactly is the difference between a subagent and a skill — just that I can specify which model to use when it runs? That doesn’t feel like a meaningful distinction. Subagents, in my view, should be spawned dynamically by the agent when needed, with their behavior emerging from skills and context — not something I have to predefine manually.

Overall, it feels like these concepts are not clearly distinguished, overlap heavily, and the boundaries between them are blurry. The system would benefit from sharper definitions and more consistent separation of responsibilities.

Subagents are something completely different. They allow your Agent to launch another Agent, which will have its own rules of conduct.

I use this to optimize my time and expenses: (Continuously Updated) My Real-Time Review of Cursor Subagents - #22 by Artemonim

Yes, that’s exactly what I meant when I wrote “Subagents, in my view, should be spawned dynamically by the agent when needed”: that is the essence of what a subagent is (at least conceptually). What I’m pointing out is that by giving us the task/ability to define subagents manually in the settings, Cursor drifts away from that idea and starts blurring the term into the same space as skills and the other taxonomies, making the boundaries less clear.

Well… You need to explain why an Agent would call another Agent instead of doing things themselves, right?

Four of my Subagents have the rules “You’re a subagent. Read the [User Rules file], and then do what the Senior AI tells you,” and the only difference between them is that they’re models of varying quality/cost.

Your understanding is correct, and some of these terms are confusing.

The way I see it, the Cursor team came up with something similar to skills (commands), but a little later the community invented skills, which have been more widely adopted; because of this, Cursor is migrating to that standard.

If I understand you correctly, you think an agent should spawn a subagent and give it instructions. That is correct, but you also need to first define the subagent and give it a default ‘persona’, so that when the agent spawns a subagent, it tells it what to do, and the subagent also has the hard-defined context you created, similar to a rule.

Subagents are also picked based on their definitions. The default Cursor subagents are an example:

  • bash: runs commands. Instead of the main agent wasting context on long test-command output, it spawns this subagent just to run the test command; the subagent summarizes the output and gives that back to the main agent, saving context.

  • explore: the main agent says “find how feature xx is used in the codebase”; the subagent finds all the relevant code, explores the codebase, then summarizes and returns the result to the main agent, saving context.
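The context-saving pattern behind the bash example can be sketched in a few lines. This is a conceptual illustration, not Cursor's actual API; `run_tests` and `bash_subagent` are hypothetical stand-ins for the real command runner and subagent.

```python
# Conceptual sketch: the main agent delegates a noisy command to a subagent,
# which sees the full output but returns only a short summary, so the long
# output never enters the main agent's context window.

def run_tests() -> str:
    """Hypothetical stand-in for a long, noisy test run."""
    lines = [f"test_{i} ... ok" for i in range(500)]
    lines.append("FAILED test_501: assertion error")
    return "\n".join(lines)

def bash_subagent(command_output: str, max_chars: int = 200) -> str:
    """The 'subagent': reads everything, reports only what matters."""
    failures = [ln for ln in command_output.splitlines() if "FAILED" in ln]
    if failures:
        summary = f"{len(failures)} failure(s): " + "; ".join(failures)
    else:
        summary = "all tests passed"
    return summary[:max_chars]

# Main agent side: holds only the summary, not the 500-line output.
summary = bash_subagent(run_tests())
print(summary)  # → 1 failure(s): FAILED test_501: assertion error
```

The design point is simply that the expensive context (the full output) lives only inside the delegated call and is discarded after summarization.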

Rules/skills/commands are all basically the same with minor tweaks, as you said as well.

I get the point — there are valid reasons to call a subagent: isolation, different “persona” constraints, specialized tooling, or simply cost/quality routing (cheap model for brute-force, strong model for reasoning, etc.).

But my argument is: that logic doesn’t require a separate, user-maintained taxonomy called “Subagents” in settings. It can live one level higher as policy/metadata in skills (or a routing layer), and the agent should be smart enough to infer when and how to spawn a subagent from (a) the prompt, and (b) the skill’s declared requirements.

In other words:

  • Skills can declare capabilities + constraints (e.g., “needs web search”, “needs long-context summarization”, “needs strict determinism”, “cheap execution acceptable”, etc.).

  • The agent then chooses the best execution strategy, which may include spawning a subagent, and selecting model + parameters accordingly.

  • This keeps the separation clean: skills describe what, routing decides how, and subagents are an implementation detail rather than a parallel user-defined concept.

Right now, Cursor’s approach (manual subagents defined similarly to skills) blurs the boundary: it turns “subagent” into a second way of defining behaviors, when it should primarily be a runtime mechanism, basically a form of multi-threading / parallelization.

Also, practically: the current implementation feels inconsistent. I’ve defined two subagents, but they are never used, even when I explicitly try to invoke them. For example, I defined a “web search with composer 1” subagent, but Cursor still defaults to something like an “Opus 4.5 inherit” web-search subagent, even though I did not define it that way. That makes the feature feel less like a controllable abstraction and more like a confusing aliasing layer.

So my feedback would be:

  1. tighten the conceptual separation (skills/policy vs subagents/runtime), and

  2. make subagent invocation/model selection reliable and transparent when users do define them.

Maybe Cursor is thinking about this in a broader way and I’m missing part of the design intent, totally possible. If there’s a deeper rationale for defining subagents explicitly, I’d genuinely like to hear it.

“Skills” is some weird, unclear crap that doesn’t work. “Subagents” is clearly understandable, working crap.

I get your point that conceptually the second can be treated as a subset of the first, but I don’t agree with that.

I like these distinctions, and in my head I’m trying to apply a “what” vs. “how” distinction. Ultimately, though, I agree that this distinction is blurred: both skills and subagent definitions sketch out what they’re for and try to specify how to do it. But the “how” in the two modalities is actually orthogonal: in skills it is much more prescriptive about the details of the workflow, while subagent definitions are more descriptive about the details of the infrastructure the workflow runs on.

I could come around and say that there are a lot of blurred lines, as all of this is coming together quickly and all of the vendors are trying to stay ahead of their competition. But we’re also never going to get to “good” while we have AI agents doing the bulk of the design work… That’s kind of cynical, I know, and I haven’t really got much to add here except that…

I’ve been trying to keep a degree of orthogonality across my agent instructions. Whenever I sense some overlap or duplication I figure out a way to extract (or “refactor”) that into something that all of the dependents reference together. E.g. I have a few workflows that rely on planning documents, but the planning document skill ends up being shared between all of them. (Although I don’t use it in Cursor for various reasons.)