There are some good videos on YouTube about it. I don’t think hooks are standardized across AI coding tools, so they’d probably only work in Cursor. As I understand it, they let you interrupt the agent during its tasks for custom behavior, like injecting content.
edit: i just tested the glob scoping and it’s worth calling out. if you create a rule like:
---
globs: ["*.tsx"]
alwaysApply: false
---
Always add a data-testid to root elements
that rule only fires when the agent is working on .tsx files. i asked it to create the same component as both .tsx and .js, and the .tsx version got the data-testid, the .js one didn’t. the glob scoping actually works.
for comparison, if you put the same instruction in AGENTS.md (no frontmatter, just the text), it applies to everything. there’s no way to scope it to specific file types. so if you need rules that only kick in for certain files, .mdc with globs is the way to go.
yeah i feel this. i’ve been using rules files for a while now and they work, but every time cursor updates there’s some new layer on top and i’m never sure if i should migrate to it or keep doing what works.
skills and commands seem like they’re aimed at more structured workflows? like if you want the agent to run specific tools or follow a multi-step process. rules are more “here’s how i want you to write code.” at least that’s my read on it but i haven’t actually tried skills yet so i might be off.
hooks i genuinely don’t understand the use case for. if anyone’s using them i’d love to hear what for.
Rules are fine-grained best practices and preferences that automatically apply to a specific language, folder, or other deliberately selected part of your code
Skills describe multi-step workflows you only need to run from time to time. The agent should pick them up on its own, but if it doesn’t, you can invoke one directly via @
Hooks are automated scripts that run after certain actions complete. They serve two main purposes:
1. Handling bookkeeping, like tracing, documentation, auto-format, or auto-test for every change
2. Programmatically blocking agents from doing something they should NEVER be able to do, so you don’t need to “trust” the AI but can just guarantee it behaves as desired
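The blocking purpose is easy to sketch, because a hook is just a script and the guard is deterministic. The payload schema and block mechanism below are assumptions, not Cursor’s actual interface (check the hooks docs for the real event names and JSON shape); this sketch assumes the hook receives the proposed shell command as JSON on stdin and blocks by exiting non-zero:

```python
#!/usr/bin/env python3
# Hypothetical pre-execution hook: deterministically block any shell command
# that touches a .env file. Assumption: the tool pipes {"command": "..."} to
# the hook on stdin and treats a non-zero exit status as "deny".
import json
import re
import sys

def decide(payload: str) -> int:
    event = json.loads(payload)
    command = event.get("command", "")
    # No rule-following required from the model: this check always runs.
    if re.search(r"\.env\b", command):
        print("blocked: commands touching .env are never allowed", file=sys.stderr)
        return 1  # non-zero exit = block the action
    return 0      # zero exit = allow

if __name__ == "__main__":
    raw = sys.stdin.read() if not sys.stdin.isatty() else ""
    if raw.strip():
        sys.exit(decide(raw))
```

However the real hook interface is shaped, the design point is the same: the model never gets a vote, so “never do X” becomes a guarantee instead of an instruction.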
Subagents have specialized prompts and their own context window; they are great for limited-scope work and for saving tokens. For instance, when checking logs you don’t want to clutter the main agent’s context with hundreds of lines of output, but you can delegate that to a small subagent running a cheap model. This works great in Claude Code. It doesn’t work for me at all with Cursor’s native agents, which were supposed to ship in 2.4 but still aren’t there.
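To make the delegation concrete, here is a sketch of a Claude Code subagent definition, a markdown file under .claude/agents/ with YAML frontmatter. The file name and description text are made up, and the exact supported fields (e.g. whether a `model` alias is honored) depend on your Claude Code version:

```
---
name: log-reader
description: Summarizes large log or command output. Use when logs need checking.
model: haiku
---
You read log files or command output and report back only the errors,
warnings, and a one-line summary. Never echo the full log back.
```

The `description` is what lets the main agent decide to delegate on its own; the system prompt below the frontmatter keeps the subagent’s reply short, which is where the token savings come from.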
the hooks explanation clicks now, especially the blocking angle. i was thinking of them as just another rules layer, but the fact that they’re deterministic scripts, not model-level instructions, is the key difference. the model can ignore a rule that says “never do X” but it can’t ignore a hook that programmatically prevents it.
i wonder if the claude code approach with dedicated scoped subagents (like a test-writing agent that only touches test files) could work here. is there any cursor roadmap for when native agents ship or has it been radio silence since 2.4?
Hi everyone! We’re hosting a workshop tomorrow diving deeper into Cursor rules, skills, commands, hooks, and subagents - and how to choose between them.