Not gonna lie — I’m fully in love with Cursor subagents.
How it started:
Use /sisyphus /atlas /explore /generalPurpose /librarian /metis /hephaestus /momus /oracle /multimodal-looker /prometheus /vercel-react-best-practices and orchestrate among them to accomplish the following: <PRD.md>
And look at the result!
This is extremely exciting!!! Parallel exploration, agents with real roles, customized skill sets, clean handoffs, and then deliberate execution. It finally feels like working with Cursor instead of constantly guiding prompts through the whole process. I still have to tag every subagent I want in the orchestration chain, but I hope to find a better pattern in the future!
Once you see multiple agents exploring and planning at the same time, single-threaded prompting just feels archaic. To my surprise, one of these subagents even remained in think mode for almost 10 minutes. That would not be possible in a single context.

Because of that, I ended up open-sourcing a small side project: a port of oh-my-opencode, rebuilt specifically for Cursor using its native subagents:
https://github.com/tmcfarlane/oh-my-cursor
Nothing fancy, really. I just copied the agent definitions, rewrote them to work with Cursor's subagent format, and added some of the default Cursor subagents to the mix:
List of subagents:
- atlas.md
- explore.md
- generalPurpose.md
- hephaestus.md
- librarian.md
- metis.md
- momus.md
- multimodal-looker.md
- oracle.md
- orchestrator.mdc
- prometheus.md
- sisyphus.md
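For anyone curious what one of these files contains, here is a minimal sketch of a subagent definition in the style of the files above. The frontmatter fields and the prompt body are assumptions on my part, not Cursor's documented format — check the actual files in the repo for the real shape:

```markdown
---
# Hypothetical frontmatter — field names are illustrative, not official
name: explore
description: Read-only codebase exploration and summarization
---

You are a read-only research subagent. Given a question about the
codebase, locate the relevant files, summarize what they do, and
report your findings back to the orchestrator. Never modify files;
only read and summarize.
```

The general pattern is the same across the list: a short metadata header that tells the orchestrator when to delegate to the agent, followed by a system-prompt body that scopes the agent's role.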
Big props to the Cursor team.
This feature quietly changes everything.
Here are my upgrade requests for the Cursor team:
- Chained task delegation, a.k.a. SWARM MODE (allow subagents to spin up new subagents)
- Orchestration management (allow users to define how subagents should be orchestrated)
- Subagent logs + lifecycle (allow users to export complete run logs, including per-subagent logs)
- Long-running agents with subagents: this may already be in place and I just haven't tried the feature yet, but some of these runs average 1-2 hours, so it would be nice to let them run in the background.
- Per-subagent model definition: again, this may already exist (apologies if so), but we simply want to be able to define which model(s) a subagent is assigned to use. Simple but effective, since each subagent has its own cognitive-load requirements.
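To make the orchestration-management request concrete, here is a hypothetical sketch of what a user-defined orchestration file could look like. None of this exists today; the filename, fields, and values are all invented to illustrate the idea:

```yaml
# .cursor/orchestration.yaml — hypothetical, no such file exists today
orchestrator: orchestrator.mdc
phases:
  - name: explore
    subagents: [explore, librarian]   # fan out in parallel
    parallel: true
  - name: plan
    subagents: [prometheus, metis]
  - name: execute
    subagents: [sisyphus, hephaestus]
    model: <model-of-choice>          # per-subagent model assignment
background: true                      # long-running runs stay in the background
export_logs: true                     # full lifecycle logs, per subagent
```

Something declarative like this would cover several of the requests at once: the phase ordering handles orchestration, the `model` field handles per-subagent model choice, and the log/background flags handle the rest.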
