I’m trying to understand what is currently possible in Cursor regarding long-running multi-agent workflows.
Here’s the scenario:
I wrote a markdown document containing 18 tasks.
The document starts with an introduction explaining the motivation, purpose, and expected outcome.
After that, the document contains 18 independent tasks. Each task has:

- a definition
- implementation details / explanation
- acceptance criteria
The important part is that all 18 tasks are completely independent from each other, so they could theoretically be implemented in parallel.
My idea is something like:

- a root/main agent responsible for orchestration
- multiple sub-agents, where each sub-agent is responsible for implementing one task
- each task/sub-agent generating its own Pull Request (working on an isolated worktree & branch)
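Independently of what Cursor itself automates, the isolation step above maps directly onto `git worktree`. Here is a minimal sketch of what the orchestrator would need to set up, assuming one branch and worktree per task; the repo path and task names are placeholders, not from my actual project:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: give each task its own worktree and branch so
# sub-agents can work in parallel without touching each other's files.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=agent@example.com -c user.name=agent \
    commit --allow-empty -q -m "initial commit"
for i in 01 02 03; do   # 18 tasks in the real setup; 3 here for brevity
  git -C "$repo" worktree add -q -b "task-$i" "$repo-task-$i"
done
git -C "$repo" worktree list   # one line per worktree + its branch
```

Each sub-agent would then run inside its own `task-NN` directory, and the PR for that branch can be opened once the acceptance criteria pass.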
**Is this kind of automation/orchestration currently possible in the Cursor editor?**
Additional questions:

- Is it possible to specify which LLM/model should be used for each individual task/sub-agent?
- My initial idea is to use GPT 5.3 Codex for most tasks, since the tasks are mostly low-to-medium complexity and the cost/performance ratio seems very good. But I’m also considering using Composer 2 for the lower-complexity tasks. Can the root agent dynamically decide which model to use for each sub-agent/task? For example, could the recommended model be declared directly inside each task definition in the markdown document?
- Is it possible to keep everything running in the Cursor editor, while still monitoring execution/progress through Cursor Cloud?
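For the per-task model question, one convention I'm imagining is a small metadata line inside each task section that the root agent parses before spawning the sub-agent. To be clear, this is a convention the orchestrator itself would have to read; I don't know of Cursor interpreting such a field natively, and the field name here is hypothetical:

```markdown
## Task 07: <task title>

model: gpt-5.3-codex   <!-- hypothetical field, read by the root/orchestrator agent -->

### Definition
...

### Implementation details
...

### Acceptance criteria
...
```

The root agent would then pick the declared model when dispatching each sub-agent, falling back to a default when the line is absent.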
I’d also appreciate hearing from people already using Cursor for long-running agentic workflows or multi-agent orchestration setups.