I’m a product engineer / team lead who has been building production workflows around AI coding agents for real-world development: not just codegen, but task breakdown, refactors, migrations, and ongoing maintenance.
While using Cursor and other tools, I ended up with a “vibe‑level” idea for an agent orchestration layer on top of the IDE. In short:
- agents don’t just reply in chat; they act as structured workflow units;
- there are explicit tasks, statuses, retries, fallbacks, logs, and metrics;
- multiple agents can be combined (planner, implementer, reviewer) and wired into real dev processes (backlog, CI, code review, etc.).
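To make the "structured workflow unit" idea concrete, here's a minimal sketch of what one such unit could look like. Everything here (the `AgentTask` name, the status enum, the retry/fallback shape) is hypothetical illustration, not my actual prototype's API:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Optional

class TaskStatus(Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"

@dataclass
class AgentTask:
    """One workflow unit: an agent call with explicit status, retries, a fallback, and a log."""
    name: str
    run: Callable[[], str]                     # the agent invocation (LLM / CLI / HTTP)
    max_retries: int = 2
    fallback: Optional[Callable[[], str]] = None
    status: TaskStatus = TaskStatus.PENDING
    log: list = field(default_factory=list)

    def execute(self) -> str:
        self.status = TaskStatus.RUNNING
        for attempt in range(self.max_retries + 1):
            try:
                result = self.run()
                self.status = TaskStatus.DONE
                self.log.append(f"{self.name}: ok on attempt {attempt + 1}")
                return result
            except Exception as e:
                self.log.append(f"{self.name}: attempt {attempt + 1} failed: {e}")
        if self.fallback is not None:
            result = self.fallback()
            self.status = TaskStatus.DONE
            self.log.append(f"{self.name}: recovered via fallback")
            return result
        self.status = TaskStatus.FAILED
        raise RuntimeError(f"{self.name} exhausted retries with no fallback")
```

The point is that every agent step leaves an inspectable trail (status + log) instead of disappearing into a chat transcript, which is what makes metrics and debugging possible on top.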
I’ve built a small prototype on top of existing APIs (CLI/HTTP + LLMs), and I see a few places where this could complement Cursor:
- more transparent orchestration for complex, multi-step, multi-file tasks;
- reusable “context bundles” / agent configs that can be shared across projects and teams;
- better observability of what agents did, with which prompts, and what the outcome was.
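By a "context bundle" I mean something like a versionable, serializable set of agent configs. The schema below is a hypothetical sketch (field names and the JSON layout are my invention, not an existing Cursor or prototype format):

```python
import json
from dataclasses import dataclass, asdict
from typing import List

@dataclass(frozen=True)
class AgentConfig:
    """One agent's shareable config: a role, its system prompt, and model settings."""
    role: str            # e.g. "planner", "implementer", "reviewer"
    system_prompt: str
    model: str
    temperature: float = 0.2

def export_bundle(configs: List[AgentConfig], path: str) -> None:
    """Serialize a bundle to JSON so it can be committed and shared across projects."""
    with open(path, "w") as f:
        json.dump([asdict(c) for c in configs], f, indent=2)

def import_bundle(path: str) -> List[AgentConfig]:
    """Load a bundle back into typed configs for another project or teammate."""
    with open(path) as f:
        return [AgentConfig(**item) for item in json.load(f)]
```

Because the bundle is plain JSON, it can live in the repo next to the code it applies to and go through the same review process as any other change.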
I’d be happy to:
- show a short demo (video or live),
- share what already runs in production for us vs what’s still experimental,
- discuss whether this direction aligns with how the Cursor team thinks about agents.
Questions to the community and the Cursor team:
- Would you find a more explicit agent orchestration layer on top of the IDE useful?
- Which scenarios would you prioritize first (refactoring, large migrations, legacy maintenance, onboarding, etc.)?
If someone from the Cursor team sees this, I’d really appreciate your feedback and would love to explore the idea in more detail.
Thanks!