Subagents don't maximize parallel dispatch

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Cursor supports up to 4 subagents running in parallel. When my agent instructions say “dispatch subagents in parallel” for 8+ independent items, Cursor only launches 3 concurrently instead of 4. Only when I add “up to 4 in parallel” explicitly to my instructions does it dispatch 4.
Why doesn’t the agent use the maximum available slots by default? I shouldn’t need to hardcode “up to 4” in my instructions – if Cursor later increases the parallel limit, I’d want it to automatically scale up without changing my instructions.

Steps to Reproduce

Instruct the agent to “dispatch subagents in parallel” with 5+ independent items: only 3 launch.
Add “up to 4 in parallel” explicitly: then 4 launch.

Expected Behavior

Agent should use all available parallel slots by default without needing to hardcode the number.
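The expected behavior is essentially a bounded worker pool: the parallel limit lives in one configurable place, and the dispatcher always saturates the available slots. A minimal sketch of that scheduling idea, assuming a hypothetical `MAX_PARALLEL` constant and `dispatch_subagent` stand-in (these are illustrative names, not Cursor internals):

```python
import asyncio

MAX_PARALLEL = 4  # hypothetical: the platform's current limit, defined once


async def dispatch_subagent(item: str) -> str:
    # Stand-in for launching one subagent on an independent item.
    await asyncio.sleep(0.01)
    return f"done: {item}"


async def run_all(items: list[str]) -> list[str]:
    # The semaphore caps concurrency at the limit; as soon as one subagent
    # finishes, the next item starts, so all slots stay in use.
    sem = asyncio.Semaphore(MAX_PARALLEL)

    async def bounded(item: str) -> str:
        async with sem:
            return await dispatch_subagent(item)

    return await asyncio.gather(*(bounded(i) for i in items))


results = asyncio.run(run_all([f"item-{n}" for n in range(8)]))
```

If the limit in `MAX_PARALLEL` were raised, this scheduler would use the extra slots with no change to the caller's instructions, which is the auto-scaling behavior requested above.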

Operating System

Windows 10/11

Version Information

Version: 2.5.20 (user setup)
VSCode Version: 1.105.1
Commit: 511523af765daeb1fa69500ab0df5b6524424610
Date: 2026-02-19T20:41:31.942Z
Build Type: Stable
Release Track: Default
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.22631

For AI issues: which model did you use?

claude-4.6-opus-max (thinking enabled, MAX mode)

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the report. Ultimately, the model decides how many subagents to run in parallel: it interprets your instructions and chooses how many to launch, which is why explicitly adding “up to 4 in parallel” changes the behavior.

Right now the most reliable workaround is exactly what you found: specify the number in the instructions. I get that this isn’t ideal, since you’d want auto-scaling if the limit changes.

I’ve passed this to the team.

I also saw your related thread about batch scheduling, “Parallel subagents: next batch waits for entire previous batch to finish”. It’s the same area to improve, and I’ll flag it too.
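To illustrate the difference that thread is about: batch scheduling makes a freed slot idle until the whole batch finishes, while a sliding-window pool hands a freed slot the next item immediately. A sketch of the two strategies (names like `LIMIT` and `work` are hypothetical, and the sleep durations are only there to make one task per batch slow):

```python
import asyncio
import time

LIMIT = 4  # hypothetical parallel-slot limit


async def work(item: int) -> None:
    # Uneven durations: one slow task per group of four exposes the barrier.
    await asyncio.sleep(0.2 if item % 4 == 0 else 0.05)


async def batched(items: list[int]) -> None:
    # Batch scheduling: every slot waits for the slowest task in its batch
    # before the next batch starts.
    for i in range(0, len(items), LIMIT):
        await asyncio.gather(*(work(x) for x in items[i:i + LIMIT]))


async def pooled(items: list[int]) -> None:
    # Sliding window: a freed slot immediately picks up the next item.
    sem = asyncio.Semaphore(LIMIT)

    async def bounded(x: int) -> None:
        async with sem:
            await work(x)

    await asyncio.gather(*(bounded(x) for x in items))


def timed(coro) -> float:
    t0 = time.perf_counter()
    asyncio.run(coro)
    return time.perf_counter() - t0


t_batch = timed(batched(list(range(8))))  # ~2 x slowest task
t_pool = timed(pooled(list(range(8))))    # slow tasks overlap with fast ones
```

With these durations the batched run takes roughly two full slow-task periods, while the pooled run overlaps the slow tasks with the fast ones and finishes sooner.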

If you can share a Request ID from one of these sessions (chat context menu, top right, then Copy Request ID), that would help the team investigate the model behavior more precisely.

Request ID: da57c516-c5a8-4266-9f09-92b23fc01a29

Privacy Mode is enabled and controlled by our org admin. It can’t be turned off.
