/best-of-n does not run parallel model worktrees — falls back to single agent

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

/best-of-n runs as a single agent instead of spinning up parallel model worktrees. The specified model names are ignored and no comparison output is produced.

Steps to Reproduce

  1. Open a new agent chat in the Editor
  2. Submit: /best-of-n gpt-5.4-high-fast, claude-4.6-opus-max-thinking
  3. Observe that only one agent runs: no worktrees are created and no model comparison is shown
    Note: /best-of-n also does not appear in slash command autocomplete

Expected Behavior

Per the announcement (Cursor 3: Worktrees & Best-of-N), /best-of-n should run the task across both models simultaneously, each in its own worktree, with a parent agent comparing the results.

Operating System

macOS

Version Information

Version: 3.0.4 (Universal)
VSCode Version: 1.105.1
Commit: 63715ffc1807793ce209e935e5c3ab9b79fddc80
Date: 2026-04-02T09:36:23.265Z
Build Type: Stable
Release Track: Default
Electron: 39.8.1

For AI issues: which model did you use?

gpt-5.4-high-fast, claude-4.6-opus-max-thinking (Auto selected)

For AI issues: add Request ID with privacy disabled

be0d49ec-ee7d-472c-b645-c2f3a34aa166

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey @danielcoder,

This is a known limitation. The /best-of-n and /worktree slash commands are not yet available in the new Cursor 3 (Glass) interface — they only work in the classic editor mode.

As a workaround for now, you can switch to the classic interface: go to Cursor Settings > General > Interface and select Classic. The /best-of-n command will work there.

Alternatively, in the Glass interface, you can use the model picker UI to select multiple models (though this won’t create parallel worktrees like /best-of-n does).

For some reason, I’m seeing the same selected model for all parallel agents instead of the ones I specified: “default” everywhere, instead of “gpt-54”, “opus46”, “composer2”. This is in the Classic UI.

Thanks for reporting this, and for the screenshot — helpful to see exactly what’s happening.

This is a separate issue from the original report. The /best-of-n command is running and creating parallel agents, but the model names from your comma-separated list aren’t being applied to each runner — they all fall back to “default” (Auto) instead of using the distinct models you specified.

This is a known bug that our team is actively working on fixing. For now, the runners may all end up using the same auto-routed model rather than the specific ones you listed.

Why did you deprecate/remove the multiple-model toggle?!?! :frowning: I want it back :frowning:

I have been relying heavily on the multi-model interface/toggle. Besides not working with multiple models, /best-of-n also doesn’t seem to properly use other skills I reference in the prompt.

Altogether this feels like a pretty significant downgrade. The pre-3.0 UI was intuitive and functional.