/best-of-n does not run parallel model worktrees — falls back to single agent

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

/best-of-n runs as a single agent instead of spinning up parallel model worktrees. The specified model names are ignored and no comparison output is produced.

Steps to Reproduce

  1. Open a new agent chat in the Editor
  2. Submit: /best-of-n gpt-5.4-high-fast, claude-4.6-opus-max-thinking
  3. Observe that only one agent runs: no worktrees are created and no model comparison is shown
    Note: /best-of-n also does not appear in the slash command autocomplete

Expected Behavior

Per the announcement (Cursor 3: Worktrees & Best-of-N), /best-of-n should run the task across both models simultaneously, each in its own worktree, with a parent agent comparing the results.

Operating System

macOS

Version Information

Version: 3.0.4 (Universal)
VSCode Version: 1.105.1
Commit: 63715ffc1807793ce209e935e5c3ab9b79fddc80
Date: 2026-04-02T09:36:23.265Z
Build Type: Stable
Release Track: Default
Electron: 39.8.1

For AI issues: which model did you use?

gpt-5.4-high-fast, claude-4.6-opus-max-thinking (Auto selected)

For AI issues: add Request ID with privacy disabled

be0d49ec-ee7d-472c-b645-c2f3a34aa166

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey @danielcoder,

This is a known limitation. The /best-of-n and /worktree slash commands are not yet available in the new Cursor 3 (Glass) interface — they only work in the classic editor mode.

As a workaround for now, you can switch to the classic interface: go to Cursor Settings > General > Interface and select Classic. The /best-of-n command will work there.

Alternatively, in the Glass interface, you can use the model picker UI to select multiple models (though this won’t create parallel worktrees like /best-of-n does).

For some reason, I’m seeing the same model for all parallel agents instead of the ones I specified: “default” everywhere, instead of “gpt-54”, “opus46”, “composer2”. Classic UI.

Thanks for reporting this, and for the screenshot — helpful to see exactly what’s happening.

This is a separate issue from the original report. The /best-of-n command is running and creating parallel agents, but the model names from your comma-separated list aren’t being applied to each runner — they all fall back to “default” (Auto) instead of the distinct models you specified.

This is a known bug that our team is actively working on fixing. For now, the runners may all end up using the same auto-routed model rather than the specific ones you listed.

why did you deprecate/remove the multiple model toggle?!?!?!?! :frowning: i want it back :frowning:


I have been relying heavily on the multi-model interface/toggle. Besides not working with multiple models, /best-of-n also doesn’t seem to properly use other skills I reference in the prompt.

Altogether this feels like a pretty significant downgrade. The pre-3.0 UI was intuitive and functional.

Following up on a few things in this thread.

Model routing fix (update for @Vlad_Tokarev): The issue where all /best-of-n runners showed the same model (e.g., “default”) instead of the specified models should have been addressed in a recent update. If you’re still seeing this on the latest version, let me know, and I’ll dig in further.

Multi-model toggle (@kevglynn, @Thomas_Levy): The “Use Multiple Models” toggle has been moved. It’s now available when starting a new cloud (background) agent, rather than in the local editor or as a follow-up. This was an intentional change. There’s a related discussion in Bring back the multiple models toggle if you’d like to add your voice there.

Skills in /best-of-n (@Thomas_Levy): Could you share more details on what you mean by skills not being used properly? For example, are you referencing skills with /skill-name in your prompt, and the runners are ignoring them? A request ID from a session where this happened would help us investigate.

Please, bring back the multiple models toggle locally!!! It was a game-changing feature, something I use every day to have multiple solutions on a single prompt, ex: UI interfaces. Best-of-n is not a replacement.


You ship updates almost every day, and with every update I hope to open the model section and see the multi-model feature back.

@Davide_Carpini - I hear you. The answer is the same as above — the multi-model toggle was intentionally moved to cloud (background) agents, and there’s no timeline for restoring it locally. The best place to track and voice support for that request is the dedicated thread here.