/best-of-n runs as a single agent instead of spinning up parallel model worktrees. The specified model names are ignored and no comparison output is produced.
Observe that only one agent runs: no worktrees are created and no model comparison is shown.
Note: /best-of-n also does not appear in the slash command autocomplete.
Expected Behavior
Per the announcement (Cursor 3: Worktrees & Best-of-N), /best-of-n should run the task across both models simultaneously, each in its own worktree, with a parent agent comparing the results.
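To make that concrete, here's a rough sketch of what "each in its own worktree" means at the git level. This is purely illustrative (plain git driven from Python); the model names are placeholders and this is not a claim about how Cursor implements it internally:

```python
import subprocess

# Illustrative only: one isolated git worktree per runner, plus a comparison step.
# The model names below are placeholders, not the ones from my report.
models = ["model-a", "model-b"]

for model in models:
    path = f"../bestofn-{model}"
    # Each runner gets its own checkout of the current commit, so edits don't collide.
    subprocess.run(["git", "worktree", "add", path, "HEAD"], check=True)
    # ... an agent using `model` would then edit files inside `path` ...

# A parent process could diff each worktree to compare what the runners produced.
for model in models:
    path = f"../bestofn-{model}"
    diff = subprocess.run(["git", "-C", path, "diff"], capture_output=True, text=True)
    print(f"--- changes from {model} ---\n{diff.stdout}")
```

In the current build, none of this isolation appears to happen: only one agent runs in the existing checkout.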
This is a known limitation. The /best-of-n and /worktree slash commands are not yet available in the new Cursor 3 (Glass) interface — they only work in the classic editor mode.
As a workaround for now, you can switch to the classic interface: go to Cursor Settings > General > Interface and select Classic. The /best-of-n command will work there.
Alternatively, in the Glass interface, you can use the model picker UI to select multiple models (though this won’t create parallel worktrees like /best-of-n does).
For some reason, I’m seeing the same model for all parallel agents instead of the ones I specified: “default” everywhere, instead of “gpt-54”, “opus46”, “composer2”. This is in the Classic UI.
Thanks for reporting this, and for the screenshot — helpful to see exactly what’s happening.
This is a separate issue from the original report. The /best-of-n command is running and creating parallel agents, but the model names from your CSV aren’t being applied to each runner — they all fall back to “default” (Auto) instead of using the distinct models you specified.
This is a known bug that our team is actively working on fixing. For now, the runners may all end up using the same auto-routed model rather than the specific ones you listed.
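To be clear about the intended behavior, here is an illustrative sketch only (not Cursor’s actual code; the function and names are made up for this example): the comma-separated model list should be split and applied per runner, with “default” used only as a fallback when no model is given for a slot.

```python
# Illustrative sketch of the per-runner model assignment that should happen.
# Function and parameter names are hypothetical, for this example only.
def assign_models(model_csv: str, n_runners: int) -> list[str]:
    requested = [m.strip() for m in model_csv.split(",") if m.strip()]
    # Pad with "default" only when fewer models than runners were specified.
    return (requested + ["default"] * n_runners)[:n_runners]

print(assign_models("model-a, model-b, model-c", 3))
# Expected: ['model-a', 'model-b', 'model-c']
# The reported bug behaves as if every slot came back as 'default'.
```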
I have been relying heavily on the multi-model interface/toggle. Besides not working with multiple models, /best-of-n also doesn’t seem to properly use other skills I reference in the prompt.
Altogether this feels like a pretty significant downgrade. The pre-3.0 UI was intuitive and functional.
Model routing fix (update for @Vlad_Tokarev): The issue where all /best-of-n runners showed the same model (e.g., “default”) instead of the specified models should have been addressed in a recent update. If you’re still seeing this on the latest version, let me know, and I’ll dig in further.
Multi-model toggle (@kevglynn, @Thomas_Levy): The “Use Multiple Models” toggle has been moved. It’s now available when starting a new cloud (background) agent, rather than in the local editor or as a follow-up. This was an intentional change. There’s a related discussion in Bring back the multiple models toggle if you’d like to add your voice there.
Skills in /best-of-n (@Thomas_Levy): Could you share more details on what you mean by skills not being used properly? For example, are you referencing skills with /skill-name in your prompt, and the runners are ignoring them? A request ID from a session where this happened would help us investigate.
Please, bring back the multiple models toggle locally!!! It was a game-changing feature, something I use every day to get multiple solutions for a single prompt, e.g. UI interfaces. Best-of-n is not a replacement.
@Davide_Carpini - I hear you. The answer is the same as above — the multi-model toggle was intentionally moved to cloud (background) agents, and there’s no timeline for restoring it locally. The best place to track and voice support for that request is the dedicated thread here.