Multiplayer LLM Usage Strategies

I’m curious how you all are managing this.

For me:

  • DeepSeek reviews codebase for alternate techniques and blind spots
  • Sonnet or Haiku do initial planning
  • ChatGPT o3-mini prioritizes and improves the Sonnet/Haiku plan and writes the initial backlog.md
  • Sonnet fleshes out the backlog
  • Again, DeepSeek reviews the backlog for holes
  • ChatGPT o3-mini does any final checks
  • ChatGPT o3-mini leads out the gate on any backlog item, with full access to my linter and Jest output in a background terminal
  • Claude iterates on that work
  • ChatGPT o3-mini takes over if Claude starts to spin
  • I constantly refer back to the backlog & cursorrules mdc files
  • Return to the first step and repeat

Claude is creative but prone to spin-outs
Idea chat with ChatGPT is usually better handled in my dedicated account, but DeepSeek can output a really good summary
Gemini would work out a lot better if I could use its 2M-token context on demand
DeepSeek is the world’s best “what techniques haven’t I thought of yet?” reviewer
ChatGPT o3 is without a doubt the most skilled programmer
Claude is still the best at rapid iteration

Gotta bird-dog all of them though.