I’m sharing a project I’ve been building: Proxima.
Proxima is an open‑source, multi‑AI MCP server that runs locally — no API key required. The goal is to make multi‑model workflows feel like real infrastructure: connect multiple models/providers and orchestrate them like a small dev team inside the tools you already use.
As a stress test, I ran a fun (and slightly chaotic) experiment: I wired Claude, ChatGPT, Gemini, and Perplexity into one workflow and had them pitch → vote → plan architecture → write code → merge → roast each other’s code → produce a final ranking, all while building a website end‑to‑end. Messy, but it’s the closest thing I’ve seen to a “multi‑agent dev team” that actually ships.
I recorded the full experiment (the models collaborating like a dev team):
Watch: https://youtu.be/gyjjkHWkapY
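To make the workflow concrete, here’s a minimal sketch of the pitch → vote → plan → review loop in Python. Everything here is an assumption for illustration: the `ask()` helper stands in for a real model call routed through the MCP server, and the voting rule (vote for the longest pitch) is a deliberately dumb stub, not Proxima’s actual arbitration logic.

```python
from collections import Counter

# Hypothetical model roster; the real setup wires these through MCP tools.
MODELS = ["claude", "chatgpt", "gemini", "perplexity"]

def ask(model: str, prompt: str) -> str:
    """Stub standing in for a real model call through the MCP server."""
    return f"{model} -> {prompt}"

def run_round(task: str) -> dict:
    # 1. Every model pitches an approach to the task.
    pitches = {m: ask(m, f"pitch: {task}") for m in MODELS}

    # 2. Each model votes for another model's pitch (no self-votes).
    #    Stub voting rule: pick the longest pitch among the other models.
    votes = Counter()
    for voter in MODELS:
        others = [m for m in MODELS if m != voter]
        votes[max(others, key=lambda m: len(pitches[m]))] += 1
    winner, _ = votes.most_common(1)[0]

    # 3. The winning model writes the plan; the rest cross-review it.
    plan = ask(winner, f"architecture plan for: {pitches[winner]}")
    reviews = {m: ask(m, f"review: {plan}") for m in MODELS if m != winner}
    return {"winner": winner, "plan": plan, "reviews": reviews}

result = run_round("build a landing page")
print(result["winner"])  # with these stubs, "perplexity" wins (longest pitch)
```

The real version replaces the stub vote with each model scoring the other pitches, but the loop shape (fan-out → arbitrate → fan-out again) is the same.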
I’d love feedback from this community on:
- MCP tool design best practices: tool boundaries, naming, schemas, versioning
- Routing / hand‑off patterns between models: planner ↔ executor ↔ reviewer loops, arbitration, voting, etc.
- Reliability: retries, timeouts, guardrails—and how you’d structure observability for agent workflows
- Any red flags you see in the architecture or UX
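For context on the reliability question, this is roughly the retry/backoff guardrail I’ve been experimenting with around each model call. Names and defaults are hypothetical, not Proxima’s actual API; a production version would catch transport-specific errors rather than a broad class.

```python
import time

def call_with_retries(fn, *, attempts=3, overall_timeout_s=5.0, backoff_s=0.5):
    """Hypothetical guardrail: bounded retries with exponential backoff
    and an overall deadline across all attempts."""
    deadline = time.monotonic() + overall_timeout_s
    last_err = None
    for attempt in range(attempts):
        if time.monotonic() >= deadline:
            break  # out of time budget; stop retrying
        try:
            return fn()
        except ConnectionError as err:  # stand-in for transient transport errors
            last_err = err
            # Exponential backoff, but never sleep past the deadline.
            delay = backoff_s * (2 ** attempt)
            time.sleep(min(delay, max(deadline - time.monotonic(), 0)))
    raise TimeoutError(f"gave up after {attempts} attempts") from last_err

# Usage: a flaky stub that fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_retries(flaky, attempts=3, backoff_s=0.01))  # prints "ok"
```

The open question for me is where this logic should live: inside each MCP tool, or in a shared orchestration layer with its own observability hooks.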

