Add an option to run a single request through two or more models in parallel.
- Allows for direct comparison between available models.
- Lets users spend more money or tokens per request in exchange for picking the best result from multiple outputs.
I see this feature as primarily useful for testing models against each other, but some users might genuinely appreciate being able to pick what they consider the best solution from two or three models at once.
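
As a minimal sketch of how such a fan-out might work (the `call_model` coroutine below is a hypothetical stand-in for whatever model client the app already uses, and the model names are placeholders), the same prompt would be dispatched to every selected model concurrently and the outputs collected for side-by-side comparison:

```python
import asyncio

# Hypothetical stand-in for the app's existing model client;
# in practice this would wrap the real API call for each provider.
async def call_model(model: str, prompt: str) -> str:
    await asyncio.sleep(0.1)  # simulate network latency
    return f"[{model}] response to: {prompt}"

async def fan_out(prompt: str, models: list[str]) -> dict[str, str]:
    """Send one prompt to several models in parallel and
    return each model's output keyed by model name."""
    tasks = [call_model(m, prompt) for m in models]
    results = await asyncio.gather(*tasks)  # preserves input order
    return dict(zip(models, results))

if __name__ == "__main__":
    outputs = asyncio.run(fan_out(
        "Refactor this function for readability.",
        ["model-a", "model-b", "model-c"],  # placeholder model names
    ))
    for model, text in outputs.items():
        print(f"--- {model} ---\n{text}\n")
```

Since the calls run concurrently, total latency stays close to that of the slowest model rather than the sum of all of them, while cost scales with the number of models selected.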