Queued Requests Are Model-Specific

Feature request for product/service

Chat

Describe the request

When queueing requests, each request would keep the model that was selected when it was added to the queue. I believe that currently, when a queued request is processed, it uses whichever model is selected in chat at that moment. With a model stored per queued request, I could better match each request to the right model (capability vs. cost). It would also be useful to change the model of a queued request via the same dropdown model selector, as long as it hasn't run yet.

It would also be helpful to show the model that generated each response alongside the request text in chat after it has been processed, so you can scroll through the chat and see which response came from which model. A rough sketch of both ideas is below.
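
To illustrate, here is a rough TypeScript sketch of what I have in mind. All of the names (ModelId, QueuedRequest, ChatEntry, enqueue, updateQueuedModel, complete) are made up for illustration and are not the product's actual API; the idea is just that a queue entry captures its model when it is added, can be edited until it runs, and keeps that model with the result.

```ts
// Hypothetical types and functions, for illustration only.
type ModelId = "auto" | "gpt-5" | "gpt-5-mini";

interface QueuedRequest {
  id: string;
  prompt: string;
  model: ModelId; // captured when the request is added to the queue
  status: "queued" | "running" | "done";
}

interface ChatEntry {
  prompt: string;
  response: string;
  model: ModelId; // kept so chat history can show which model answered
}

const queue: QueuedRequest[] = [];

// Queueing captures the currently selected model instead of resolving it later.
function enqueue(prompt: string, selectedModel: ModelId): QueuedRequest {
  const request: QueuedRequest = {
    id: crypto.randomUUID(),
    prompt,
    model: selectedModel,
    status: "queued",
  };
  queue.push(request);
  return request;
}

// The model can still be changed from the dropdown while the request is queued.
function updateQueuedModel(id: string, model: ModelId): void {
  const request = queue.find((r) => r.id === id);
  if (request && request.status === "queued") {
    request.model = model;
  }
}

// When a queued request finishes, its model is stored with the result,
// so the chat view can label each response with the model that produced it.
function complete(request: QueuedRequest, response: string): ChatEntry {
  request.status = "done";
  return { prompt: request.prompt, response, model: request.model };
}
```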

Example use: start a request with a higher-end model (gpt-5) to lay down some of the foundation, then queue cheaper or free models (Auto or gpt-5-mini) for smaller tasks related to the changes the higher-end model made. Then add another gpt-5 request for a different feature, and the cycle continues.

Operating System (if it applies)

Windows 10/11