Not knowing which model is answering is what is preventing me from using this auto mode. Please add transparency, it costs nothing. It’s important to understand the tools we are using well in order to get the best out of them. Thanks in advance.
I also vote for this. Knowing which model actually handled each call is an important reference for evaluating whether a model is usable, and it lets developers make better choices about how to process the results.
Last time I asked, it said 3.5 Sonnet.
So, no, this workaround is neither consistent nor reliable. And the solution is extremely simple (anyone here could code it in minutes, no AI needed).
I don’t think “making a model look bad” is what the team is worried about. It does look bad, though, when you get a poor answer while thinking it came from a good model; if we knew which model was used, we could switch to manual model selection.
The idea should be: the prompt’s complexity is evaluated, and an appropriate model is selected automatically based on that complexity and on availability. Then tell us what level of complexity was estimated and which model was selected (rough sketch below).
I am completely OK with a simple model handling my simple prompts (though most of what I do requires complex thinking).
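To make the ask concrete, here is a minimal sketch of what “estimate complexity, pick a model, and report both back to the user” could look like. Everything in it is made up for illustration: the model names, the word-count heuristic, and the `route` function are assumptions, not how the real auto mode works.

```python
# Illustrative sketch only: model names, thresholds, and the complexity
# heuristic are placeholders, not the actual auto-mode implementation.
from dataclasses import dataclass


@dataclass
class Routing:
    complexity: str  # e.g. "low" | "medium" | "high"
    model: str       # which model was actually selected
    reason: str      # why, so the UI can surface it alongside the answer


def route(prompt: str, available: set[str]) -> Routing:
    if not available:
        raise RuntimeError("no models available")

    # Toy complexity estimate: real systems would use a classifier,
    # but any estimate can still be reported to the user.
    score = len(prompt.split()) + 50 * prompt.count("```")
    if score < 40:
        complexity, preferred = "low", ["small-fast-model"]
    elif score < 200:
        complexity, preferred = "medium", ["mid-model", "small-fast-model"]
    else:
        complexity, preferred = "high", ["frontier-model", "mid-model"]

    for model in preferred:
        if model in available:
            return Routing(complexity, model,
                           f"first available choice for {complexity} complexity")

    # Fall back to whatever is up, but still report the decision.
    return Routing(complexity, sorted(available)[0],
                   "preferred models unavailable, fell back")
```

The transparency part of the request is only the last step: whatever the router decides, show `routing.model` and `routing.complexity` next to the answer instead of hiding them.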