How does "Auto" mode work?

The Auto mode for choosing the LLM is interesting. I really want to know how it selects a model based on the task. Is there a default at work here, or a sophisticated algorithm running in the background?


Yes, please!!

I’ve been documenting the models it chooses for the past week or so.

So far it’s only ever chosen Sonnet 4.5, except for 5 chats where it used Codex.

That’s from well over 500 chats at this point.

I’ve not seen it be “smart” yet and choose the right model for the right job, but at least it’s always picking the best model rather than the dumbest.
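
If anyone wants to replicate the tally, a minimal sketch could look like this; the log file name and the one-model-name-per-line format are just assumptions, not anything built in:

```python
# Minimal sketch: count self-reported model names, assuming a plain-text
# log with one reported name per line (the file name is hypothetical).
from collections import Counter

with open("auto_model_log.txt") as f:
    counts = Counter(line.strip() for line in f if line.strip())

for model, n in counts.most_common():
    print(f"{model}: {n}")
```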


How did you measure that? I’d expect it to use some cheap or free model like grok-code-fast or GPT-4.1. Always picking the most expensive model makes no sense, TBH.

I think they’re choosing a new model every week, and you can’t know which model they’ve chosen this week.

For example, this week they have a very bad model. I don’t know what it is!

I ask the model to report which model it is before starting. I was also asking it to self-identify at the end of the chat to confirm it was still the same model all the way through, but it never seemed to switch models mid-chat, so I just kept recording the starting model instead.
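
Roughly, the pattern is something like the sketch below; `send_chat` is a hypothetical stand-in for whatever client or API you’re actually driving, and the log file name is also an assumption:

```python
# Sketch of the self-identification pattern. send_chat is a hypothetical
# stand-in for whatever function actually sends a prompt and returns the
# reply text; the log file name is an assumption too.
def ask_with_model_check(prompt: str, send_chat) -> str:
    framed = (
        "Before doing anything else, state on the first line of your "
        "reply exactly which model you are.\n\n" + prompt
    )
    reply = send_chat(framed)
    # Record the first line of the reply, i.e. the self-reported model.
    first_line = reply.splitlines()[0].strip() if reply else "unknown"
    with open("auto_model_log.txt", "a") as f:
        f.write(first_line + "\n")
    return reply
```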

Auto has been very consistent in its output over the last few weeks for me. Honestly, almost no issues at all.

The last week or two on the basic plan were hell; the model was total garbage. Or maybe it was just my prompting. I’m still not sure whether it was me or the model, which is why I have it self-identify in every chat.
