Auto mode: please tell us exactly which model is being used

Not knowing which model is answering is what's preventing me from using Auto mode. Please add this transparency; it costs nothing. It's important to understand the tools we use in order to get the best out of them. Thanks in advance.

Vincent

7 Likes

I also vote for this. The actual results of each model call are an important reference for evaluating how usable a model is, and knowing which model produced them would give developers a better basis for choosing how to proceed.

1 Like

I think you can set a Cursor Rule like "Always tell me which model you are before replying to anything."
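
Something along these lines, assuming a project-level rules file (the exact file name and location depend on your Cursor setup; a `.cursorrules` file at the project root is one common place):

```
At the start of every reply, state which model is generating the response.
```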

The model itself is not always aware of its own model code name, though.

1 Like

Most models have a clear name and know it :slight_smile:

This workaround bloats every response, and the extra context can confuse the model in later requests.

This feature is important and long overdue.

1 Like

If you think it through, the answer is simple: if they were using a good model, they would tell you, 100%, because it would make them look good. Soooo… :slight_smile:

+1 agree

Auto always uses GPT-4.1 for me.

Last time I asked, it said 3.5 Sonnet.
So no, this workaround is neither consistent nor reliable. And the solution is extremely simple (everyone here could code it in minutes, no AI even needed).

I don’t think “looking bad” is what the team is weighing here. It does look bad, though, when you get a poor answer while assuming a good model was being used; if we knew the model, we could switch to picking one manually.

The idea should be: the prompt's complexity is evaluated, and Auto selects an appropriate model based on that complexity and on availability. Then it should tell us what level of complexity was estimated and which model was selected (roughly along the lines of the sketch at the end of this post).

I am completely OK with a simple model taking care of my simple prompts (though most of what I do requires complex thinking).
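
A minimal sketch of the kind of routing I mean; the model names, thresholds, and the complexity heuristic here are all invented for illustration and are not how Cursor actually works:

```python
# Hypothetical sketch of complexity-based routing. Model names, thresholds,
# and the scoring heuristic are invented for illustration only.
from dataclasses import dataclass


@dataclass
class RoutingDecision:
    complexity: str  # "low" | "medium" | "high"
    model: str       # which model Auto picked


def estimate_complexity(prompt: str) -> str:
    """Toy heuristic: longer, code-heavy prompts count as more complex."""
    score = len(prompt) / 500
    if "```" in prompt or "refactor" in prompt.lower():
        score += 1
    if score < 1:
        return "low"
    if score < 2:
        return "medium"
    return "high"


def route(prompt: str, available: dict[str, bool]) -> RoutingDecision:
    """Pick a model for the estimated complexity, falling back by availability."""
    complexity = estimate_complexity(prompt)
    preferences = {
        "low": ["small-fast-model", "mid-model"],
        "medium": ["mid-model", "frontier-model"],
        "high": ["frontier-model", "mid-model"],
    }
    for model in preferences[complexity]:
        if available.get(model):
            return RoutingDecision(complexity, model)
    return RoutingDecision(complexity, "small-fast-model")


decision = route("Refactor this module to use async I/O", {"frontier-model": True})
# This last line is the transparency being asked for in this thread:
print(f"Estimated complexity: {decision.complexity}; selected model: {decision.model}")
```

The point is only the last line: whatever the real routing logic is, the estimated complexity and the chosen model should be surfaced to the user.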