Auto mode: please tell us exactly which model is being used

Not knowing which model is answering is what is preventing me from using auto mode. Please add transparency; it costs nothing. It’s important to understand the tools we use well in order to get the best out of them. Thanks in advance.

Vincent

46 Likes

I also vote for it. Which model actually handled a call is an important reference for evaluating whether a model is easy to work with, and it helps developers choose how to proceed.

4 Likes

I think you can set a Cursor Rule like “Always tell me which model you are before replying to anything.”

The model itself is not always aware of its own model code name, though.
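For anyone who wants to try that workaround, a minimal sketch of such a rule might look like the following. The file name and exact rules syntax here are assumptions (Cursor has supported rules via a `.cursorrules` file, but check the current docs for your version):

```text
# .cursorrules — hypothetical workaround rule
Before answering any request, first state which model you believe
you are (for example: "Model: claude-3.5-sonnet"), then continue
with the normal answer.
```

As noted above, models frequently misreport their own identity, so treat any self-reported name as a hint, not a fact.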

2 Likes

Most models have a clear name and know it :slight_smile:

2 Likes

This workaround bloats each response and can confuse the model in later requests due to the bloated context.

This feature is important and long overdue.

4 Likes

If you think it through the answer is simple: if they did use a good model, they would tell you 100% because it would make them look good. Soooo… :slight_smile:

1 Like

+1 agree

1 Like

Auto always uses GPT-4.1 for me.

1 Like

Last time I asked, it said 3.5 Sonnet.
So, no, this workaround is neither consistent nor reliable. And the solution is extremely simple (everyone here could code it in minutes, no need even for AI).

I don’t think “looking bad” is what the team is weighing here. It does look bad, though, when you get a poor answer while believing a good model was used; whereas if we knew the model, we could switch to a manual selection instead.

The idea should be: the prompt’s complexity is evaluated, and an appropriate model is selected automatically based on that complexity and current availability. Then tell us what level of complexity was estimated and which model was selected.
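The routing idea above could be sketched roughly like this. Everything here is hypothetical (the heuristic, the model names, the availability check); Cursor’s real selection logic is not public, which is exactly the complaint in this thread:

```python
# Hypothetical sketch of complexity-based model routing.
# None of these names or heuristics reflect Cursor's actual internals.

def estimate_complexity(prompt: str) -> str:
    """Crude heuristic: long prompts or 'hard' keywords count as complex."""
    keywords = ("refactor", "architecture", "debug", "optimize")
    score = len(prompt) / 500 + sum(k in prompt.lower() for k in keywords)
    return "complex" if score >= 1 else "simple"

# Placeholder model names for illustration only.
ROUTES = {
    "simple": "cheap-model",
    "complex": "premium-model",
}

def route(prompt: str, available: set[str]) -> tuple[str, str]:
    """Return (estimated complexity, chosen model) so both can be shown to the user."""
    level = estimate_complexity(prompt)
    model = ROUTES[level]
    if model not in available:  # fall back if the preferred model is unavailable
        model = next(iter(available))
    return level, model

level, model = route("Please refactor this module", {"cheap-model", "premium-model"})
print(f"complexity={level}, model={model}")
```

The point of the sketch is the last line: whatever the routing logic is, surfacing the estimated complexity and the chosen model to the user is trivial.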

I am completely OK with a simple model taking care of my simple prompts (though most of what I do requires complex thinking).

3 Likes

I would also like to see which model was used, in the past-usage logs in Settings.

1 Like

I agree. Even if it claims to be 4.0, the backend may actually be answering with another, weaker model, and we are kept in the dark: the invocation records are only visible on the official web page, while the Cursor desktop app doesn’t show the real call records at all.

1 Like

I think a major question here is cost. Some models cost way more than others, so the lack of transparency might cause you to reach the monthly limit quite soon.

While the standard is 1 message = 1 request, many models cost far more. For example, we know Cursor favors Claude, so who’s to say they don’t prefer Thinking mode, which costs double what non-Thinking does?

Which LLM are you calling in the backend?

I apologize, but I cannot disclose information about the specific LLM or backend systems being used. This is part of my system configuration that I’m not allowed to share.
What I can tell you is that I’m here to help you with your coding tasks, and I’ll continue to assist you with your questions about the codebase, navigation, and other technical aspects of your project. Is there something specific about the code or functionality you’d like to discuss?

I find this lack of transparency alarming. What about people who have Anthropic Claude subscriptions and want to use them from inside Cursor for example?

1 Like

I’d blame the prompt. By asking “what LLM are you calling in the backend,” you’re not asking the AI model to identify itself, but about some vague internal backend detail that the model isn’t even aware of in the first place, and certainly wouldn’t be allowed to share even if it were.

For example, Gemini in Cursor doesn’t identify itself as “Cursor using Gemini” but “Gemini using Cursor”.

Yes, that’s correct. We can ask which LLM you are. Most likely, it will answer that it’s part of the ChatGPT-4 family.

This. I usually set it to a specific model, but that’s less than efficient.
When working through “dumb” answers, we need to know which models are being queried in Cursor. Without this transparency, we can’t identify what we need. I often run the same queries in parallel outside of Cursor to see what the models return.

2 Likes

I’d also like to add to this: I actually suggested auto mode months ago, before it was implemented, and I’m happy to see it delivered. But I would like more transparency about how, when, and which model was selected: basically, what kind of actions it took to make that decision, and how much the request cost.
Auto mode is not useful if we can’t understand why it makes the choices it makes. For example, Gemini has a large context, but I found Claude’s code cleaner, so I want to understand what logic Cursor uses to make these distinctions; otherwise I will just set it to one model, as that gives me predictable results.

Actually, I would be happy if you exposed some of that internal logic for user customization, so I don’t have to switch models manually myself and can instead set up auto mode the way I want it. That way no one could complain about auto mode not doing what they want, because they could make it do so.

1 Like

For me there are 4 potential use cases for auto-mode.

  1. New to Cursor and trying to see the differences between the models in practice. This is currently difficult unless we specifically ask the models to introduce themselves with each prompt, which also doesn’t work every time or for every model.
  2. Saving premium requests by using free models for less demanding prompts. This is impossible because, as the devs stated in another thread, auto-mode models are ALWAYS premium.

Devs stated they’re focused on the following 2 use cases:

  3. Saving time in case of high demand for some premium models.
  4. Users who simply don’t care which model they use.

I’m a bit out of touch with modern world, so let me ask you. Are there REALLY developers who DON’T CARE which tools they use at work? Who don’t try to take advantage of their good sides, and limit their disadvantages? Maybe there are. Maybe they’re a majority.

I guess there are. After all, there are also gamers who play Mortal Kombat or Street Fighter without using the unique skills of each character.

2 Likes

Please add this! I want to trust auto, and I simply can’t, not knowing which model it’s choosing.

Previous FR:

3 Likes