Agreed!
Cursor’s opacity makes me think they are about to collapse as a company sooner rather than later.
Hey, thanks for the feedback on this (and sorry for the late reply!)
While we always try to be transparent about Cursor’s behavior while you use it, Auto Mode is a harder problem to solve than it may seem on the surface.
When you use Auto Mode, it intelligently routes to a premium model based on multiple factors, including the model’s availability and any downtime from the model providers.
What this means in effect is that we don’t know which model will answer your query until the response is complete.
Not only would this require some infrastructure changes to surface the model to the client, but we also introduced Auto Mode precisely so that people who don’t want to learn about the models don’t have to!
As such, while I agree this should be something we show per message, it’s not a top priority for the team right now, unfortunately. I will pass this back, and have added this to our feedback tracker to ensure we add it in the future.
Too many updates, and they’re not consistent; basic major actions keep changing. PLEASE STOP IT. Sonnet 3.5, 3.7, and 4 are always stuck waiting. Auto mode is recommended but not at all usable: it doesn’t keep context and doesn’t work like Claude.
Hello!
Most of the time I’m using the “Auto” mode.
Although I can definitely tell when the model changes, it is really difficult for me to form an opinion on which model is best for which tasks if I can’t tell which one was used to generate a response/operation.
That being said, I think it would be really helpful to be able to see (or have an option to see) which model generated the response.
Thank you very much for your hard work!
First, I would like to apologise for re-opening something that is closed in two different places (see my references below).
But I clearly didn’t explain myself well enough, so I would kindly ask for a re-visit of the matter:
What would really help is to know which AI has generated a reply.
Please note: after it has been generated!
Why?
It would heavily increase transparency, and the most important point: We would be able to improve our workflow.
Moreover, the concern @danperks explains about how the models are chosen and used would not be a problem, since the model would only be shown after the response is complete.
How would this help improve our workflow?
By learning which model is better at solving different types of issues.
- I prompt in Auto
- AI replies a TERRIBLE answer
- I look at which AI generated that terrible reply, and I remember: this AI is no good for this type of problem.
- I change to a more reliable AI (or to one that has previously shown itself to be great at this type of problem).
Again, sorry for re-opening the topic; I hope this makes clear which feature I’m requesting, and why and how it would help.
Thanks
Previous conversations reference:
Please add this!
I’ve been enjoying the auto trial, but some answers have been better than others. I think adding this feature will make auto a better tool for model discovery, optimal model choice, and cost efficiency.
Also, adding a custom rule is a decent workaround, but models hallucinate and context gets saturated. It’d be great to have an answer I could be 100% confident in. It’d also be great to see other params like OpenAI reasoning effort, etc.
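For anyone wanting to try the custom-rule workaround mentioned above, a minimal sketch might look like the rule file below. This assumes Cursor’s project-rules format (a `.mdc` file under `.cursor/rules/`); the exact frontmatter fields may differ by version, and as noted, the model’s self-report can be hallucinated, so treat the output as a hint rather than ground truth:

```markdown
---
description: Ask the model to identify itself (self-report may be inaccurate)
alwaysApply: true
---

At the end of every response, state which model you are on a single line,
in the form "Model: <name>". This is the model's own claim and may be wrong.
```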