Currently, when I select Auto in Cursor Chat, the IDE decides which model to use for my code edits and responses. However, the chat header only shows “Auto”, so I cannot see which specific model (e.g., GPT-4, Claude, etc.) was actually used to generate the response.
Request:
Please update the UI so that when Auto is selected, Cursor displays the actual model chosen for that particular response. Possible options:
• Show the resolved model name next to “Auto” (e.g., Auto → GPT-4).
• Display the model used inline with the response metadata.
• Provide a tooltip or log entry with the model info.
Benefit:
• Improves transparency when Auto decides between models.
• Helps users evaluate output quality and consistency.
• Makes debugging and learning easier since we know which model handled the task.
There are a couple of related feature requests, but most of the chatter about identifying which model Auto is using has been in discussion threads rather than formal feature requests.
It doesn’t hurt to have this feature request too, but Cursor has gone out of its way to make sure we can’t really identify what Auto is using. The model even used to say it could not answer questions about its origin/provider.