Track LLM per Reply & Preserve Model per Chat

The chat tools are becoming easier to use, and flexibility has increased with both the number of @ commands and the number of models provided. I find myself switching between models to save my fast requests. So I have the following two feature requests:

  1. Each reply from the LLM in chat or composer should list the language model used in that reply. Why? Because if I switch LLMs during a conversation, there is no record of which LLM produced the earlier replies.

  2. Each chat should track its LLM separately. Currently, when I change the LLM in a new chat conversation and then return to a previous chat conversation, the new LLM is selected rather than the last LLM used in that previous thread.


Agree, I’ve been wanting this for a while now. I was demoing to my team how I used a particular custom mode to solve a problem in Cursor, and when I went into that chat history, the mode and model were not saved with that chat. I couldn’t help but pray for this feature to be in the app soon.


Alright, it’s been over a year on this thread with no reply (and multiple similar threads in the same time period).

Can Cursor address this missing feature? Currently, the chat history does not show anywhere which model was used, or what the Model Selector was set to, when a given prompt was run.

We just need it to show whatever value the Model Selector had at the bottom of each response (maybe on the bottom right, near the context usage?). Nothing fancy: if it was ‘Auto’ or ‘Premium’, just list those values; if it was a specific model, list the model name or ID.

Without this feature, it is almost impossible to branch and experiment with different model results, because there is no way to keep track of which response used which model. People need to see the intelligence and thinking level used with each response so they can judge its likely quality and tell whether it was built with a SOTA model or a lower-level one. I want to avoid building a plan with a dumb model and then executing that plan with a smart model, but as it stands there is no way to tell unless you just sent the last prompt and remember what the selector was set to.

On top of this, because the model selector is not chat-specific (changes to the selector apply to ALL Cursor chats), it is also easy to switch to a cheap model in a side chat, then return to the main window and accidentally run the dumb model on an advanced task without realizing it. With the current lack of history on each response, if you don’t notice that the model was set incorrectly when you send the message, you will never realize that response used the dumb model rather than the smart one.

You guys see how much of a usability issue this is, right? It seems like a fairly simple feature to store the text string for the model used and surface it somewhere on the response UI. I imagine the model and thinking level must already be somewhere in the response data; we just need to extract it and surface it for the user.

This would be great. Also add timestamps, at least when we click on the “…” menu.