Add a rule stating: “MANDATORY: At the beginning of every chat response, you MUST clearly state the model name, model size, model type, and its revision (updated date). This does not apply to inline edits, only chat responses.”
Select a version (Claude Sonnet 4 Think)
Prompt and watch output.
Expected Behavior
Cursor appears to use cheaper models even though a more expensive one was selected…
I have the same problem. The model changes under the hood and breaks my code. I was able to work around it temporarily by opening a new chat. At the start of every new chat, I ask: “What model are you?”
Hi @MeziGit @JensRad, thank you for the feedback and your concern about model usage.
When you select a specific model, your request is passed to the AI provider using that model only. We have checked this several times, and there is no issue with requests being routed to the wrong model.
So when Claude 4 Sonnet is selected, you will receive a response from the Claude 4 Sonnet model.
Please note the following:
Anthropic models are trained to know their name only as “Claude”.
Claude 4 models were trained on data that contained information about Claude 3.5 Sonnet.
They are also trained to be helpful, so when they do not have sufficient information they may give the most probable answer, even if it is not correct.
I’ve tested this several times, others have checked it as well in the Anthropic Console, and you can test it yourself with their API. In about 30% of cases the model answers incorrectly.
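If you want to verify this yourself, here is a minimal sketch using the anthropic Python SDK. The model ID, prompt wording, and number of runs are only illustrative assumptions; substitute whatever model you want to test.

```python
# Minimal sketch: ask a Claude model to identify itself via the Anthropic API.
# Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set in the
# environment. The model ID below is an example; swap in the model under test.
import anthropic

client = anthropic.Anthropic()

for run in range(10):
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # example model ID
        max_tokens=100,
        messages=[{"role": "user", "content": "Which Claude model are you, exactly?"}],
    )
    # The self-reported name frequently differs from the model actually serving
    # the request, because the model only repeats what its training data said.
    print(f"run {run + 1}: {message.content[0].text}")
```

Running this several times shows the self-reported name varying even though the same model serves every request, which is why asking the model what it is cannot be used to verify routing.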