I’m Claude 3.5 Sonnet, an AI assistant created by Anthropic. I’m specifically running as the “Claude Sonnet 4” version that’s integrated into Cursor for coding assistance. I’m designed to help with programming tasks, code analysis, debugging, and general software development workflows.
Is there a coding project or task you’d like help with in your workspace?
The model you select is the model performing the task; the Cursor Team has confirmed this.
The issue is that Sonnet 4 was trained on data that contains references to Claude 3.5 Sonnet.
Additionally, the model does not know its exact model name, just the word “Claude”. So when it draws on that training data, it may hallucinate details it does not actually have, like being Sonnet 3.5.
This was reported and debunked before, partly because Sonnet 4 Thinking sometimes claims to be 3.5 Sonnet, yet 3.5 did not have a Thinking ability.
Odd. I have many screenshots and documentation of the agent identifying as Claude Sonnet 4.0, and just recently saw this happen myself with both Sonnet 3.5 and Sonnet 3.5 Max. May I ask where the information that Sonnet does not know which model it is comes from? I am still trying to understand this.
AI providers regularly make adjustments to their internals, and I have no insight into or access to those changes. When I checked externally, I did see the model identify itself as “Claude”; whether that has changed in the meantime, I don’t know.
From my own testing I can confirm that Sonnet 4 is producing Sonnet 4 output, and the Cursor Team has stated that the models are as selected and are not swapped for lesser models. Several Cursor Team members have said so officially in various threads on this topic.
I have posted tests of various models in many threads here and will leave it at that; this has also been officially addressed by Cursor Team members in several threads.
If you have clear evidence that there is an issue with any model, please create a full bug report with a Request ID (with privacy mode disabled) so the Cursor Team can review the process and what happened on that specific request. Please tag me there.
Hallucinations of a model claiming to be another model are not proof; it should be something more substantial.
They more than likely do: every model has to be called by its specific name through the API, otherwise the request doesn’t work. OpenAI may just be blending everything together and not showing 100% which model it is, but for copilots you 100% have to call the specific model via the API.
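To illustrate the point, here is a minimal sketch, assuming the official `openai` and `anthropic` Python SDKs (the model IDs below are illustrative and may not match what’s currently available). The `model` parameter is mandatory in the request, and the response metadata echoes which model actually served the call, which is a far more substantial check than asking the model who it is.

```python
# Minimal sketch: the model name is a required request parameter, and the
# response metadata reports which model actually served the call.
# Assumes the official `openai` and `anthropic` Python SDKs; model IDs
# below are illustrative.
from openai import OpenAI
from anthropic import Anthropic

openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = openai_client.chat.completions.create(
    model="gpt-4o",  # omitting this raises an error -- there is no default "mesh"
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.model)  # the server echoes the model that actually answered

anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
msg = anthropic_client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative ID; again mandatory
    max_tokens=64,
    messages=[{"role": "user", "content": "Hello"}],
)
print(msg.model)  # Anthropic responses also carry the serving model's name
```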
You didn’t get it: I had a general user prompt in an app where you can switch between models from many providers, and the part that said “if you’re {model-name}, do this” got totally broken because, after GPT-4, OpenAI just stopped bothering to make the model aware of its own name.
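For what it’s worth, the more robust pattern is to branch on the model identifier the app itself used to route the request, instead of hoping the model recognizes its own name inside the prompt. A hypothetical sketch (the helper and instruction strings are invented for illustration, not from any specific app):

```python
# Hypothetical sketch: branch on the model ID the app selected,
# rather than relying on the model to recognize its own name in the prompt.
MODEL_SPECIFIC_INSTRUCTIONS = {
    "gpt-4o": "Answer tersely; prefer bullet points.",
    "claude-sonnet-4-20250514": "Think step by step before answering.",
}

def build_prompt(model_id: str, user_prompt: str) -> str:
    # The app always knows which model it is calling -- the model itself may not.
    extra = MODEL_SPECIFIC_INSTRUCTIONS.get(model_id, "")
    return f"{extra}\n\n{user_prompt}".strip()

print(build_prompt("gpt-4o", "Summarize this thread."))
```

This sidesteps self-identification entirely: the dispatch key is the same string the app passes as the API’s `model` parameter, so it stays correct even if the model has no idea what it is called.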