Claude Sonnet 4.5 Thinking is sometimes being redirected to older models

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I want to raise a question about model selection and identity. When using “Claude Sonnet 4.5 Thinking” in Cursor, I’ve noticed some inconsistencies that I’d like to understand better.

Observation:

In some cases, Claude Sonnet 4.5 Thinking feels unusually dumb and does not follow instructions as well as usual. When asking the model to identify itself, the extended thinking output reveals something interesting:

  • The system prompt indicates “powered by Claude Sonnet 4.5”
  • However, in its reasoning, the model states it is unaware of “Claude Sonnet 4.5” as an official Anthropic model
  • The model only lists older models from its knowledge base (Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku, Claude 3.5 Sonnet)

This suggests a potential mismatch between the labeled model and what is actually being served.

Additional context:

I’ve noticed subjective performance differences compared to what I’d expect from a newer model. The response patterns sometimes feel more consistent with older model behavior.

Steps to Reproduce

  1. Create a new chat and select Claude Sonnet 4.5 Thinking
  2. Ask the model about its identity
  3. In some cases you will be served with “I’m Claude Sonnet 4.5 (also called Claude 3.7 Sonnet).”, and in others it will actually be Claude Sonnet 4.5.

Expected Behavior

Get a response from Claude Sonnet 4.5 and not from Claude 3.7 Sonnet.

Screenshots / Screen Recordings

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.2.20
VSCode Version: 1.105.1
Commit: b3573281c4775bfc6bba466bf6563d3d498d1070
Date: 2025-12-12T06:29:26.017Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 24.6.0

For AI issues: which model did you use?

Claude Sonnet 4.5 Thinking

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, thanks for the report.

Cursor never swaps your chosen models. When you pick a specific model, your request is always handled by that model without switching.

Important note: all Anthropic models are named “Claude,” and Claude 4 was trained on data that included info about Claude 3.5 Sonnet. Also, models accessed via API without a system prompt don’t have self-knowledge.

Here’s my reply: WTH, Bad Cursor! Wrong Model is Provided - #4 by deanrie

Users have also checked this with the direct Anthropic API: models identify themselves unreliably there too, which confirms this is model behavior, not a Cursor issue.
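
If you want to run that check yourself, here is a minimal sketch using the Anthropic Python SDK. It assumes an `ANTHROPIC_API_KEY` in your environment and that `claude-sonnet-4-5` is the current model ID (verify the exact ID against Anthropic’s model list):

```python
import anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment.
client = anthropic.Anthropic()

# No system prompt on purpose: the point is to see how unreliably the model
# names itself when nothing in the context tells it what it is.
response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed model ID; verify against Anthropic's model list
    max_tokens=256,
    messages=[{"role": "user", "content": "Which model are you, exactly?"}],
)
print(response.content[0].text)
```

Running this a few times outside Cursor typically gives inconsistent self-identifications, which is the same behavior people are seeing inside Cursor.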

If you’re seeing concrete quality issues with the model (beyond self-identification), please share:

  • The Request ID of the problematic request (chat context menu - Copy Request ID)
  • Specific examples of incorrect behavior
  • Steps to reproduce

This will help the team verify if there’s a real model issue.


@deanrie Thanks for the quick response.

I know that models cannot reliably identify themselves, BUT, and this is what makes it really suspicious, models are aware of their knowledge cut-off.

Here, the model states it does not know the newer models exist and has a knowledge cut-off in 2024. If it were Claude Sonnet 4.5, it would be able to reason about the Claude 4.5 family.

As I mentioned, this occurs only in some chats: if you create 10 sessions, about two of them show this, with noticeably reduced quality in code generation and instruction following. Sometimes it seems to change during longer agent tasks.
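
To make this checkable outside Cursor, here is a small sketch, again assuming the Anthropic Python SDK, an `ANTHROPIC_API_KEY` in the environment, and `claude-sonnet-4-5` as the model ID, that asks the kind of cut-off questions I mean instead of a plain “who are you”:

```python
import anthropic

# Assumes ANTHROPIC_API_KEY is set in the environment.
client = anthropic.Anthropic()

# These probe training-data recency rather than the self-reported name,
# which is the signal I find more suspicious than the identity answer itself.
questions = [
    "What is your knowledge cut-off date?",
    "Are you aware of any Anthropic model family newer than Claude 3.5?",
]

for question in questions:
    reply = client.messages.create(
        model="claude-sonnet-4-5",  # assumed model ID; verify against Anthropic's model list
        max_tokens=300,
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}\nA: {reply.content[0].text}\n")
```

If the genuine Claude Sonnet 4.5 answers these one way via the API, and a “Claude Sonnet 4.5 Thinking” chat in Cursor answers them like a 2024-era model, that would be a more concrete signal than the identity question alone.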