The Sonnet 3.7 Thinking model isn’t thinking at all. It’s literally just the normal model. What’s even weirder is that if you ask the model what it is, it simply says “Sonnet 3.7,” which makes sense, because that’s evidently all it is.
I would also like to point out that when the model first came out, you could clearly see the thinking process and how it worked. Now, it just returns the result instantly like a regular model.
It is not thinking. I’m on Cursor 0.45.17 and it just returns the result like a normal model. When Sonnet 3.7 Thinking came out in Cursor a few days ago, I could clearly see the Thought Process you describe; now it just acts like any normal model.
The details I posted are from about an hour ago, so it IS thinking.
Depending on the prompt it may not think for long, but some prompts produce a fairly long and quite detailed thinking process.
Let’s run the same prompt on a new file, then, and compare the outputs with screenshots, because I’m telling you, it is NOT thinking. And I’m not the only one.
It’s not thinking for me either, and it says that it’s Claude Sonnet 3.7, with no thinking process at all. When Cursor first launched it, it was working correctly and I could see the thinking process; now it just answers instantly without thinking.
Not sure why one would expect the thinking model to say it is the thinking model. It might not even be trained to do that.
which model are you, explain your capabilities and differences from other models of your provider
Thought Process
The user is asking me to identify which model I am and explain my capabilities and differences from other models provided by my company (Anthropic). Let me provide a complete response about my identity as Claude 3.7 Sonnet and my capabilities.
I am Claude 3.7 Sonnet, developed by Anthropic. I'm part of the Claude 3.7 family, which represents Anthropic's latest generation of AI assistants.
---
My capabilities:
Advanced reasoning and problem-solving: I can analyze complex problems, follow multi-step reasoning paths, and provide detailed solutions across various domains.
Coding expertise: I can help with numerous programming tasks including:
Writing code in multiple languages
Debugging existing code
Explaining code functionality
Implementing features
Making architecture recommendations
Following best practices
Tool usage: I can use various tools to assist you, including searching codebases, reading files, editing code, running terminal commands, and more.
Natural conversation: I can engage in fluid, contextual conversations that maintain coherence over extended interactions.
Nuanced understanding: I can comprehend complex instructions, follow detailed requirements, and adapt to your specific needs.
How I differ from other Anthropic models:
Compared to Claude 3 Opus: Opus is Anthropic's most powerful model, with superior capabilities for extremely complex tasks. While I (Sonnet) excel at most tasks, Opus has higher reasoning abilities and can handle more intricate problems.
Compared to Claude 3 Haiku: Haiku is Anthropic's fastest and most cost-effective model. I offer more advanced reasoning and capabilities than Haiku, though Haiku provides quicker responses for simpler tasks.
Compared to Claude 3.5 Sonnet: As Claude 3.7 Sonnet, I represent an improvement over the 3.5 version with enhanced reasoning capabilities, greater accuracy, and improved tool usage.
Compared to earlier Claude generations: I offer significant improvements in reasoning, knowledge understanding, safety, helpfulness, and overall performance compared to Claude 2 and earlier versions.
In this environment, I'm integrated with Cursor IDE, allowing me to directly interact with your codebase through various tools rather than simply describing what you should do. This makes me particularly effective as a coding assistant that can take direct actions to help implement solutions.
I mean it’s not showing the thinking UI where it actually thinks before it answers; it’s only showing instant answers, which the thinking model did not do some hours earlier.
I have noticed this exact same problem. I am on the Pro plan, but ran out of fast credits, so I’m in the slow lane after spending $40. I noticed today I was finally able to use 3.7 Thinking again. After 30 or so questions back and forth with 3.7, it only ever showed the thought process once. Otherwise it would instantly spit out answers, even telling me it was reading files when it never scanned them.
I will say, when 3.7 actually thinks, it’s pretty impressive: it fixed a long-running bug I had. I’d love it if we could actually use the full potential of 3.7, like we can when using it directly from Anthropic. There is some serious potential there!
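For anyone who wants to check this outside Cursor, here’s a minimal sketch against Anthropic’s own API, assuming the `anthropic` Python SDK and the `claude-3-7-sonnet-20250219` model ID (both worth double-checking against the current docs). With extended thinking, the thinking budget is an explicit request parameter and the thought process comes back as its own content blocks, so you can see directly whether the model is thinking:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model ID
    max_tokens=4096,                     # must be larger than the thinking budget
    thinking={"type": "enabled", "budget_tokens": 2048},
    messages=[{"role": "user", "content": "Why might a debounced search box fire twice?"}],
)

# The response interleaves thinking blocks with the final answer.
for block in response.content:
    if block.type == "thinking":
        print("THINKING:", block.thinking)
    elif block.type == "text":
        print("ANSWER:", block.text)
```

If thinking blocks show up here but never in Cursor, the model is fine and it’s the integration (or its UI) that is hiding them.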
I think Anthropic has started to limit the thought process because of the load, or is even switching to some other model like Claude 3.5. You can feel it in the responses. At the very beginning, when they had just launched 3.7, the nature of the work was strikingly different (in a positive way) from what began to happen after a while…
Apparently the workload is so heavy that they have to do everything they can to get by… I don’t like it either. But such is the era… Hopefully after a while we will remember it with a smile…
I think the issue is that on the initial prompt it has no context other than your words, so there is not much to think about.
Even if you “add” a bunch of files to context, all you are really doing is referencing them; its thoughts amount to “I should read those files.” It comes down to Cursor trying to be efficient with context.
The way around it is to do it in two steps:
1. Gather all the context.
2. Then, with the context in place, it can think (see the sketch below).
I believe in Cursor it might be best to use the non-thinking model for 99% of requests and leverage thinking when you get stuck.
If we had a way to say “actually add this file to context,” then the thinking model would be more effective (but the costs would go up).
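Against the raw API, that two-step pattern looks roughly like this (same assumptions as the sketch above; the file path and both prompts are made-up examples). The first turn does nothing but load the actual file contents into the conversation; only the second turn asks the model to think:

```python
import pathlib
import anthropic

client = anthropic.Anthropic()

# Step 1: gather context. Paste the real file contents into the conversation,
# not just a reference to them. (src/app.py is a hypothetical example.)
source = pathlib.Path("src/app.py").read_text()
history = [
    {"role": "user", "content": f"Here is src/app.py for context:\n\n{source}"},
    {"role": "assistant", "content": "Got it, I've read src/app.py."},
]

# Step 2: with the content actually in context, ask the hard question with thinking on.
history.append({"role": "user", "content": "Why does the startup hook run twice?"})
response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model ID
    max_tokens=8192,
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=history,
)
print(response.content)
```

In Cursor itself the equivalent is just two prompts: one that makes it actually read the files, then a second that asks the real question once the contents are in context.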
Same issue: Sonnet 3.7 Thinking would just output answers directly…
After an hour of use it suddenly started showing its thinking again, so it must have been an intermittent issue.