Today, I tested the Claude 3.7 Thinking model in a new chat thread. It worked perfectly and increased my premium model call count from 78 to 80. Since the following tasks were much simpler, I switched to the Claude 3.7 (non-thinking) model for the next instruction in the same thread. However, after running the non-thinking model, I noticed the premium call count jumped from 80 to 82, which seems incorrect for a non-thinking model call.
It’s amazing to me that Claude and other agents continuously fix issue A, causing issue B, and then, when you go to fix issue A again, you realize you’re stuck in a perpetual loop of the same errors over and over. Yet we still get charged as if the service were pristine and perfect. I guess I should blame consumers for putting up with this nonsense.
Hey, when you switch models within the same thread, sometimes the switch doesn’t actually take effect even though the displayed model changes. Try starting a new chat to check this.
Yes, that’s what I thought and what I did. But it’s still a bug, and it’s inconvenient because starting a new chat thread loses the context of the current one.