As the title suggests, when using o3 you can’t click on the model’s Thinking process to view its reasoning, like you can with Claude or Gemini (see screenshot).
Seeing the reasoning is very useful because you can spot the moment the model derails from what you want it to implement and re-guide it.
o3 is like a very smart (but cocky) senior engineer. It doesn’t tell you what it’s doing; it just does stuff. But as we know with AI models, you can’t let them run free into the wild without any supervision.
Steps to Reproduce
Use o3 as your selected model.
Try to click where it says “Thought for X seconds”.
You won’t get the typical dropdown containing the model’s reasoning.
Expected Behavior
When you click on “Thought for X seconds”, the dropdown appears and you can read the model’s reasoning.
Isn’t that a bug, then, that you can’t see the reasoning of OpenAI models? Because the reasoning is happening: when you hit Enter, you can see it briefly before it gets hidden behind the “Thinking for X seconds” element, so the reasoning text exists.
It shows for a brief moment, and then it just disappears into the “Thinking…” element.
I’ve seen that in around 10% of cases you can actually click on “Thinking…” and see the whole reasoning, like with Claude and Gemini.
But in the vast majority of cases (~90%, napkin math of course), you have no access to the reasoning. So I find myself constantly asking o3 things like “tell me what you’re doing” or “what’s your plan?”.