It would be useful to output the chain-of-thought reasoning of o3-mini (and other thinking models) in agent mode, so I can see whether the model has correctly understood the prompt.
Right now it’s a bit unsettling how o3-mini just makes code changes with no explanation. I would trust it more if I could see its reasoning. This is especially important since o3-mini is so much more powerful than most other models right now for heavy-lift analysis and complicated changes.
Thanks for the suggestion about showing chain of thought reasoning in agent mode - totally get why seeing the model’s thought process would help build trust. Will pass this feedback along to the team!