I’ve been using GPT-5 for planning and Cheetah for a number of things lately, and I’ve noticed that I am unable to expand the thinking cycles of either model. Over my time using Cursor, I have found that being able to expand a model’s thinking cycles is CRITICAL to working effectively, especially when troubleshooting models that exhibit odd, often flat-out wrong, and sometimes risky and dangerous behavior (e.g. wanting to nuke my entire git workspace, losing all the changes therein, many of which have never been committed).
Why are the thinking cycle details hidden for these models? You have hidden thinking cycle details in the past, and every time it made using the agent harder. As a developer, I find it ABSOLUTELY CRITICAL to have access to that insight. When it is blocked, I’m often at the mercy of a model that may in fact be doing things I really don’t want it to do. Being able to examine thinking cycle details allows me to understand what the model is trying to do, which then feeds into my next prompts: often the creation or update of rules, or perhaps a custom command (to add a more deterministic, well-specified flow to certain agent behaviors), etc.
Please expose the thinking cycle details for these models, including in Plan mode (which, I think, may be hiding thinking details for ALL models right now). We really need this insight.
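To make the “reading thinking, then writing a rule” loop concrete: after catching a model mid-thought about resetting my workspace, I ended up with a guardrail rule roughly like the one below. This is a sketch from my own setup; the `.mdc` frontmatter fields (`description`, `alwaysApply`) are what Cursor project rules use as far as I know, but check the current docs before copying it.

```
# .cursor/rules/git-safety.mdc
---
description: Guardrails against destructive git operations
alwaysApply: true
---
- Never run destructive git commands (`git reset --hard`, `git clean -fd`,
  `git checkout -- .`, deleting branches) without explicit user confirmation.
- Before any history-rewriting or discard operation, list the uncommitted
  changes that would be lost and wait for approval.
- Prefer `git stash` over discarding changes when the working tree must be clean.
```

Without visible thinking cycles, I would never have known such a rule was needed until after the damage was done.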
I have the same question - seeing the AI’s reasoning is critically important, because reading its thought process makes it much easier to steer it in the right direction and refine the request. This dramatically speeds up development, as you no longer need to blindly clarify queries over and over again without understanding where the AI went wrong in its reasoning.
Perhaps I just haven’t found it yet: is there an option to deliberately enable visibility of the AI’s internal reasoning? Right now I see that it “Thought for 9 seconds,” but when I expand the step, all I see is something like “Adding early return,” which isn’t informative at all.
Alternatively, maybe I need to disable a certain model. If so, how can I tell which one it is? I’d gladly switch away from it without hesitation.
I have never found such an option. I also can’t quite tell anymore when thinking models are even thinking. Perhaps they just think less; I am not sure. With Sonnet 4.5 and Opus 4.5, I can see them running tools plenty: reading files, searching, etc. However, the “Thinking” or “Thought for X” entries are far less frequent now. I am not sure if that is fundamentally the model, and if so, then it’s just the model and that is fine. My concern, though, is that thought cycles aren’t even being shown anymore. Hiding thought cycles would be very detrimental, as they are often the only source of insight into WHY a model is doing what it’s doing, and the only way to refine your prompting to ensure the model does the right thing.
To the Cursor team: please don’t eliminate or hide thinking cycle details. They are a critical part of managing agentic development. I know the hard-core vibers out there don’t want to see anything and don’t care about anything but “shipping to prod.” They don’t represent everyone who uses Cursor, though; many of us are trying to use agentic IDEs to accelerate our work, not to replace good software development practices with a firehose to prod. Many of us NEED the insight that thought cycle details give us.
This was very useful. I would frequently intercept the AI while it was thinking and catch errors it was making mid-thought. It was also useful to go back and see how it arrived at its conclusions, and what shortcuts it took, so I could correct it.
This is a step in the wrong direction for Anysphere: less user customization and less freedom in how you use the tool is bad.