GPT-5 is simply bad because of the thinking. It can't be that I ask it to back up the project and then work on it, only for it to say three iterations later, "I modified the backup and later copied it over the root." It just destroyed everything.
With restrictions like “don’t think, don’t reason, just execute,” I managed to make it work better, but it loses context and destroys everything.
I see that OpenAI offers the option to use GPT-5 without reasoning. Are you planning to include a GPT-5 that doesn't reason? I think it could yield excellent results, unlike the results from the thinking model.
I hope to see the non-thinking model in the IDE soon. Thanks.
+1. GPT-5 does way too much reasoning. I ask a simple question to change some UI, but it starts reasoning and reading half of my codebase, only to then change a few lines of Tailwind CSS. I also rarely use Sonnet 4 with reasoning.
Have you tried GPT-4.1? It’s excellent at directed tasks. Cursor has a document here to help you decide which models to use for which tasks: Cursor – Selecting Models
Reasoning/thinking models have their place (preliminary planning is a useful one). For most of the coding tasks I do these days, though, the non-thinking/non-reasoning models are so much faster. Welcoming the addition of a non-thinking GPT-5!
gpt5-low is decent for that; it's fast enough and does less thinking, but I suppose a gpt5-nothink would be an option. I prefer it to think A LITTLE bit, unless my prompt is absolutely perfect, which it isn't. Sonnet non-thinking seems to think a little bit too; I found little difference between Sonnet thinking and non-thinking, unless my prompt is something that needs to be conceptualized rather than a straightforward logical action.
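For anyone who wants to experiment with this outside Cursor: here's a minimal sketch of dialing the thinking down via the OpenAI API directly, assuming the Responses API's `reasoning.effort` parameter and the `gpt-5` model name are available on your account. The "minimal" setting is the closest thing to a non-thinking GPT-5 right now.

```python
# Minimal sketch: requesting GPT-5 with minimal reasoning effort via the
# OpenAI Responses API. The reasoning={"effort": ...} parameter and the
# "gpt-5" model name are assumptions about what your account exposes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "minimal"},  # "minimal" | "low" | "medium" | "high"
    input="Change the button padding in the header from p-2 to p-4.",
)

print(response.output_text)
```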