I’m confused… why is it so fast (for GPT)? It’s Claude-fast in how quickly the first text comes through, and the responses feel way more natural than 5.1. I don’t want to get my hopes up…
I was holding back a comment like this because I don’t have enough GPT-5.2 experience to be well-informed, but based on other comments in the thread it seems like an OK share:
From what I hear, the GPT-5.2 coding model is probably coming early next year, so there’s no comparison with GPT-5.1 Codex yet. I always had better results with the non-Codex models anyway.
Gemini was terrible for me. I turned it off after a couple of days of trying it and constantly undoing its work (real coding, in a large codebase).
Opus 4.5 is my daily driver right now, especially via the Claude Code VSCode extension, which has been basically unlimited usage at the $100 tier so far. I’ve even used its plans to drive Cursor agent implementations with Composer and then reviewed the results. That’s a pattern I want to try with GPT-5.2, maybe with it as the reviewer.
GPT-5.2 seems really good so far, but it isn’t as conversational (it doesn’t feel like a teammate sitting with me pair-programming). I want to use it more in Debug mode; so far I am loving Debug.