Interesting analysis of model costs, token volume growth, and model usage value

This is a very interesting analysis of the cost of model usage, notably with the advent of thinking models that use ever greater amounts of “reasoning” (which greatly pumps up the number of tokens, and thus the cost!).
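To make that concrete, here’s a rough back-of-the-envelope sketch in Python. All the prices and token counts are made-up placeholders (not any provider’s real rates); the only real point is that hidden reasoning tokens are typically billed like output tokens, so they can dominate the bill for the same visible answer:

```python
# Rough cost comparison: the same visible answer with light vs. heavy "reasoning".
# All prices and token counts are made-up placeholders, not real rates.

PRICE_PER_1M_OUTPUT = 10.00  # hypothetical $ per 1M output tokens

def output_cost(visible_tokens: int, reasoning_tokens: int) -> float:
    """Output-side cost in dollars; reasoning tokens are assumed to bill like output tokens."""
    return (visible_tokens + reasoning_tokens) * PRICE_PER_1M_OUTPUT / 1_000_000

answer_tokens = 500  # what you actually see in the reply

light = output_cost(answer_tokens, reasoning_tokens=200)
heavy = output_cost(answer_tokens, reasoning_tokens=8_000)

print(f"light thinking: ${light:.4f}")
print(f"heavy thinking: ${heavy:.4f}")
print(f"multiplier:     {heavy / light:.1f}x for the same visible answer")
```

With those placeholder numbers, the heavy-thinking call costs roughly 12x more for an answer that looks identical to the user.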

I’ve mostly been using GPT-5 lately, but the sheer amount of time it spends “reasoning” or “thinking” is starting to drive me crazy. When I flip back to Sonnet, it spends 1-5 seconds most of the time, maybe 7-8 seconds occasionally, and just rips through my tasks without pausing for 20, 40, or 70 seconds at a time every few tool invocations.

Users need to wise up to the cost of “thinking”…especially considering this:

I’ve probed multiple models about this Anthropic study, and apparently it really is a fundamental issue with reasoning models. More is NOT better! (Except for the AI companies’ bottom lines, that is!!)
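If you want to actually see what the thinking is costing you, the usage metadata on the API response is the place to look. Here’s a minimal sketch assuming the OpenAI Python SDK, with reasoning tokens reported under `completion_tokens_details.reasoning_tokens` as I understand the Chat Completions usage object; the model name is just an example, and the field names are worth double-checking against the current docs:

```python
# Minimal sketch: make one call, then see how many of the billed output
# tokens were hidden "reasoning" tokens. Field names assumed from the
# OpenAI Chat Completions API; verify against current documentation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="o4-mini",  # example reasoning model name
    messages=[{"role": "user", "content": "Summarize the tradeoffs of B-trees vs LSM-trees."}],
)

usage = resp.usage
reasoning = usage.completion_tokens_details.reasoning_tokens
visible = usage.completion_tokens - reasoning

print(f"visible answer tokens:   {visible}")
print(f"hidden reasoning tokens: {reasoning}")
print(f"share of output bill spent on thinking: {reasoning / usage.completion_tokens:.0%}")
```

Some providers also expose a knob to dial this down (e.g. a reasoning-effort setting), which is worth experimenting with before paying for maximum thinking on every call.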

It’s up to us to vote with our pocketbooks, and not let the big AI companies like xAI and OpenAI overcharge us by jacking up reasoning token usage to line their pockets. It’s unnecessary!