I said $100M+ deliberately; I wasn't trying to inflate numbers without receipts.
Yeah, I missed the latest $500M ARR update, but that only proves my point more clearly: this isn't some accident or "bug." It's calculated growth built on usage-based monetization.
You don’t scale to half a billion ARR by leaving token efficiency to chance.
To me it looks like they are over-inflating cache token usage. Both models were given a task document to follow on a blank slate/empty project. I'd also love to know how Gemini had 6M input tokens for a 1300-line document when Claude only used 363K. So it's either a bug or …
request id: aa7ac888-98a7-44a6-a8df-80bad2684d9b
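Rough math on why 6M looks off, under my own assumptions about document density and tokens per word (actual tokenizer counts will differ by model):

```python
# Back-of-envelope: how big should a 1300-line task document be,
# and how many full re-reads would the reported totals imply?
# WORDS_PER_LINE and TOKENS_PER_WORD are assumptions, not measured.

LINES = 1300
WORDS_PER_LINE = 10       # assumed average for a prose task document
TOKENS_PER_WORD = 1.3     # rough rule of thumb; varies by tokenizer

doc_tokens = LINES * WORDS_PER_LINE * TOKENS_PER_WORD  # ~17K tokens
print(f"estimated doc size: ~{doc_tokens:,.0f} tokens")

for reported in (363_000, 6_000_000):
    print(f"{reported:>9,} input tokens ~ {reported / doc_tokens:,.0f} full re-reads")
# ~21 re-reads for Claude's 363K (plausible over a multi-turn agent run),
# ~355 re-reads for Gemini's 6M, which is hard to explain unless the
# entire context, cache reads included, is re-counted on every turn.
```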
I have one memory and a couple of user rules (less than 100 words), and that consumes over 25K tokens? It has been like this for a while…
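Same kind of sanity check here; the tokens-per-word ratio is an assumption, but the gap is so large the exact ratio barely matters:

```python
# Rough check: <100 words of memory + rules vs. a reported 25K tokens.

rule_words = 100
tokens_per_word = 1.3                        # rough rule of thumb
rule_tokens = rule_words * tokens_per_word   # ~130 tokens

reported = 25_000
print(f"rules themselves: ~{rule_tokens:.0f} tokens")
print(f"unexplained overhead: ~{reported - rule_tokens:,.0f} tokens")
# ~24,870 tokens must be coming from somewhere else: system prompt,
# tool definitions, injected context, or how cached tokens are billed.
```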