Confirming Critical Bug: 96M Tokens Drained in 28h Due to Cache Read Loop
I am experiencing this exact issue on the latest version. My usage logs confirm a runaway read loop with claude-4.5-sonnet-thinking.
Here is the data from my CSV export:
Timeframe: Jan 30 - Jan 31 (~28 hours)
Total drained: 96.2 million tokens
The “smoking gun”: of those 96.2M tokens, 82.9 million were cache reads, while the actual output was only ~800k tokens.
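For anyone who wants to check their own export, here is a minimal sketch of the aggregation I ran. The column names (`input_tokens`, `cache_read_tokens`, etc.) are assumptions on my part; adjust them to whatever header your CSV actually uses.

```python
import csv
from collections import defaultdict

# Assumed column names -- change these to match your export's header.
TOKEN_COLUMNS = ["input_tokens", "output_tokens", "cache_read_tokens", "cache_write_tokens"]

def summarize(path: str) -> dict[str, int]:
    """Sum each token column across every row of the usage export."""
    totals: dict[str, int] = defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for col in TOKEN_COLUMNS:
                totals[col] += int(row.get(col, 0) or 0)
    return totals

totals = summarize("usage_export.csv")
grand_total = sum(totals.values())
print(f"Total tokens: {grand_total:,}")
for col, n in totals.items():
    print(f"  {col}: {n:,} ({n / grand_total:.1%})")
```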
The Agent/Thinking model clearly got stuck in a “Reviewing” state, repeatedly re-reading the entire context without exiting. This burned through my entire quota/credits overnight.
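If it helps anyone reproduce or report this, a crude way to spot the loop in the same export is to look for long runs of consecutive requests whose usage is almost entirely cache reads with near-zero output. This is only a sketch under the same assumed column names, with arbitrary thresholds:

```python
import csv

# Same assumed column names as the sketch above.
TOKEN_COLUMNS = ["input_tokens", "output_tokens", "cache_read_tokens", "cache_write_tokens"]

def find_loop_runs(rows, ratio=0.95, min_run=10):
    """Yield (start, end) row-index ranges of consecutive requests whose
    usage is almost entirely cache reads -- the signature of a re-read loop."""
    run_start = None
    for i, row in enumerate(rows):
        total = sum(int(row.get(c, 0) or 0) for c in TOKEN_COLUMNS)
        reads = int(row.get("cache_read_tokens", 0) or 0)
        looping = total > 0 and reads / total >= ratio
        if looping and run_start is None:
            run_start = i
        elif not looping and run_start is not None:
            if i - run_start >= min_run:
                yield (run_start, i - 1)
            run_start = None
    if run_start is not None and len(rows) - run_start >= min_run:
        yield (run_start, len(rows) - 1)

with open("usage_export.csv", newline="") as f:
    rows = list(csv.DictReader(f))
for start, end in find_loop_runs(rows):
    print(f"Suspected loop: rows {start}-{end} ({end - start + 1} requests)")
```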
I have already emailed support with the logs, but I'm posting here to confirm that v2.4.x has a critical loop bug.