Are the Models going Insane?

You know what they say about the definition of insanity: doing the same thing over and over and expecting different results. This went on for 10 more loops beyond the screenshot before I stopped it. Good thing I was paying attention, too. This is completely unacceptable token burn on an expensive model (Gemini 3 Pro). What is Cursor doing to compensate for this kind of waste, which only seems to be getting worse? I’ve noticed this happening with other models as well, so I’m fairly certain this is a framework/tool-chain issue and not a model issue. What’s going on? I’m burning through $40 in tokens a day and barely getting any work done compared to a few months ago.

Can you share the request ID for this exact issue? Chat window → open the chat where this happened → three dots → copy request ID. You’ll also need privacy settings disabled.

Try not to use Gemini 3 Pro. In many cases, it gets stuck in a loop.
If this happens, create a bug report and include your AI details, request ID, or any relevant logs so the Cursor team can help debug it.


I see this in general with Gemini (on other platforms like Antigravity as well), so it’s not a Cursor issue so much as a Google issue.


Stay away from Gemini 3 Pro. It has high hallucination rates: AA-Omniscience: Knowledge and Hallucination Benchmark | Artificial Analysis

Give “GPT-5.2” a shot instead. Opus is my favorite, but GPT-5.2 is pretty good.


You’re using the wrong model. Use Opus-4.5; “cost” is not the same as “price”. Pay more, use the best, and it will cost less overall 🙂


Gemini overthinks and overcomplicates even a simple task. Not worth it.


Very good point, thank you.