Thank you @danperks! What I mean by how it breaks is that it usually can't get past 2-3 queries: it does all the thinking/tool-calling UI, but then it just stops altogether. No output appears in the code, all the UI under the chat messages goes away, and the input returns to normal as if it had finished and I should make my next query.
On top of that, none of my queries show up in the usage summary, and the All Raw Events table always shows Grok as using 0 tokens even though it actually does work. I'll show this in the images below. It's seemingly a free model! lol (if only it worked more)
It looks like with the recent Cursor update, usage for Grok 3 Mini actually shows up in the usage summary, and it's no longer just 0 tokens all the time in the All Raw Events tab. However, it is still erroring out more than half of the time, and it still has no thinking icon.
Any luck? I would love to use this model more so I don't burn through my usage. So far I'm on track to run out 10 days early (my fault for using Claude models; way too expensive for lower performance anyway). I wouldn't even come close to the limit using Grok 3 Mini.
After many updates to Cursor over the past few days, it is still not working (taking no action and ending abruptly). Thinking is clearly part of the process, but there's still no thinking icon for the model.
I think that with Grok 4 released, the development team has focused their time there.
Unfortunately, I would guess that usage/demand for Grok 3 Mini is likely too low for this to be a high-priority fix. If you can reproduce the abrupt-ending bug (preferably with Privacy Mode disabled), grab the request ID, and I can pass it to the team; if it's an easy fix, we may be able to get it over the line.