Please fix Grok 3 Mini! It is one of the best models if not THE best!

Describe the Bug

  • Grok 3 mini breaks all the time
  • It does not have a thinking icon next to it despite being a reasoning model

This is a shame given how good yet cheap this model is. With those two considerations in mind, it might be the best model out right now despite how long ago it was released: Comparison of AI Models across Intelligence, Performance, Price | Artificial Analysis

Steps to Reproduce

Use Grok 3 Mini; it breaks all the time, and there is no brain/thinking icon next to it.

Expected Behavior

Grok 3 Mini works and the reasoning icon is shown.

Operating System

MacOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.2.4
VSCode Version: 1.99.3
Commit: a8e95743c5268be73767c46944a71f4465d05c90
Date: 2025-07-10T16:53:59.659Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.5.0

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor



Hey, I’m looking into the thinking icon being missing now.

Can you share more on how it “breaks”? I don’t often use the model, so some info on this would be helpful!


would you mind providing the source to these images? I would love to read more about it.


The link is in the post! :grin:


thanks… only looked at the images… :man_facepalming:

Thank you @danperks! What I mean by how it breaks is that it usually can’t get past 2–3 queries: it does all the thinking/tool-calling UI, but then it just stops altogether. There is no output in the code, all the UI under the chat messages goes away, and the input returns to normal as if it had finished and I should make my next query.

On top of that, none of my queries show up in the usage summary, and the All Raw Events table always shows Grok as using 0 tokens even when it actually does work. I’ll show this in the images below. It’s seemingly a free model! lol (if only it worked more :sweat_smile:)


So the frequent breaking, improper usage recording, and missing thinking icon give me the impression that something is bugged with this model.

Here is a good example of what it looks like: Bad update this morning

Though unlike that topic, this is not new as of this morning; it has been like this for Grok 3 Mini for a long time.

Appreciate the detail here! I’m going to give Grok 3 Mini a stress test and see if we can figure this one out :folded_hands:


Thank you! Please keep me updated if you can!

It looks like with the recent Cursor update, the usage for Grok 3 Mini actually shows up in the usage summary, and it’s no longer just 0 tokens all the time in the All Raw Events tab. However, it is still erroring out more than half the time and still has no thinking icon.

Any luck? I would love to use this model more so I don’t burn through my usage. At this rate I am going to run out 10 days early (my fault for using Claude models, which are way too expensive for lower performance anyway). I wouldn’t even get close if I were using Grok 3 Mini.

After many updates to Cursor over the past few days, it is still not working (not taking any action and abruptly ending). Clearly thinking is part of the process, but there is still no thinking icon for the model.

Thinking ↓

Screenshot 2025-08-04 at 1.09.11 PM

Abruptly ending ↓

Was grok 3 mini discussed at all during the Cursor/xAI collab?

I think with Grok 4 released, the development team has focused its time there.

Unfortunately, I would guess that the usage/demand for Grok 3 Mini is likely too low for this to be a high-priority fix. But if you can reproduce the abrupt-ending bug (preferably with Privacy Mode disabled) and grab the request ID, I can pass it to the team; if it’s an easy fix, we may be able to get that over the line.

Thanks! Sure thing!

Here they are:

  1. 97d4ba74-58c7-4920-bae7-ad2486663d2d (worked, although it was weird with little feedback while it was running)

  2. (Failed) I was doing this as I typed, and I can see that it does not let me get the ID of the failed request

  3. 24324299-2647-4509-aba0-fe185594daaa (worked, not much feedback again)

  4. baa4a899-a83a-483b-ab4d-013a02151c42 (same)

  5. a9e2992f-05e8-44a2-9eeb-d70c68455eff (worked in that it gave feedback, but then it failed the tool call twice and abruptly ended)

All of these were done with privacy mode disabled. Hope this helps! Thanks!

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.