Please add official support for xAI’s latest flagship models — Grok 4.1 and its faster variant Grok 4.1 Fast — as first-class model options in Cursor.
+1, what is involved for this on the Cursor side? Curious what goes into supporting new models from families already on the platform.
I hope they're doing this.
Is there a way to add it via a Custom Model?
Temporary workaround until Cursor adds it.
Grok 4.1 Fast is available for free exclusively on OpenRouter until December 3, 2025.
Steps:
- Create an OpenRouter account
- Generate an API key
- Add it in the Cursor API settings
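Before pointing Cursor at the key, it can help to sanity-check it against OpenRouter's OpenAI-compatible chat endpoint. A minimal sketch — the model slug below is an assumption (the free tier may use a `:free` suffix; confirm it on OpenRouter's model list):

```python
import json
import urllib.request

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
# The model slug is an assumption -- confirm it on openrouter.ai/models.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "x-ai/grok-4.1-fast"  # hypothetical slug; free tier may be "...:free"

def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    req = build_request("sk-or-...", "Say hello in one word.")
    # Uncomment to actually send the request with your real key:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```

If this returns a normal completion, the same key should work in Cursor's API settings.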
+1 needed this on cursor
Crazy they haven't added this yet. I've never seen such a delay between API release and availability in Cursor.
They train on your prompts and data in free versions. Use with caution on proprietary and sensitive codebases.
It seems to me at this point it must be a business decision to not add the model. Perhaps xAI is not extending their zero data retention agreement for Grok 4.1 Fast or something. Maybe with the release of Composer, xAI views Cursor as a competitor and is being uncooperative.
But it doesn’t make any sense, because the previous Grok 4 is already there. The price and everything else are the same, so it doesn’t even cost more.
Grok 4.1 is an advance over Grok 4, so protection is worth more for that model. And yanking Grok 4 would cause more PR trouble than silently not providing Grok 4.1.
I have created a Manifold Markets question for this feature here: Will we ever get Grok 4.1 Fast support in Cursor? | Manifold
Please add 4.1 to cursor.
I’m going to cancel my subscription if Grok 4.1 doesn't get added soon, and just use OpenRouter instead. It's cheaper and works the same.
Yes, please add it. Surprised Grok 4.1 is not here yet???
Would love any type of communication around this. Is it planned? The why behind not adding it? What can we expect as users of Cursor in this respect?
Canceling by the 13th if 4.1 isn't added. My guess: on LMArena, Grok 4.1 ranks better than
Claude Opus 4.5 Thinking (sum of input + output + cached price: $22)
but worse than
Gemini 3 Pro (sum of input + output + cached price: $16),
while
Grok 4.1 Thinking (sum of input + output + cached price: $1.45) is far cheaper than both.
So second-place performance at such a low price would mean less usage of the pricier models, and less money for them.
Hey all.
Thanks for the feedback.
For the moment, we don’t have plans to add Grok 4.1, though you might find Grok Code Fast 1 works well for your needs—it’s xAI’s model explicitly built for coding tasks!
Grok Code, as of today, is also free for users of Cursor.
Colin
Hi Colin, thank you. I have cancelled my subscription.
For those interested, please see the screenshot on how to set up Grok 4.1 using your own Grok API keys.
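For anyone who can't load the screenshot: since xAI's API is OpenAI-compatible, one way to find the exact model id to type into Cursor's custom-model field is to list the models your key can see. A sketch, assuming the standard `https://api.x.ai/v1` base URL (check docs.x.ai):

```python
import json
import urllib.request

# xAI's API is OpenAI-compatible; listing models is a cheap way to find the
# exact model id to enter in Cursor. Endpoint is an assumption -- see docs.x.ai.
XAI_MODELS_URL = "https://api.x.ai/v1/models"

def build_models_request(api_key: str) -> urllib.request.Request:
    """Build (but do not send) a GET /v1/models request."""
    return urllib.request.Request(
        XAI_MODELS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )

if __name__ == "__main__":
    req = build_models_request("xai-...")
    # Uncomment to actually send it with your real key:
    # with urllib.request.urlopen(req) as resp:
    #     for m in json.load(resp)["data"]:
    #         print(m["id"])
```

Whatever id the endpoint reports is what goes in the model-name field, alongside your key and the overridden base URL.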
I would love to hear a follow-up on whether 4.1 seems better than grok-code for your use cases after you have used it for a bit!
Worse performance, a much smaller context window, and 3x the cost.
Also, from each model card:
| Benchmark or metric | Grok 4.1 Thinking (reasoning) | Grok Code Fast 1 |
|---|---|---|
| CyBench unguided success rate | 39% | 22.5% |
| WMDP Bio accuracy | 87% | 72.0% |
| WMDP Chem accuracy | 84% | 52.7% |
| WMDP Cyber accuracy | 84% | 62.1% |
| VCT (Virology Capabilities Test) | 61% | 28.7% |
| BioLP-Bench | 37% | 19.9% |
| MASK dishonesty rate (lower is better) | 49% | 71.9% |
On the model-card benchmarks, Grok 4.1 Thinking scores about 65-67% higher than Grok Code Fast 1, and is 67% cheaper per output token. So it's a lose-lose.
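For what it's worth, the ~65% figure checks out against the table above if you average the relative gains across the capability rows (MASK is left out because a lower dishonesty rate is better there):

```python
# Rough check of the "~65% higher" claim from the benchmark table above.
# Pairs are (Grok 4.1 Thinking, Grok Code Fast 1); MASK is excluded because
# lower is better for a dishonesty rate.
scores = {
    "CyBench":     (39.0, 22.5),
    "WMDP Bio":    (87.0, 72.0),
    "WMDP Chem":   (84.0, 52.7),
    "WMDP Cyber":  (84.0, 62.1),
    "VCT":         (61.0, 28.7),
    "BioLP-Bench": (37.0, 19.9),
}

def mean_relative_gain(pairs):
    """Average of (grok41 / grok_code - 1) across benchmarks, in percent."""
    gains = [(a / b - 1.0) * 100.0 for a, b in pairs.values()]
    return sum(gains) / len(gains)

print(f"{mean_relative_gain(scores):.0f}%")  # prints 65%
```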