This feedback reflects my personal perspective — of course, everyone is entitled to see things differently.
What’s the name of the model?
I’ve seen this question come up repeatedly in posts and threads: “Tell us, what is this model?”
So here’s my opinion:
What is this model, and why isn’t it being revealed?
This model appears to be specifically fine-tuned for use with Cursor and its tools. I’ve tested it multiple times, and it’s clearly improving. Its behavior isn’t identical from one test to the next, which suggests it’s being continuously trained behind the scenes.
Why doesn’t Cursor reveal this?
It seems pretty clear:
Because they want to avoid getting into trouble with “Cough Inc.” — a company that’s constantly under pressure from competitors trying to clone it. And despite that, their models are still the most widely used within Cursor. No one wants to clash with their primary provider.
Why do I believe it’s not just some other model?
Nothing is 100% certain, but this model is surprisingly good when used within Cursor’s ecosystem. It’s faster than any other model I’ve tried.
If you want to test it yourself — give different models a task that requires real tool usage and code generation, and see if any of them perform as well as AUTO.
Again, this is just my opinion — and I could definitely be wrong.
Is it worth the money?
I’m currently paying $200 [oops, now it’s $300], so I’m not sure — but for someone on the $20 plan? Absolutely.
The model can now handle TODO lists, run for long durations, and tackle complex tasks. It reads rule files and handles many other things it struggled with at first; it now seems well-trained for them.
Weaknesses:
- It’s still not a “thinking model,” and there are research-level tasks it can’t handle.
Biggest UX Weakness:
The biggest usability issue for me is the current on/off toggle for AUTO mode. Yes, a toggle exists, but switching it takes time and interrupts the workflow.
What I really want is for AUTO to be selectable like any other model — directly from the model list.
Even better? A keyboard shortcut or something equally quick and smooth.
Honestly, I’d be more than happy to contribute the actual code for this.
I’ve got an AI-powered coding assistant that’s excellent at handling these kinds of UI tweaks —
I could even have AUTO mode itself write the patch for free😁.
Other Wishes:
A model with a larger context window would be incredible.
AUTO mode with extended context support is easily my top feature request.
Even just 1 million tokens would be… nice.
My Conclusion:
AUTO mode appears to use the same underlying model every time, and it runs extremely fast.
That alone is a huge reason I use it so much.
It’s a very valuable feature — especially when you get unlimited access to it.
This is something people often overlook when discussing the pricing changes:
Yes, the pricing changed — but now you get a high-quality model with no limits.
Before, you had unlimited requests, but they were slow — this is much faster.
If my assumption is correct, then the model will just keep getting better over time.
Thanks to the team behind this — you’re doing great work. Looking forward to future improvements.
And apologies in advance if I’ve written anything incorrect or mistaken.