To my fellow Ultra users:
If Cursor gave you the chance to choose one model to be truly free and unlimited (I mean MAX mode, no limits whatsoever), which one would you pick?
I’d go with GPT-5.
What’s your choice?
And if they charged a higher price for it?
How much would you be willing to pay extra for something like that?
Opus 4.1, no contest. Opus 4 was the reason I paid for Ultra in the first place, only to find out, to my disappointment, that the usage only lasted a week or so. I understand, though, and am still grateful I got to use it without cost beyond the Ultra subscription price, but there’s not much holding me back now from using Claude Code with a Max subscription instead, where I can actually use Opus 4.1 without noticing any usage limits.
What makes Opus 4 so great is its general reliability and its detailed consideration in following your codebase patterns. It’s a very conscientious model, and that’s something I greatly value in a coding tool.
I have not found Sonnet to be better. I am not impressed yet with GPT-5 in comparison, but I hear the fast version (gpt-5-fast) makes a difference, so I’m reserving judgment until I’ve tried that one too.
The truth is, there’s something about Anthropic’s models that makes them run more smoothly, but there’s also something in their financial approach that lowers the chance they’d agree to offer such an Ultra plan.
Right now, I think OpenAI and X.AI are the only ones likely to agree.
Google might agree to offer a large plan, say $1,000 worth of usage for $200, but I don’t believe they’d agree to unlimited.
OpenAI is the most likely to agree to unlimited.
Sorry to say, but it seems you were misled.
gpt-5-fast is exactly the same model; you just get the answer faster, that’s all.
It says it’s the same model, but you know how it is with these LLMs. They sometimes perform differently when they go through different inference paths, even when that doesn’t causally seem to make sense.
My answer: Sonnet 4, because it has unblocked me, guided me through the most difficult situations in programming, and helped me earn a hell of a lot of money.
What’s with the GPT-5 craze? It’s not even a good increment on their previous models. Make GPT-5 and o3 play chess against each other, and o3 wins. OpenAI couldn’t justify the need for this new model; all they sold was that it has multiple “PhDs” in it, instead of telling us how many parameters it has, whether it’s better and faster at compute, or whether it’s more accurate. The benchmarks were made up (the model is just out; how can you depend on a benchmark when hardly anyone has used it?), and multiple YouTubers were paid to promote it. ■■■■, even Cursor was paid by OpenAI to promote it.
Many people have earned money by vibe coding and fixing bugs in production apps with Claude models, but not with GPT models. Anthropic’s models are more applicable to real-world use cases.
Saying “create a todo app” to GPT-5 and making it your favorite model when it only came out last week smells like you’re paid to promote GPT-5, or trying to. How did the model help you make money? How did it solve problems that other models couldn’t? With the limited access to GPT-5, I doubt you’ve even gotten 300 prompts done by now, given the insane rate limits and the slowness of GPT-5’s processing.
The path a model takes from its parameters to your codebase/chat interface seems to make a difference in how it performs. Same model, same parameters, yet different results, because they take different paths, even when the difference in path doesn’t seem to have anything to do with output quality. That’s why the gpt-5-fast option may produce different kinds of output even though it originates from the same parameters as gpt-5.
My feeling, and I don’t know if I’m right, is that every model is sometimes better or worse depending on the goal, the development environment, the language, and the edge case.
What I’ve seen is that Anthropic simply does a better job for people who aren’t programmers: the models don’t just do what you asked; they also take care of all the edge cases and every possible scenario. That’s good for non-developers, and good for the company, since they get paid per token.
GPT-5 is an amazing model that solved many things for me that even Opus couldn’t, but it needs very clear instructions, and it handles only exactly what you asked for. I think that’s the reason people prefer Anthropic, while I, someone who wants only the specific things I asked for and not fifty extras I didn’t ask for, am happy with GPT-5.
I can’t say for sure, since I don’t have the data to confirm it. What I can say is that the same models in Cursor have been performing really well for me, while with some competitors… I haven’t been able to get the same results.
Honestly, GPT-4.1 is my daily driver and has been for a long time now. It’s just solid and reliable, and I especially like that it never makes changes without explicit direction to do so.
I think some of this is due to the codebase I work on being an ancient, monolithic Rails app. I have extensive rules set up to define things such as the framework version, architecture, business logic, etc. GPT-4.1 just consistently understands and stays within those rules.
I occasionally try other thinking models when trying to crack particularly difficult problems, and they do seem better at brainstorming and “thinking outside the box” than 4.1 is, but they’re too eager to start making changes, and I’ve found the code they write hit-or-miss.
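For anyone curious, a rules setup along those lines might look roughly like this. This is a hypothetical sketch, not my actual rules: the file name, version numbers, and paths are made up for illustration, and you should check Cursor’s current docs for the exact rules-file format.

```
---
description: Conventions for this legacy Rails monolith (hypothetical example)
alwaysApply: true
---

- This is a legacy Rails 4.2 monolith; do not suggest syntax, gems, or APIs
  that require a newer Rails or Ruby version.
- Follow the existing fat-model / thin-controller architecture; do not
  introduce new patterns (service objects, concerns) unless explicitly asked.
- Never change billing or invoicing business logic without explicit instruction.
- Do not make any code changes unless explicitly directed to; propose a plan
  first and wait for approval.
```

The last line is what produces the behavior I described: the model explains and plans by default, and only edits when told to.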
Exactly, I share the same opinion (and it’s also the most realistic one), but you know it’s the same model; you just get computing priority for a faster response.