If only we could have a 15-day trial as well.
In my experience (heavy, daily, for months) Cheetah has been the best model I’ve worked with. Amazing work, well done Cursor!
I have to ask though, does it sometimes fall back to other models? I got some very Claude-like responses from it a couple of days ago.
Why does it want to fire off online requests all the time?!
I haven’t tried it, but I’m pretty surprised by the pricing.
I assumed the point of an in-house model would be cheaper inference, to provide more value/usage on subscriptions after the change from 500 requests to $20 worth of API usage.
Pricing it the same as GPT-5/Codex and Gemini 2.5 Pro, and 6-20x more than Grok-4-Fast, seems a bit odd, as its main selling point of speed just means blowing through limits much faster.
Competing against OAI, Google, xAI, and Anthropic is basically David vs. many Goliaths so I wish them luck!
Is there a free trial period?
To my mind, there are plenty of options in the GPT-5 tier of models, but I stick to GPT-5 mini, as it’s close to zero cost and lets me use it calmly, without constantly watching usage limits. I wish there were more competitors.
GPT-5, Haiku 4.5, and Composer must be about the same - they all melt through usage, don’t they?
They removed cursor-small at the same time they introduced composer-1.
Absolutely two different models. Why remove cursor-small?
Cursor-small was good for questions and file scanning, cheap, and pretty fast.
I guess they are trying to show that composer-1 is a new cursor-small. But of course it’s not.
Composer-1 is much pricier and more of a coding model - a coding model that nobody knows or trusts yet.
They should bring the cursor-small back.
Exactly, this was my initial impression. I tested both (GPT-5 and Composer) and they produced comparable results on a simple code review request. They cost about the same, but Composer 1 did it like 10x faster than GPT-5. Not sure how it will hold up with more complex requests. If it is as powerful as GPT-5 but as fast as Grok-Code, then maybe it is a good alternative. But if it lacks power, then I will probably use Grok-Code instead when I need simple changes done quickly, at like 1/10th the cost.
You can probably still add it manually.
I have similar observations to others commenting in this thread: if the cost is the same as for GPT-5, the only advantage of this model is its speed. After a quick test of this model, I can confirm that it is quite good and very fast.
However, I also noticed that for a relatively simple task (creating a simple plan and then implementing it; the task was to add one graphic element to a WordPress template), it used up almost half of the available context. If this trend continues, meaning it consumes the available context like this on other tasks too, it will turn out to be even more expensive than GPT-5 in the long run. Is it worth it? I don’t know, but probably not.
[EDIT]
I would just like to clarify that these are my preliminary observations; I have not conducted extensive testing, so I may be mistaken in my assessment of this model.
I just created a forum account to ask: I had to go from Auto (using Composer 1) back to Sonnet 4.5 because Composer was unable to edit files. It seemed like the model knew what to do, but it had no file access in agent mode, which is weird.
Anyone else having the same issue?
I agree that it’s much too expensive. 0.33 feels about right for the quality you get (it will take a week to really know) and the number of prompts you have to fire.
Stupid models have very limited use for me.
this!
The model is good, but I’ve just burned all my remaining calls for this month. I thought that since it’s Cursor’s proprietary model it would be like Auto… well, it wasn’t.
My bad, I didn’t check first.
I have been using Cursor on Auto for the last three weeks, and this week I noticed a significant decrease in the quality of the results. Tasks very similar or almost identical to the ones I was completing last week with good results now take many prompts and lots of reasoning, consume way more context than last week, and the results are sub-standard.
Like a super junior dev trying to code. To the point that I decided to do it myself.
I will try to compare today model to model, but I’m less than happy with the changes.
Same speed, same intelligence, same context window.
So yeah, it’s Cheetah ^^
Everyone seems to be missing another advantage, which is that Composer 1 can potentially integrate much better with the Cursor IDE. Better tool calling, just smoother operation in general.
I’ve been using Composer 1 for the last day or so. So far, it seems fairly good. It is very fast, which is the primary draw, but it also seems to do a pretty good job.
It does, however, exhibit what I’ll call the fundamental FLAW of most models (Claude Sonnet 4.5 excluded, as it seems to be the only model that doesn’t exhibit this issue):
It just DOES. Its goal is to DO, DO, DO, DO, DO. And in DOING, it will plow forward and achieve whatever goal it deems it needs to achieve, no matter how much devastation it has to wreak in order to get there. This is, IMO, the greatest flaw of all models today: they just DO.
There is a key difference that I have noticed with Sonnet 4.5 that I really wish Composer 1 would adopt: Sonnet 4.5 is able to determine when it needs more information, and will ask the user for input, often giving explicit options, recommendations, etc., then STOP, allowing the user to intervene and pick an option, or analyze and request implementation of a given recommendation, etc.
I used Cursor’s built-in “Fix with AI” feature to instruct the agent to fix a syntax issue with the code. It turned out the issue was just that the TS service needed to be restarted, and I realized that partway through Composer 1’s working out of the problem. Before I could stop it, however, the model had generated a new type, PrismaClientWithVLAnalysis, that extended the normal Prisma client with an additional model and methods, and then replaced all the code usages of the normal Prisma client with its new type.
Then it was done, before I could even stop it. I told it the TS server just needed to be restarted and that it should undo its changes… which it seemed to try to do, but the code didn’t actually change much. So I’m now left with kind of a mess that I have to clean up manually, because the model just churned, and iterated, and DID, without considering the implications of what it was doing, or considering that it needed more information. The model even noted that the TS server was probably “caching”, and yet it still plowed forward and mucked up the code.
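To give a rough idea, it ended up with something like the sketch below. This is just an illustration from memory, not the actual generated code - the shape of the vlAnalysis delegate and its method names here are assumptions:

```ts
// Rough sketch only - the real generated code differed; the "vlAnalysis"
// delegate and its methods are made up for illustration.
import { PrismaClient } from "@prisma/client";

// The model bolted a fake "vlAnalysis" model onto the standard client via
// an intersection type, instead of realizing the TS server just needed a
// restart.
type PrismaClientWithVLAnalysis = PrismaClient & {
  vlAnalysis: {
    findMany(args?: Record<string, unknown>): Promise<unknown[]>;
    create(args: { data: Record<string, unknown> }): Promise<unknown>;
  };
};

// It then swapped this in everywhere the plain PrismaClient had been used.
export const prisma = new PrismaClient() as PrismaClientWithVLAnalysis;
```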
This is not unique to Composer 1: Grok, Cheetah (which maybe was an early test of Composer 1?), even GPT, and heck, even Sonnet 4 do it.
Sonnet 4.5 is the only model I have seen that actually takes a “conversational” approach, where it explicitly engages in back-and-forth communication with the developer when it determines it has too many potential paths or questions, and decides to ask for explicit intervention and clarification. This is the main reason I’ve switched back to Sonnet 4.5 for the majority of my work. It does not just DO; it doesn’t just plow forward with the intent of achieving the goal SOME WAY, even if the way it achieves it is terrible and destructive. I really like that, and it gives me many opportunities, before things get screwed up, to make sure the quality and correctness of my code is as high as it can be.
I would really love to see this kind of back-and-forth, conversational approach integrated into Composer 1. Right now, I’m manually going through and trying to clean up the mess it just made.
I do wonder, in part, if the fact that Composer 1 is not a thinking model might be partly why it behaves the way it does. At least, I don’t see a brain icon next to it, and it doesn’t seem to involve reasoning cycles. Non-thinking Sonnet 4.5 is not quite as good at stopping and asking for more information as the thinking version.
In any case, this is so far the only real issue I’ve found with Composer 1. It does what most models do: it tries to achieve its goals NO MATTER WHAT, which is often the best way to create more work and waste more tokens. It would be really great if the model were smart enough to know when it has too many options to choose from, can’t determine the best one, and should ask the user for help.
I have exactly the same problem! You’re not going mad. I wish they’d be more transparent about what models / logic was going on over the past few weeks when things were INSANELY good.
I may be misunderstanding you, but when you choose Auto as a model, there is no way to know for sure which model it is routing your requests through. It may have been Composer 1, or maybe not. Regardless, Auto not being able to edit files is definitely a bug. It’s probably worth submitting a bug report so they can address it, especially if you can replicate it reliably.
