Composer 1 model

What do you think of Cursor’s own Composer 1 model? How much did you use it, and what were the results? It’s not a thinking model, but it seems nice that it expects feedback from you based on your flow.

It’s fast, like Grok Code.

2 Likes

I’m just wondering about the price. Is it free, or do we have to pay? :smile:

2 Likes

It’s priced the same as GPT-5, and I hope it’s smarter than GPT-5.

Otherwise, if they cost the same, why wouldn’t I just use GPT-5?


8 Likes

I can’t say much about how well it performs yet, but to me it feels more like a business move to create a USP. The changelog says very little about its performance, and so far most AI coding environments rely on the same models, so this way Cursor has a chance to promote a model nobody else has.

That said, I am skeptical about its performance, but also curious to find out.

We do have a detailed blog post about Composer, including performance and speed.
The model is optimized for coding and tool usage. Here in the forum and on social media, we see a lot of users who are happy with its performance. In a previous version, the model was available as a preview under the name Cheetah.

11 Likes

It’s reasoning just like Cheetah; it’s just hard to see :wink:

Good first impression; it found stuff in my code.

1 Like

@tangjun It’s significantly faster than GPT and optimized specifically for coding and tool usage in Cursor. GPT-5 may be smarter in more complex use cases, but it’s nowhere near as fast, and many prefer not to wait.

@leoing It is a newer and improved version of Cheetah :slight_smile:

4 Likes

It’s a REALLY fast model. I’m mostly using it to implement plans, since it can read/write files at incredible speed.
It’s also good for reviewing whether a plan was coded correctly; I use it to “judge” plans before or after they are implemented as code.

Some other users have used it with the Browser feature, but I find it to be around the same speed as other models there, since the Browser itself sometimes forces waiting.

3 Likes

Great time to come back. I played with Cheetah, the best model yet…

Gonna give Composer a run now. Congratulations, guys!! :smiley:

1 Like

To be honest, I was hoping Cursor’s own model would be significantly cheaper — or even free for subscribers, like gpt-5-mini. Yes, it’s a very fast model, but I find it hard to trust it for real-world testing when, for the same price, I can use GPT-5, and rely on gpt-5-mini for simpler tasks.

7 Likes

Why would it be either of those? If it’s the beast of a model it sounds like, it’s going to cost a lot of $$ to run, so rightly they charge what’s accurate for it. With Cheetah I did an insane amount of work for the price, with accuracy too, so the pricing is a non-issue.

1 Like

And I’m not just blowing their trumpet; I don’t care who it’s from. If it’s good, I’ll pay what it’s worth. Simple as :smiley:

1 Like

I hoped so too, but I guess that’s the best way for them to monetize. They aren’t getting that much from API costs (especially considering that on the 20/60/200 plans you get way more API usage than you pay for), and other features like Tab/indexing are free. So self-hosting a model while selling its API is a nice revenue source for Cursor; I doubt they will make it cheaper. That also explains why they don’t want to add cheap OSS models, or ways for us to add them, since that would hurt sales of their own model.

1 Like

@theio Models must work well in Cursor, as users expect them to be able to use tools and write code. If some models have issues with those requirements, it makes them less suitable for agentic usage, no matter how cheap they are per token. We did have to remove some models for that reason, as there were too many complaints.

2 Likes

Honestly, I still don’t get what Composer’s even for in Cursor, considering its price tag.

I had it run a plan written by GPT-5 Auto and it finished, but burned through $1 for the job with like 2.5 million tokens—most of which were just cache, lol. That means I get, what, maybe 20 runs a month with this model (your own in-house model, in your own software)? That’s not even one request per day, which is just hilarious.
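The run counts above follow from simple back-of-envelope math. A minimal sketch, assuming the figures mentioned in this post (a $20/month plan, roughly $1 per plan-implementation run, and about 2.5M tokens per run, mostly cache reads); the function names are hypothetical, just for illustration:

```python
def runs_per_month(plan_usd: float, cost_per_run_usd: float) -> int:
    """How many runs of this size a monthly plan's allowance covers."""
    return int(plan_usd // cost_per_run_usd)

def blended_price_per_mtok(cost_per_run_usd: float, tokens_per_run: int) -> float:
    """Effective $ per million tokens for one run (cache and fresh tokens combined)."""
    return cost_per_run_usd / (tokens_per_run / 1_000_000)

# Assumed figures from the post above: $20 plan, ~$1/run, ~2.5M tokens/run.
print(runs_per_month(20, 1.0))                    # 20 runs a month
print(blended_price_per_mtok(1.0, 2_500_000))     # 0.4, i.e. ~$0.40 blended per 1M tokens
```

Note the blended rate looks cheap only because cached tokens dominate the count; the per-run dollar cost is what limits you to roughly 20 runs on a $20 plan.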

There were a bunch of tool uses, but the model still didn’t fully nail the task or stick to the codebase patterns. So right now, I honestly don’t see the point, especially when the Auto frontier models give better quality anyway.

Is it just for speed? But what’s the point if the quality’s meh and the price is basically the same as GPT-5?

3 Likes

I mean, I understand that point for models you add to the selector; you want them to work well (but then why did you add the old Kimi K2 and never fix the bug with new lines for each token?). But at the same time, restricting users from using models at their own risk doesn’t fall under that argument; if a user wants to add some experimental model, why not let them? Especially now that you support up to 4x simultaneous work by different models on the same prompt, it makes even more sense (for users) to combine expensive and cheap (but understandably worse) models for that feature. I don’t think many people are ready to run a prompt on more than two $10/M+ models.

Instead, we get a semi-broken OpenAI override (which doesn’t allow using GPT and custom models at the same time, and is now even more broken with the recent bug where the override is applied to Gemini calls). I’m not really criticizing you; I understand the business perspective here, and this seems like a good choice for the product you run. You sell (I hope) a profitable API, you get all the data to improve it since you’re in full control, and it’s an exclusive for your product that will bring you more users. But as a user who builds agents and likes testing models in real-world applications, I’m just upset that I’m not allowed to use models at my own risk in Cursor, that’s all.

1 Like

Privacy settings apply to all models. We only use data for training when users explicitly allow it, never when privacy mode is enabled.

The API key / OpenAI URL override has a complexity that makes usage of ‘custom’ models non-trivial, especially if you wish to switch between them. Untested or unoptimized models result in more bug reports and issues to handle, which has a larger impact at scale while contributing less to usability.

Parallel AI model usage is optional, but we do see that users like the ability to test different approaches and even run different models, so they can choose the best result.

The Composer model does still need powerful hardware to run, which has a cost. From the testing phase, and even now that it’s live, many users are very happy with its performance, and not just its speed. As with any model, output depends on a lot of things; therefore, different users prefer different models and have less success with others.

1 Like

I remember seeing this:

Having no idea what it meant, but I did think it would hopefully lead to your own agent model :smiley:
And ThunderKittens


haha

2 Likes

Composer 1 is Cheetah :slightly_smiling_face:

Cheetah is now gone in the model list.

6 Likes

I’m still hesitant to use Composer 1. The price feels high to me, especially coming from a developing country. So I stick with GPT-5-Mini, Haiku 4.5, or Grok Fast 1. I hope Cursor releases a model like Grok Fast 1.

4 Likes