Share your experience with Composer 1.5!

Main announcement · Blog


Now that Composer 1.5 is available, we’d love to hear how it’s working for you.

Some things we’re curious about:

  • What types of tasks are you using it for?
  • How does the adaptive thinking feel in practice? Do you notice it being faster on simpler tasks while taking more time on complex ones?
  • Have you hit any situations where self-summarization kicked in? If so, how did it handle longer context scenarios?
  • How does it compare to Composer 1 for your workflows? Any specific improvements or differences you’ve noticed?

We’d love to hear your feedback. What’s working well? What could be better?

Composer 1.5 is quite expensive and appears to be used by default by the Explore agent. How can we force it to use Composer 1, which is sufficient for the Explore agent?
It feels like a dark pattern…

EDIT: I added a critical rule to force the Explore agent to use Composer 1 instead of Composer 1.5, and it seems to work. But I still believe Composer 1 should be the default for the Explore sub-agent.
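
For anyone who wants to try the same workaround, here is roughly what I mean, saved as an always-applied project rule (mine lives under .cursor/rules/; the file name and exact wording below are just illustrative, and I can't promise a rule can truly force the sub-agent's model choice, it only seems to hold for me):

```
---
description: Keep the Explore sub-agent on Composer 1
alwaysApply: true
---

CRITICAL: When running the Explore sub-agent or any read-only codebase
exploration, use Composer 1. Do NOT use or escalate to Composer 1.5 for
exploration tasks.
```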

3 Likes

I think for me, with that pricing it needs to outperform Sonnet 4.5, or at least be very close. How does it fare on benchmarks compared to Sonnet 4.5?

3 Likes

1.5 is even more expensive than Codex-5.3 :thinking:

3 Likes

Very exciting to hear there’s a new Composer model; an update to Composer has been on my mind. Love the speed and the performance-to-cost efficiency of Composer 1.

However, the pricing was a shock: >2x Composer 1 pricing, and priced even higher than Sonnet 4.5!? That entirely changes the value proposition compared to Composer 1. I know this is obvious, but it basically halves our mileage.

As someone who exceeds the Ultra plan most months using almost exclusively Composer 1 (which I know is crazy generous because of the 2x limit), and who is happy with Composer 1 90% of the time, I’m not sure the move to 1.5 will make sense. The 10% I use Sonnet / Opus for is always front-end styling fixes / tweaks, so I will test it on those and report back.

Edit: I remembered a small front-end issue involving the styling of a 3rd-party component that I wasn’t able to get fixed with Composer 1. I ran Composer 1 vs. Composer 1.5 vs. Sonnet 4.5 Thinking to see which could fix it.

  • :cross_mark: Composer 1 did not fix the issue (even when given guidance describing 1.5’s fix)
  • :white_check_mark: Composer 1.5 fixed it in two prompts.
  • :cross_mark: Sonnet 4.5 did not fix the issue.

Unfortunately, I’m not aware of an easy way to assess the cost of all three runs. I could have sworn we used to be able to see this. I am sure that with better prompting or more persistence, all three models might have been able to fix the issue, but it was a pretty good first test for Composer 1.5.

Why not share any benchmarks or tests?

2 Likes

I’m trying to understand the pricing and value proposition of Composer 1.5, and honestly I’m confused why I’m expected to pay for it in its current form.

Right now, Composer 1.5 is more expensive than Sonnet 4.5, and significantly more expensive than Codex 5.3. Given that pricing, I’d expect either clearly better performance or at least strong, transparent evidence that it competes at the top tier.

But that’s the problem: I don’t see standard, widely accepted benchmarks (or any comparable evaluation) that would let users compare Composer 1.5 against existing models without having to spend time and money running their own tests. And the thing is, many of us already have access to well-established, well-documented models from Claude and GPT that have a track record and plenty of public results.

So from a practical perspective, why should someone switch (or pay more) for Composer 1.5 when:

  • it costs more than familiar alternatives,
  • there aren’t clear benchmark results to justify the premium,
  • and the “default” options (Claude / GPT) are already proven and widely trusted?

If there are internal benchmarks, third-party evaluations, or clear use cases where Composer 1.5 is consistently better (coding, long-context reasoning, tool use, latency, reliability, etc.), I’d genuinely like to see them – because right now it feels like I’m being asked to pay a premium without the usual transparency.

Would appreciate any official clarification or real-world comparisons from other users.

3 Likes

I’ll at least throw this one a bone: on a couple of small tasks I’ve seen Composer 1.5 complete the task in fewer tokens than Codex 5.3, so ultimately at a lower price, but that’s just spot checking.

Silly question, but how are you able to see how many tokens it consumed? Via https://cursor.com/dashboard?tab=usage?

I think that’s a fair concern. A free period, or even one discounted to Composer 1 pricing, to assess the new model on our own would have made a lot of sense.

1 Like

Do you all plan to share any more detailed benchmarks for Composer 1.5 vs. other frontier models? I see there is an image with the Cursor Bench score - where do the other models currently rank on Cursor Bench?

Is there a model card publicly available? I feel these are standard for new model launches.