Improve time-to-adoption for new models

I have been very pleased with Cursor since shortly after launch and have been a Pro subscriber all the while. Recently, however, I feel the team has been dropping the ball on supporting newer models and inference engines. I know I could configure something like OpenRouter to reach most models as I please, but the allure of Cursor is that it “just works”. GPT-4o and Sonnet are good, but I’ve been dying to try DeepSeek Coder V2, Llama 3.1, etc. Not to mention, now that I’m using Chat and Composer ever more frequently, I’m finding that latency gets in my way more than it used to, and having something like Groq’s inference speed baked into Cursor would be a lifesaver.
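
Just to illustrate what I mean by “going through OpenRouter”: it’s only a few lines against their OpenAI-compatible endpoint, but I’d much rather have it built in. (The API key and model slug below are placeholders; the exact slug is on OpenRouter’s model list.)

```python
# Rough sketch of trying a newer model via OpenRouter's OpenAI-compatible API.
# Key and model slug are placeholders -- not a recommendation of a specific model.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # placeholder
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-70b-instruct",  # example slug; check OpenRouter's list
    messages=[
        {"role": "user", "content": "Refactor this loop into a list comprehension: ..."},
    ],
)
print(response.choices[0].message.content)
```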

Given how slowly model improvements like these have found their way into Cursor, I’ve been increasingly tempted to cancel my subscription in favor of something like Codeium, Sourcegraph Cody, or even Double.bot. These other tools all seem to be adopting new advances more rapidly.

I’d love to continue supporting Cursor, because again, I am quite pleased with it as a one-stop solution for AI coding, but it’ll be tough to justify the recurring $20/month much longer when competitors seem to do everything I need, faster, and for a potentially lower subscription fee (as low as $9, it looks like).

All that said, I once again want to express my appreciation for Cursor, and I wish the team the best in the continued improvement of the product – I won’t abandon ship quite yet.

Update: I’ve been trying Double.bot for a few hours now; it’s definitely not the Cursor killer.

1 Like

Tried all my old tools yesterday – Codeium still has no Sonnet 3.5 (it used to be competitive in the GPT-4 days).

1 Like

To test other models, I use continue.dev.

2 Likes