DeepSeek just released a new checkpoint that surpasses Gemini 2.5 Pro on every benchmark, at a fraction of the price.
yesyesyes we want plspls
When could we expect this new version to be available on Cursor?
I don't think any thinking model is suitable for programming: it wastes tokens and takes more time.
Agreed. CoT really is necessary in some cases when done to the proper extent (Gemini 2.5 Pro Research is a great example of verbosity running riot; it just doesn't fit some mediums), but it's a fickle balance: the model can mis-infer in the middle of the CoT and completely explode your token usage, and the generation window is so fast that it's very tough to hit the stop button in time.
all we need is voting
I would also like to know whether R1 will eventually be able to call tools. According to the official documentation, does DeepSeek already allow providing tools to the model?
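For reference, DeepSeek's API follows the OpenAI chat-completions format, where tools are supplied as a list of JSON-schema function definitions. A minimal sketch of what such a request body looks like (the tool name `read_file` is a made-up example, and the model name `deepseek-reasoner` plus whether the reasoner model accepts tools at all are assumptions to verify against the official docs):

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",  # hypothetical tool, for illustration only
        "description": "Read a file from the workspace.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

# Request body for POST https://api.deepseek.com/chat/completions.
# Model name is an assumption; check DeepSeek's docs for whether the
# reasoning model supports the "tools" field.
payload = {
    "model": "deepseek-reasoner",
    "messages": [{"role": "user", "content": "Open README.md"}],
    "tools": tools,
}

print(json.dumps(payload, indent=2))
```

If the model decides to use a tool, the response carries a `tool_calls` entry instead of plain text, which an agent loop then executes and feeds back.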
Thanks for the request! I believe the team is already aware of the models, but it may take a few days to get them scaled up on our infrastructure.
And then, expect it only for fast requests… ;p
A few months ago, the Cursor team indicated (on the documentation page) that they would add agent support to DeepSeek R1. However, that information has since been removed from the page.
Still, I believe the team should move forward with this work.
DeepSeek appears to be positioning itself as a major AI company, offering strong models that rival those from OpenAI and Anthropic.
The question is whether the new R1 is good enough for agentic tasks; the last one didn't make it.
It's already usable in Cursor; go and test it.
Well, what did you expect — yes, it’s 1 premium request, because ‘DeepSeek-R1-0528’ is very expensive and resource-intensive to maintain.
Free sugar is only for students with diabetes, and only from the specified countries.
Cursor allows you to use your own API key. Since the OpenRouter API is compatible with the OpenAI API, you could use OpenRouter's deepseek-0528 in Cursor, paying approximately $2.50 per million output tokens.
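To illustrate the workaround: because OpenRouter speaks the OpenAI chat-completions protocol, a plain HTTP request is all it takes. A minimal stdlib-only sketch, where the model slug `deepseek/deepseek-r1-0528` is an assumption to confirm on openrouter.ai, and the request is only sent if an `OPENROUTER_API_KEY` environment variable is set:

```python
import json
import os
import urllib.request

# OpenAI-compatible request body; the model slug is an assumption,
# check the OpenRouter model list before relying on it.
payload = {
    "model": "deepseek/deepseek-r1-0528",
    "messages": [{"role": "user", "content": "Write a haiku about Rust."}],
}

api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:  # only call the API when a key is actually configured
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.load(resp)
        print(reply["choices"][0]["message"]["content"])
else:
    print("Set OPENROUTER_API_KEY to send the request.")
```

In Cursor itself, the equivalent is entering the OpenRouter key and base URL in the model settings instead of calling the endpoint directly.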
yes, add it
That seems like a significant upgrade; why didn't they just name it R1.1 or R2?
…but it doesn’t work in agent mode
Sure. I was supporting this idea by arguing for the benefits of having R1 work with the agent.