I was using it, then after a few prompts I started getting "This model does not work with your current plan or API key."
Same issue, I was really enjoying using it and then this started happening to me as well.
Same here! Cursor just released a new version and the access was cut off!
Cursor shows:
INVALID MODEL
The model deepseek-r1 does not work with your current plan or api key.
I am using the pro plan. @jake what happened?
Same here
Same, unable to use it anymore. Well, it was nice for the few hours it lasted. Super frustrating but I’m sure - or at least I’d like to think - they’re working on it. I’d be more than happy to add my API key, in fact, that’s what I’d prefer (rather than seeing the model being marked up). Looking forward to it working again, hopefully tomorrow.
They have left deepseek-v3 working, but it appears much slower than R1 and Sonnet. I would expect R1 to be given full access as part of the Pro plan, if not a lower Pro plan price altogether, given that R1 is ONE THIRD of the price of Sonnet.
We are back ladies and gentlemen, R1 works.
The Cursor team is looking into this, rapidly implementing, testing, and fixing bugs as they go.
R1 is ONE THIRD of the price of Sonnet.
Not quite. Please see comments here.
As I’ve suggested here, I know it may be impossible to make all models accessible in Composer mode, but sometimes we do need to combine the different strengths of various models.
For example, the current Claude Sonnet 3.5 is good for quick responses but can get stuck in a loop when facing a complex issue, while o1 and DeepSeek R1 are good at complex issues but respond slowly. So, if Chat and Composer mode could share context, the models could interact to get the best result.
I have the deepseek-r1 model, but doing anything with Ctrl+K just times out…
I get: Error connecting to fireworks. Please try again in a few moments.
Request ID: 02858408-46d8-4fcc-ba2c-3ba2074cd6eb
When it works, it’s very slow to get started; it sometimes takes 5 minutes before it starts generating. I guess Fireworks can’t keep up with the demand.
I tested this and it is so much better than Claude. So much.
Agreed; it will probably fix their capacity issue with Sonnet 3.5.
I’m trying to use DeepSeek R1 in Cursor Composer, but it seems to be aware only of the files added to the context or tagged in the prompt. It doesn’t search the entire project for similar and related files, unlike 3.5 Sonnet. For more complex projects, this becomes inefficient. While it has a great thinking step, it often ends up creating a new file for fixes or changes instead of addressing the core issue.
Hey, working across the codebase is possible in Agent mode, but this model doesn’t operate in that mode yet. You can use the @codebase function for a similar effect.
That is exactly what we all hope for.
I’d still pay if the main LLM could be a local DeepSeek.
To be clear, DeepSeek (both v3 and R1) is included in Cursor Pro now as premium models, so you no longer need an API key to use it in Cursor.
For now, however, these models do not yet work with Agent mode!
Thank you so much guys, great quick work.
I’m back to Sonnet atm as R1 was just toooooooo slow. I’m hoping to use R1 in chat mode more now.
Does this mean that if we have finished our 500 premium requests, we will have to wait even for DeepSeek models?
I would assume so, yes, but the queues will be more akin to those on the GPT models, not Claude!