I feel like DeepSeek is intentionally slower in its responses for R1 because of how it’s designed, right?
It literally writes its thought process out in the open and then writes code. To me this is great!
BUT - I have had many instances of Cursor abruptly cutting off mid-response because the context is out of bounds or something of the sort.
I’m worried this will occur when I interact with DeepSeek, since it needs to think a lot before writing the code.
Any thoughts on this, @cursor?
Can I just say “Resume where you left off” to DeepSeek and not have it start its thinking process all over again?
Hey, we actually run DeepSeek on Fireworks, not through DeepSeek’s own systems, so we may have different limitations here, but I would assume you can tell the AI to resume, and it would do so!
Best thing to do is try it out, as DeepSeek R1 is available in our Pro plans now.
I’d be surprised if it was, but I cannot confirm which version we are running at this moment in time (different team, all of which are asleep right now haha!)
DeepSeek v3 is a non-premium model, but DeepSeek R1 is a full premium model. The cheap prices offered by DeepSeek are discounts applied directly by them, but the cost to run the model ourselves (while maintaining privacy and security) is higher!
@debian3 Yes, we use Fireworks, as we have existing agreements regarding privacy and security that still apply to DeepSeek R1.
@Rain Not currently. Your only other option is to use an API key, which would let you use DeepSeek (or any model) in the chat and CMD+K, but API keys don’t work in Composer.
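For reference, the API-key route works because DeepSeek exposes an OpenAI-compatible API, so any tool that lets you override the OpenAI base URL can talk to it. Here is a minimal sketch of the request such a setup would send; the base URL and model name come from DeepSeek’s public docs, and the key is a placeholder:

```python
import json

BASE_URL = "https://api.deepseek.com"  # DeepSeek's OpenAI-compatible base URL
API_KEY = "sk-..."                     # placeholder, not a real key

def build_chat_request(prompt: str, model: str = "deepseek-reasoner") -> dict:
    """Return the URL, headers, and JSON body for a chat-completion call.

    "deepseek-reasoner" is the R1 model name in DeepSeek's docs;
    "deepseek-chat" would target V3 instead.
    """
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("Resume where you left off.")
print(req["url"])  # https://api.deepseek.com/chat/completions
```

Note this is just the wire format: whether Cursor’s own UI honors a custom base URL for every feature (e.g. Composer) is exactly the limitation described above.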
Hi @danperks, have you been able to confirm whether DeepSeek R1 via Fireworks, as it integrates with Cursor, uses the full 670-billion-parameter model?
Regarding premium use: once the first 500 requests are exhausted, will we be able to use R1 as a “slow request” like with other models, or will it work differently?
Hello. I’m new to Cursor, so I’m a bit lost on which models are premium and which aren’t. In my account settings I see that only Sonnet 3.5 (by the way, are Sonnet 3.5 and Sonnet 3.5 20241022 the same, or is 3.5 the older model?) and GPT-4o are considered premium models, while GPT-4o-mini and cursor-small have unlimited usage. Are you saying that DeepSeek R1 usage will be counted against the premium quota (500 fast requests per month), while DeepSeek V3 is unlimited like 4o-mini? Thanks.
P.S. Off-topic: what about the Gemini models listed in the Cursor IDE settings? Do they count towards premium usage or have unlimited fast usage?
I think it would be a nice touch to have this shown in the IDE - for example, an icon for premium models and ∞ for unlimited non-premium models.
Given the reduced cost of R1 compared to previous premium models, do you plan to have each request count as 1 premium use, or 1/3 to 1/5 of a premium use?
Currently, the cost for us to run it puts it in the premium models group. DeepSeek are likely discounting their API to gain traction, but we do not use their API directly due to the lack of a privacy agreement.
We are actively working on DeepSeek support, both in speed and cost, so I would expect to see some changes on this in the near future.
How is DeepSeek R1 available? I was trying the hack of using the OpenAI configuration, but that doesn’t enable DeepSeek for Composer. Did you post anywhere - preferably with screenshots - how to configure it? Thanks!