Potential concern with Deepseek R1

I feel like DeepSeek is intentionally slowing down the responses for R1 because of how it's designed, right?
It literally writes its thought process out in the open and then writes code. To me this is great!

BUT - I have had many instances of Cursor abruptly cutting off mid-response because the context is out of bounds or something of the sort.

I’m worried this will occur when I interact with Deepseek and it needs to think a lot before writing the code.

Any thoughts on this, @cursor?

Can I just say "Resume where you left off" to DeepSeek and not have it restart its thinking process from the beginning?


Hey, we actually run DeepSeek on Fireworks, not through DeepSeek’s own systems, so we may have different limitations here, but I would assume you can tell the AI to resume, and it would do so!

Best thing to do is try it out, as DeepSeek R1 is available in our Pro plans now.


Got it. Thanks Dan! @danperks

I hope you guys can put some more time and energy into this model as it’s pretty friggin revolutionary.

One last question - is this the 670-billion-parameter model?


I’d be surprised if it was, but I cannot confirm which version we are running at this moment in time (different team, all of which are asleep right now haha!)

Understood. Thanks @danperks

Let us know what you find. Keep up the great work at Cursor!


You're right, this has happened to me many times.
All I say then is "continue" and the model resumes where it left off.

Really happy to hear this. I was concerned about data security with the official DeepSeek API, but shouldn’t be an issue on Fireworks!


Hi @danperks , quick question, is DeepSeek r1 counted as a premium model request or 1/3 of a premium model request (like haiku)?

DeepSeek V3 is a non-premium model, but DeepSeek R1 is a full premium model. The cheap prices offered by DeepSeek are discounts applied directly by them, but the cost of running the model ourselves (while maintaining privacy and security) is higher!


and by ourselves you mean fireworks.ai?


@debian3 Yes, we use Fireworks, as we have existing agreements regarding privacy and security that still apply to DeepSeek R1.

@Rain Not currently. Your only other option is to use an API key, which would let you use DeepSeek (or any model) in the chat and CMD+K, but API keys don't work in Composer.
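For anyone going the API-key route: DeepSeek's own API is OpenAI-compatible, so a quick way to sanity-check a key before wiring it into an editor is a plain HTTP call. This is a minimal sketch, assuming DeepSeek's documented base URL and the `deepseek-reasoner` model name for R1; `$DEEPSEEK_API_KEY` is a placeholder for your own key.

```shell
# Hedged sketch: call DeepSeek R1 through its OpenAI-compatible chat endpoint.
# Assumes the documented base URL and model name; replace $DEEPSEEK_API_KEY
# with a real key before running.
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
        "model": "deepseek-reasoner",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

If the key works here but not in the editor, the problem is the integration (e.g. Composer not supporting API keys) rather than the key itself.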

Hi @danperks, have you been able to confirm whether DeepSeek R1 via Fireworks, as integrated with Cursor, uses the 670-billion-parameter model?

Regarding premium use, once the first 500 requests are exhausted, will we be able to use r1 as a “slow request” like with other models or will it work differently?

  1. Not yet, will follow up once I hear back from the team.
  2. Yes, it should do!

Hello. I'm new to Cursor, so I'm a bit lost on which models are premium and which aren't. In my account settings I see that only Sonnet 3.5 (by the way, are Sonnet 3.5 and Sonnet 3.5 20241022 the same, or is 3.5 the older model?) and GPT-4o are considered premium models, while GPT-4o-mini and cursor-small have unlimited usage. Are you saying that DeepSeek R1 usage will be counted against the premium quota (500 fast requests per month), while DeepSeek V3 is unlimited like 4o-mini? Thanks.

P.S. Off-topic - what about Gemini models that are listed in the Cursor IDE settings? Are they counted towards premium models or have unlimited fast usage?


I think it would be a nice touch to have this shown in the IDE - for example an icon displaying :coin: or :gem: for premium models, and ∞ for unlimited non-premium models.


Hey, you are correct with this, v3 is a non-premium model, but R1 is premium at the moment.

I believe the Gemini models are premium, but I’d have to check with the team.


Given the reduced cost of R1 compared to previous premium models, do you plan to have this count as 1 premium use or 1/3 - 1/5 premium use per request?


Currently, the cost for us to run it puts it in the premium models group, as DeepSeek are likely discounting their API to gain traction, but we do not use the API directly due to the lack of privacy agreement.

We are actively working on DeepSeek support, both in speed and cost, so I would expect to see some changes on this in the near future.


How is DeepSeek R1 available? I was trying to use the hack of going through the OpenAI configuration, but that doesn't enable DeepSeek for Composer. Did you post somewhere - preferably with screenshots - how to configure it? Thanks!

Hey, you should be able to enable it in your Cursor Settings, under the models page!