Please add DeepSeek R1 model

Any idea how I can use the R1 API with Cursor?

Use OpenRouter as the custom OpenAI endpoint, add the model name exactly as it appears on OpenRouter, and disable all other models.
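To sanity-check that the OpenRouter side of this workaround is set up correctly, you can hit the same OpenAI-compatible endpoint directly before pointing Cursor at it. This is a stdlib-only sketch; the endpoint path and payload shape follow the standard OpenAI chat completions format, and `OPENROUTER_API_KEY` is assumed to be set in your environment.

```python
# Sketch: calling DeepSeek R1 through OpenRouter's OpenAI-compatible
# chat completions endpoint, using only the standard library.
import json
import os
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str, model: str = "deepseek/deepseek-r1"):
    """Build an OpenAI-style chat completions request for OpenRouter."""
    payload = {
        "model": model,  # model ID exactly as listed on OpenRouter
        "messages": [{"role": "user", "content": prompt}],
    }
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(
        OPENROUTER_URL, data=json.dumps(payload).encode(), headers=headers
    )

req = build_request("Say hello in one word.")
# Uncomment to actually send the request (needs a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this call works but Cursor doesn't, the problem is in the Cursor-side settings rather than the key or model ID.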

As mentioned above, this needs to be better. There's no reason in 2025 to still be using hacky workarounds when an additional slot for an OpenAI-compatible endpoint could be added.


I mean, their investors are OpenAI :joy: The only thing keeping me from jumping to another IDE is loyalty and support. Cursor helped me a lot when it was first released.

For some reason, when using deepseek-v3, no usage is added to either the premium model or the free model counters in Settings on the site.

Hey @AbleArcher, I tried this method but it isn't working. Not sure what I'm doing wrong:

  1. OpenRouter's endpoint: openrouter.ai/api/v1
  2. Model name on OpenRouter: deepseek/deepseek-r1 or deepseek-r1
  3. my-deepseek-api-key
  4. Disabled all other models

Anything I’m doing wrong?

In fact, everyone is wondering when deepseek-r1 will be added, but there is no explanation from the Cursor team on the forum. Many questions have been asked, but none answered.

Please at least give a date, or say whether it will be added soon, because the model is very successful. I've even postponed development of a project I'd been working on for a long time until it's added.


Hey, we don’t have anything to say about this yet. I think we need to test this model first, just like we did earlier with deepseek-v3, which we added to the list of models. As soon as we have more information, we’ll let you know.


I'm using it now, but it is not stable.
The conversation can break with different kinds of response errors, like:

The first message (except the system message) of deepseek-reasoner must be a user message, but an assistant message detected.
or
deepseek-reasoner does not support successive user or assistant messages (messages[1] and messages[2] in your input). You should interleave the user/assistant messages in the message sequence.
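Both errors come from deepseek-reasoner's requirement that, after any system message, the conversation starts with a user turn and strictly alternates user/assistant. A client-side workaround is to sanitize the history before sending it; this is a sketch of one possible approach (merging successive same-role turns with a blank line is my assumption, not DeepSeek's prescribed fix):

```python
# Sketch of a client-side workaround for the two deepseek-reasoner
# errors above: drop leading assistant turns and merge consecutive
# same-role messages so roles strictly alternate.

def sanitize_for_reasoner(messages):
    """Return a copy of `messages` with strictly alternating roles."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    # The first non-system message must be a user message.
    while rest and rest[0]["role"] != "user":
        rest.pop(0)

    merged = []
    for m in rest:
        if merged and merged[-1]["role"] == m["role"]:
            # Successive same-role turns are not allowed: merge them.
            merged[-1]["content"] += "\n\n" + m["content"]
        else:
            merged.append(dict(m))
    return system + merged

history = [
    {"role": "system", "content": "You are helpful."},
    {"role": "assistant", "content": "Earlier reply."},
    {"role": "user", "content": "First question."},
    {"role": "user", "content": "Follow-up in the same turn."},
]
clean = sanitize_for_reasoner(history)
# `clean` is now [system, user] with the two user turns merged.
```

This is lossy (a stray leading assistant turn gets dropped), but it keeps the request within what the model accepts.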

I've reported the bug:


+1, and please enable it for agents.


I am really looking forward to agent mode supporting reasoning models like DeepSeek R1, the o1 series, and especially the o3 series.


+1 to this

Currently using it via Cline and it’s impressive.

Alternatively, please let us override the API used on a per-model basis, so we can add custom models without totally kneecapping all of the existing models (and features like agent mode along with them).

Right now, if we want to use something like OpenRouter, we need to use the OpenAI API settings to do it. But doing this also requires disabling all other models, which isn't great if you just want to use a given model via the chat or normal composer (for example, models that don't or can't support whatever is required for agent features, such as deepseek-r1).

Having something like per-model settings that let you override the API type, key, and endpoint without having to use those settings for everything would be greatly appreciated.
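To make the proposal concrete, the idea is something like the following, which is entirely hypothetical (these are not real Cursor settings keys): each custom model carries its own API type, endpoint, and key, while models with no override keep the built-in defaults and their agent features.

```json
{
  "models": {
    "deepseek-r1": {
      "apiType": "openai-compatible",
      "baseUrl": "https://openrouter.ai/api/v1",
      "apiKey": "${OPENROUTER_API_KEY}",
      "model": "deepseek/deepseek-r1"
    },
    "claude-3.5-sonnet": {}
  }
}
```

Under a scheme like this, adding an OpenRouter-hosted model would no longer require disabling everything else.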


Please, guys, add this R1 model and enable it in agent mode ASAP!!! It has been ages already without you guys even adding deepseek-v3. This space moves fast and you should too; Cline implemented R1 one day after it came out.


Would we get more usage for the same monthly fee, since DeepSeek is much cheaper? :thinking:


Still looking for it as well.

Now it's available; turn it on in the settings.


It's already in!


Why is it still not in agent mode?


It still isn't supported for agentic operations by the model provider itself.


They are using Fireworks for deepseek-v3, and deepseek-r1 costs $8.00 per million tokens (input and output) on Fireworks, so it will likely be considered just another premium model.
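As a back-of-envelope check on why that pricing puts it in premium territory, here is a quick cost estimate; the token counts are made-up illustrative numbers, not real Cursor request sizes:

```python
# Rough per-request cost at a flat $8.00 per million tokens
# (input and output combined), as quoted above for Fireworks.
PRICE_PER_MILLION = 8.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars at a flat per-token rate."""
    return (input_tokens + output_tokens) * PRICE_PER_MILLION / 1_000_000

# e.g. a 20k-token context plus a 2k-token reasoning-heavy answer:
cost = request_cost(20_000, 2_000)  # -> $0.176 for that one request
```

At agent-mode context sizes, that adds up quickly, which is consistent with treating it as a premium model.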

It looks like it's already been added. Here:
