DeepSeek-R1 is cut off

DeepSeek-R1 was generating tokens and then stopped mid-response.

What is the max number of tokens for DeepSeek-R1 set by Cursor?


Hey, sorry for the hassle.

If we could get a request ID, that would help a lot so we can pinpoint the source of the error.

With privacy mode disabled:

  1. Press Cmd+Shift+P
  2. Select “Report AI Action”
  3. Select the bad request
  4. Select “Copy Request ID”

Same here, second time today. Hope this is the correct request ID: c5a3cd33-10b4-4d94-8857-6474f5a69eb1

I’ve also run into this problem.

I believe this is due to a timeout. Right now there is a maximum time limit for responses, after which the request times out.

Because DeepSeek R1 is quite slow and its responses can be very long (deep pontification), that combination can result in cut-offs like this.
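To illustrate the failure mode, here is a minimal TypeScript sketch (not Cursor’s actual implementation; the endpoint URL, payload shape, and deadline value are all placeholders) of a hard per-response deadline on a streamed completion. When the deadline fires mid-stream, the caller is left with only the partial text that arrived before the abort:

```typescript
// Illustrative sketch only (not Cursor's actual code): a hard per-response
// deadline on a streamed completion. The endpoint URL, payload shape, and
// deadline value are placeholders.

async function streamWithDeadline(
  url: string,
  body: unknown,
  deadlineMs: number
): Promise<string> {
  const controller = new AbortController();
  // Abort the whole request once the deadline passes, even mid-stream.
  const timer = setTimeout(() => controller.abort(), deadlineMs);

  let text = "";
  try {
    const res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
      signal: controller.signal,
    });
    const reader = res.body!.getReader();
    const decoder = new TextDecoder();
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      text += decoder.decode(value, { stream: true });
    }
  } catch (err) {
    // A slow "thinking" model that exceeds the deadline ends up here; the
    // caller only gets the partial text streamed so far.
    console.warn("Stream aborted, likely deadline reached:", err);
  } finally {
    clearTimeout(timer);
  }
  return text; // possibly truncated
}
```

A slow reasoning model like R1 simply runs past that deadline more often, which is why the truncation shows up as responses that stop mid-sentence.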

The engineering team is actively working on a solution. Thank you for reporting this.


I lost the request ID…
Next time I’ll make sure to include it in the bug report. What should I do about this one? Just treat it as a timeout / expected behavior? (From a user’s perspective it’s frustrating…)

I’d at least expect a button that forces the model to keep generating tokens (like in LM Studio), though I’m not sure that’s possible in Cursor’s case, since it’s not quite the same thing…

Try to keep your R1 requests a bit simpler while the engineering team works on a quick fix 🙂


Same, I actually have yet to get a full response from R1 because of how long the thinking takes.


Hey, we are working on this currently. DeepSeek R1 works differently from o1, so some work is needed to ensure Cursor is fully compatible with, and properly integrates, its output.

There will likely be some changes around this in upcoming updates!
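For context on what “works differently” can mean in practice: R1 typically emits its chain-of-thought inline, commonly wrapped in <think>…</think> tags ahead of the final answer, whereas o1 keeps its reasoning hidden. The sketch below separates the two; the tag-based format is an assumption about R1’s raw output, and splitR1Output is a hypothetical helper, not Cursor’s actual parser:

```typescript
// Illustrative sketch only, not Cursor's parser. Assumes DeepSeek-R1 wraps
// its chain-of-thought in <think>...</think> tags before the final answer,
// so a client expecting o1-style output has to separate the two.

function splitR1Output(raw: string): { reasoning: string; answer: string } {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  const reasoning = match ? match[1].trim() : "";
  const answer = raw.replace(/<think>[\s\S]*?<\/think>/, "").trim();
  return { reasoning, answer };
}

// Example:
// splitR1Output("<think>Check the edge cases first.</think>Use a binary search.")
// => { reasoning: "Check the edge cases first.", answer: "Use a binary search." }
```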


A few days ago I conducted some unscientific testing of R1. The only R1 failure in Cursor was a cut-off, seemingly due to a timeout around the 5-minute mark.