Hello,
I enabled the DeepSeek R1 model using the OpenAI API key setting.
It works fine while the conversation is short, but once the conversation gets longer I consistently get this error in the response:
deepseek-reasoner does not support successive user or assistant messages (messages[1] and messages[2] in your input). You should interleave the user/assistant messages in the message sequence.
It looks like Cursor somehow split one message into two and broke the user/assistant alternation. Why can't a dummy message be inserted between the two to keep the sequence valid?
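Just to make the suggestion concrete, here is a minimal sketch of the kind of fix I mean, assuming the request uses standard OpenAI-style message dicts (the helper name and placeholder text are only illustrative, not Cursor's actual code):

```python
def insert_dummy_messages(messages):
    """Insert a placeholder message of the opposite role between any two
    consecutive user/user or assistant/assistant messages, so the sequence
    alternates the way deepseek-reasoner expects."""
    fixed = []
    for msg in messages:
        if (
            fixed
            and msg["role"] in ("user", "assistant")
            and fixed[-1]["role"] == msg["role"]
        ):
            # Break up the duplicate-role pair with a dummy turn
            dummy_role = "assistant" if msg["role"] == "user" else "user"
            fixed.append({"role": dummy_role, "content": "(continued)"})
        fixed.append(msg)
    return fixed


if __name__ == "__main__":
    broken = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "First part of my request."},
        {"role": "user", "content": "Second part, split by the client."},
    ]
    for m in insert_dummy_messages(broken):
        print(m["role"], "->", m["content"])
```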
I'm not sure whether this is a bug in Cursor or whether DeepSeek simply doesn't strictly follow OpenAI's API.
Either way, I would really appreciate it if you could take a deeper look into it.
Thank you!
Jack