I don’t understand why the recently added DeepSeek-v3 model is so slow to generate responses. I don’t know how many tokens it is supposed to produce, but the latency is severe. Compared with the official website, chat.deepseek.com, the difference is obvious: I have to wait almost 5 minutes for a relatively short block of code. Is this intentional? Which provider does the Cursor team use for DeepSeek-v3? At the very least, please let us add our own API key in a separate section, as you already do for Google, Anthropic, and OpenAI.
Hey, I’ve already answered this question: