Is it possible to add DeepSeek v3 to Models? Cause it's really better than Claude.
The official response 9 days ago said that testing was underway, but there has been no further news since.
Why?
What's wrong with asking that Q? Has it been 'Tavistock'd' or 'Rothschild'd'?
We're constantly evaluating models, and we'll add them when they surpass our existing model range on our internal benchmarks!
While DeepSeek might work well in certain use cases, we haven't yet found it worthwhile to add right now.
Another point to consider for your internal benchmarks: DeepSeek-V3 is currently much, much cheaper than gpt-4o and Claude.
1M input tokens
- $0.14: DeepSeek-v3 https://api-docs.deepseek.com/quick_start/pricing
- $2.50: gpt-4o https://openai.com/api/pricing/
- $3.00: Claude 3.5 Sonnet (Anthropic pricing page)
1M output tokens
- $0.28: DeepSeek-V3
- $10.00: gpt-4o
- $15.00: Claude 3.5 Sonnet
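To put the price lists above side by side, here is a quick sketch that computes the total cost of a hypothetical job consuming 1M input and 1M output tokens at the quoted per-million-token rates (the `job_cost` helper and `PRICES` table are illustrative names, not part of any vendor SDK):

```python
# Per-million-token prices quoted above: (input $/1M, output $/1M).
PRICES = {
    "DeepSeek-V3": (0.14, 0.28),
    "gpt-4o": (2.50, 10.00),
    "Claude 3.5 Sonnet": (3.00, 15.00),
}

def job_cost(model, input_tokens, output_tokens):
    """Return the dollar cost for the given token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

for model in PRICES:
    print(f"{model}: ${job_cost(model, 1_000_000, 1_000_000):.2f}")
```

At these rates, 1M tokens in plus 1M tokens out costs about $0.42 on DeepSeek-V3 versus $12.50 on gpt-4o and $18.00 on Claude 3.5 Sonnet, i.e. roughly a 30-40x gap.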
I'd like to request a more seamless integration of DeepSeek with Cursor, particularly when using our own API key. Currently, I need to disable the OpenAI API key in Cursor every time I want to switch to a model other than Deepseek-Chat. This workflow is frustrating and inefficient.
I use DeepSeek frequently for simple and medium tasks, and while it's a fantastic tool, having to toggle settings each time is quite tiresome. Could you please consider either adding DeepSeek directly to Cursor, or providing a more user-friendly way to switch between models without having to modify the API key settings repeatedly?
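For context on why the toggling feels unnecessary: per DeepSeek's own docs, its API is OpenAI-compatible, so any client only needs a different base URL and model name. A minimal sketch (endpoint and model name from DeepSeek's quick-start docs; the API key is a placeholder you'd supply yourself):

```shell
# Chat completion against DeepSeek's OpenAI-compatible endpoint.
# Set DEEPSEEK_API_KEY to your own key first.
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

Because the request shape is identical to OpenAI's, a per-model base-URL override in the client is in principle all that's needed, rather than swapping the global OpenAI key back and forth.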
Thanks for your consideration!
In fact, its task completion rate exceeds not only 4o-mini's but even 4o's.
DeepSeek v3, while it may not be perfect in some ways, is inexpensive enough for Cursor to consider. Image recognition? No, not many people need it.
it would make life with Cursor so much easier if you just gave us a respectable custom model section in the settings:
I second the above post from raw.works.
WE LOVE CURSOR BUT PLEASE LET US ADD CUSTOM MODELS AND USE OUR OWN OPENROUTER KEY ONLY FOR THAT MODEL.
lemme check
Hey guys, we are looking at improving the integration of custom LLM APIs in a future update!