I'm new to Cursor and so far I like it a lot. I was using the Claude 3.5 Sonnet model before, but it got slow, so when you guys added DeepSeek V3 I switched to it. It's good and fast, but not as smart; most of the time it doesn't understand what you really want and makes mistakes. Since you added DeepSeek R1 I've been using that, and I'm really shocked by how smart and good it is. But to make it usable, the Cursor team really has to figure out how to fix this: if you have a long file, say 300 lines, R1 does its thinking first, which eats into the context, so mid-way through writing code it spits out this error and fails, and I'm forced to go back to V3. I hope Cursor has a workaround for this.
So if you tell it "please continue", it picks up where it left off pretty well.
And the premium model counter goes up by one.
Perfect business strategy, isn’t it?
Cursor’s business strategy is not to frustrate their users.
The team is working tirelessly to provide the best-in-class AI IDE experience. If the Cursor team could click a button to eliminate all bugs and fix the long tail of edge cases that crop up when working with LLMs, they would.
Until then, the team appreciates any and all product feedback.
Hey, the DeepSeek v3 and DeepSeek R1 models are still in experimental mode and may produce errors. We would appreciate it if you could share any issues you encounter. Thank you.
I tried ("please continue", "continue right where you left off") maybe 5-6 times; it still starts thinking again from scratch, and since the code is long (I asked for a redesign of the UI/UX) it still fails. So "continue" won't work all the time.
I don't think it's a business strategy, to be honest, because I don't run into this issue with the other, non-reasoning models; the problem only shows up around the context/token limit. And even if it were, I don't really care at this point; I'm already at the end of my subscription and have maxed out my premium fast requests.
I really think that if you guys can fix DeepSeek R1 it will be a really great deal, because 500 fast requests is nothing for 30 days; they'll be maxed out in 2-3 days if you work on a big project, and after the 500 fast requests Cursor becomes unusable. I'd rather give up on my project than wait 10 minutes for Claude to answer, only for it to leave me 2 errors. But since R1 and V3 were added I'm really happy, and I think most Cursor customers are happy too, because they're fast and really smart. So if the R1 issues are fixed, I believe many of us won't rely on Claude again.
Thank you, we will post any issue we face, no doubt.
To be fair, DeepSeek-R1 is performing reasonably well. It’s comparable to OpenAI’s o1 and definitely outperforms Claude 3.5 Sonnet on most coding benchmarks.
Aider published their results yesterday (using an architect/editor setup), and here they are:
When DeepSeek-R1 acts as the architect and Claude 3.5 Sonnet as the editor, the performance surpasses OpenAI's o1!
I've taken a break from Cursor since DeepSeek-R1's release, and based on my tests, I find this combination (DeepSeek-R1 as architect, Sonnet 3.5 as editor) quite effective. I'm not sure whether Cursor is "tightly coupled" with certain models (like Anthropic's or OpenAI's), but I hope Cursor will continue to adopt the best available models to provide users with optimal and accurate results. Otherwise, they risk losing users.
cf. R1+Sonnet set SOTA on aider’s polyglot benchmark | aider
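For anyone who wants to try the same architect/editor split outside Cursor, here is a minimal sketch using aider's Python scripting interface. It assumes the `Model` constructor accepts an `editor_model` argument and that `edit_format="architect"` selects architect mode; the model ID strings and the file name are placeholders you would adjust for your own provider, aider version, and project.

```python
# Minimal sketch of the R1-as-architect / Sonnet-as-editor setup via aider's
# Python scripting API. The editor_model argument, the edit_format value, and
# the model ID strings are assumptions; check the docs for your aider version.
from aider.coders import Coder
from aider.models import Model

# The reasoning model plans the change; Sonnet turns the plan into concrete edits.
main_model = Model(
    "deepseek/deepseek-reasoner",                # assumed ID for DeepSeek-R1
    editor_model="claude-3-5-sonnet-20241022",   # assumed ID for Claude 3.5 Sonnet
)

coder = Coder.create(
    main_model=main_model,
    edit_format="architect",         # architect/editor mode described above
    fnames=["ui/settings_page.py"],  # hypothetical file to work on
)

# One instruction, similar to the UI redesign request mentioned earlier in the thread.
coder.run("Redesign the settings page layout; keep the existing data bindings.")
```

The appeal of the split is that the reasoning model only has to produce a plan, while the faster editor model handles the mechanical edits, which is roughly the setup the aider benchmark linked above measured.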
I haven't done any complex tasks with R1 yet, but I did some UI designs and it's 1000 times better than Sonnet.
Something tells me you don't hear it enough: THANK YOU! Thank you for everything you are doing. I know you are in a tough position. OpenAI has gazillions of dollars riding on AI being expensive, and I doubt they are very supportive of you all using R1 or DeepSeek. But if you do, you will win the market and your profits will go up.
Try reverting to a previous working version, then open a new composer.
We believe very long or slow DeepSeek requests are timing out right now, but we’re hoping to increase the thresholds in the next day or so to stop this happening as much.
Is the model running on Cursor’s servers or is it sending our data to China?
Yes, even V3 sometimes times out. If you guys fix this soon it will be a great improvement. R1 IS A GAME CHANGER.
No, they are running it through their provider (Fireworks); that's why it's slower. It's not on Chinese servers.