I really appreciate DeepSeek R1’s capabilities, but I think it’s crucial to address its current speed issues. While the model delivers high-quality responses, its slow response time significantly impacts usability in many scenarios.
Having tested R1 on Groq, I noticed it runs considerably faster there, roughly 6.5×. This speed difference makes a huge difference in the overall development workflow. A faster R1 implementation in Cursor would greatly enhance the user experience and make it more practical for day-to-day coding tasks.
Would it be possible to optimize R1’s performance in Cursor? The quality is already there; we just need the speed to match.
As you can see on T3 Chat, the speed is fabulous, and the price stays at $8 with all models included, so the costs still seem reasonable.
Take the time to think about it :3