In my opinion, the SOTA model available in Cursor is DeepSeek R1.
Claude 3.5 Sonnet is useful for writing short code snippets, but it struggles to produce code that stays consistent with the overall design intent. That makes it a poor fit for tasks that require long-term codebase maintenance or structured, incremental development.
o3-mini has an excessively long queue, so the waiting is tedious, and its reasoning process is hidden, which makes it hard to notice and correct when it goes off track. As a result, users end up wasting a significant share of their request allocation and time, especially when fast requests are needed.
DeepSeek R1, on the other hand, offers sufficient performance and exposes its reasoning process, making it the most suitable AI model for Cursor. Moreover, since it is an open-source model, it has strengths in scalability and in incorporating community feedback.
I hope Cursor will take a more proactive approach to integrating and supporting DeepSeek R1 going forward. Please also expand support for DeepSeek V3!