Request to Integrate NVIDIA DeepSeek-R1 FP4 via Model Providers for Cursor AI

Hi Cursor AI Team,

I mostly use DeepSeek R1 in Cursor because it’s incredibly affordable and, for me, more useful than Sonnet 3.5 across a wide range of coding and reasoning tasks. I noticed Cursor already supports DeepSeek R1, but I’d like to suggest integrating the NVIDIA-optimized DeepSeek-R1 FP4 version. This checkpoint, available on Hugging Face, is quantized for TensorRT-LLM on Blackwell hardware; NVIDIA claims it enables up to 25x higher revenue at 20x lower cost per token while retaining 99.8% of the FP8 model’s accuracy on MMLU.

Could you explore partnering with one of your current model providers, or finding a new one, to bring this FP4 build into Cursor AI? It would improve performance and cost-efficiency, which fits my workflow and the value I already get from DeepSeek R1.
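
In the meantime, a workaround along these lines might be possible today if a provider exposes the model behind an OpenAI-compatible API (Cursor lets you override the OpenAI base URL and add custom model names). Below is a minimal sketch; the base URL and model id are assumptions based on NVIDIA’s API catalog, and a provider hosting the FP4 build specifically could use a different id:

```python
# Minimal sketch: querying DeepSeek-R1 through an OpenAI-compatible endpoint.
# Assumptions: the base_url points at NVIDIA's API catalog and the model id
# "deepseek-ai/deepseek-r1" is served there; an FP4-specific build may differ.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",                          # placeholder credential
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-r1",  # assumed model id; replace with the provider's FP4 id
    messages=[{"role": "user", "content": "Write a binary search function in Python."}],
    temperature=0.6,
    max_tokens=1024,
)

print(response.choices[0].message.content)
```

If the same endpoint and model id were set in Cursor’s model settings, the editor could route requests to the FP4-served model without any other changes to the workflow.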

Thanks

Source: https://x.com/NVIDIAAIDev/status/1894172956726890623?t=Htnd7fe3QeB6KgPm3pmRTw&s=19
