I notice that when I use R1 and look at my Cursor usage, it doesn't count against my premium model monthly request limit. It doesn't increase my fast requests used either.
Based on this I would assume it's unlimited use, but I'm not sure. It seems like the Cursor team rushed to get the R1 model live (which I'm happy about), but the usage limit system has been confusing for a while. How is it calculated?
Seems like they are hosting it, and it's cheap to run. If they're charging the same price as Sonnet, they're crooks. They could use it to replace Sonnet entirely, kill API spend on the premium model, and just run DeepSeek on their own hardware until Sonnet improves and can offer something better.
Yesterday, after about 7 queries, I got a message stating that DeepSeek was unavailable. How long until it is supported under usage-based pricing? I would spend $100 a month on this model!
I think the issue is that we do not see DeepSeek R1 counting towards the 'fast premium' count, which is what most people mean by premium. It would be helpful if Cursor communicated this better, or at least differently.
@kupferdu Currently it's a premium model, so it will count towards your usage, but we may change this in the future as we make our deployment more efficient!
@seth It should work with usage-based pricing already. I believe this was an issue on our end, but it should all be fixed now!
Hi, one question: do I have unlimited access to the premium models with Pro, or is the cap 500 requests? As I understand it, I can use premium models without limit, but after 500 requests they're just slower?
Surely a replica of Cursor could be created that charges $5 per month, with DeepSeek R1 used on usage-based pricing via their API? I guess Cursor will stay at $20 until a competitor comes in lower, forcing them to either drop their price or fade out? I assume Microsoft's VS Code will release something on par soon?
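For the direct-API route, here's a minimal sketch of what that would look like, assuming DeepSeek's OpenAI-compatible endpoint at https://api.deepseek.com and the `deepseek-reasoner` model id for R1 (check their docs before relying on either):

```python
# Rough sketch: calling DeepSeek R1 directly on usage-based pricing.
# Assumes DeepSeek's OpenAI-compatible endpoint and the "deepseek-reasoner"
# model id for R1 -- verify both against their current docs.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder, use your own key
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible API
)

response = client.chat.completions.create(
    model="deepseek-reasoner",            # R1, billed per token
    messages=[
        {"role": "user", "content": "Refactor this function to remove the nested loops."},
    ],
)

print(response.choices[0].message.content)
```

You'd still have to rebuild all of Cursor's editor-side tooling around that call, which is where the $20 presumably goes.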
I've tried DeepSeek V3, but its coding skills are not as good as Claude 3.5 Sonnet's. DeepSeek R1 also seems to be unlimited, but it doesn't support images.
R1 is almost on par with Claude 3.5. With good prompting it's fine; I haven't had a problem, and it's loading very quickly. It's a game changer, since Cursor can run it cheaply on multiple servers to ensure fast response times, whereas Claude and OpenAI are limited to their own servers and get throttled heavily. That's my assumption. In summary: Cursor + DeepSeek R1 rocks.
I've never seen someone so stupid before. R1 is cheap now, 75% off during off-peak hours. The only issue here is that Cursor is using Fireworks, which is $8 per million output tokens, vs DeepSeek's direct API at $0.55 per million.
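To put that gap in numbers, a quick back-of-the-envelope sketch using the figures quoted above (the $8/M and $0.55/M rates and the 75% off-peak discount are this thread's numbers, not official pricing):

```python
# Back-of-the-envelope cost comparison using the rates quoted in this thread
# ($8/M output via Fireworks, $0.55/M via DeepSeek direct, 75% off-peak discount).
# These are the poster's figures, not official pricing.

FIREWORKS_PER_M = 8.00      # USD per 1M output tokens (quoted)
DEEPSEEK_PER_M = 0.55       # USD per 1M output tokens (quoted)
OFF_PEAK_DISCOUNT = 0.75    # 75% off during off-peak hours (quoted)

tokens = 10_000_000         # example: 10M output tokens in a month

fireworks_cost = tokens / 1_000_000 * FIREWORKS_PER_M
deepseek_cost = tokens / 1_000_000 * DEEPSEEK_PER_M
deepseek_off_peak = deepseek_cost * (1 - OFF_PEAK_DISCOUNT)

print(f"Fireworks:         ${fireworks_cost:.2f}")    # $80.00
print(f"DeepSeek direct:   ${deepseek_cost:.2f}")     # $5.50
print(f"DeepSeek off-peak: ${deepseek_off_peak:.2f}") # $1.38
```

At those rates the hosting choice, not the model, is most of the cost.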