Premium model usage with the Pro plan

Sorry, but I am still confused after digging through a gazillion posts here about the usage of premium models. I don’t really grasp it: slow usage, fast usage, enable usage-based pricing. One simple question.

Which models (please be explicit) can I use without limits, without activating usage-based pricing, when I am on the Pro plan? No matter if fast or slow. I only want to know which models I can use permanently without activating usage-based pricing. Many thanks if someone can answer this.

Hey, here’s an explanation about the models:

Without limitations:

  • gpt-4o-mini
  • cursor-small
  • cursor-fast
  • gpt-3.5-turbo
  • gemini-2.0-flash-exp
  • gemini-2.0-flash-thinking-exp

Premium models:

  • claude-3.5-sonnet
  • claude-3.5-haiku
  • gpt-4o
  • gpt-4
  • gpt-4-turbo
  • gemini-exp-1206

Based on usage:

  • o1
  • o1-preview
  • o1-mini (10 free requests per day)
  • claude-3-opus (10 free requests per day)

That one is 10 per day too.


First, let me say thanks for the quick answer; I appreciate your support. Honestly, I am very pleased with my experience with Cursor. My trial expires in 4 days, and I am considering buying Pro. Sorry for being a bit slow in comprehending this, but I am still confused. Do I understand correctly that with the Pro plan I get 500 fast requests per month for the premium models? Now what happens if I have used them all and I don’t want to (or can’t) buy another 500 for $20, and I also won’t activate usage-based pricing? Can I still use those models, just in slow mode, until the end of the month? It is essential for me to know this: I don’t want to get used to models that I can then no longer use until the next month. Thanks a lot!

Thank you for correcting me.

That one is gemini-exp-1206.


You can use 500 fast requests, then unlimited slow requests. So depending on your timezone and how much capacity they have, there might be a queue or not. There were a lot of complaints here last month; queues were getting long. Now they have added capacity and there is virtually no queue (I have seen a queue once in the last month). So you can use as many requests as you want.


Thank you very much, I appreciate your help. This was what I wanted to hear. I’m not trying to stretch Cursor’s fair use, but I need access to a certain model I am used to for specific tasks. I have no problem waiting for quite a while, as long as I can still get an answer. Ready to subscribe now. Thanks! :fist_left:

Maybe this is not the place to ask this, but why can’t I use my own Azure o1, for which I pasted an API key, with my Cursor Pro subscription?
Every other model works fine. There’s another discussion about it, but nobody seems to care.
Is o1 unpopular, or is it just Azure?

My guess is that whatever API_CURSOR_AI_AZURE_API hooks are needed have likely not been fully fleshed out, because who the heck uses Azure?

I had a guess that the reason nobody responds to the “Azure o1 model API isn’t working, but the 4o model API is functional” ticket is that Azure isn’t popular, and you have solidified that suspicion. Alas, some of us have to use it, due to some rare circumstances.
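
In case it helps narrow things down, here is a minimal sketch for checking whether the Azure o1 deployment itself responds outside of Cursor. The resource name, deployment name, key, and api-version below are placeholders (my assumptions), not anything Cursor uses internally. One thing worth noting: the o1 family expects max_completion_tokens rather than max_tokens, which is the kind of detail a generic integration can trip over.

```python
# Quick check of an Azure OpenAI o1 deployment, independent of Cursor.
# All names below are placeholders (assumptions), not Cursor internals.
import requests

RESOURCE = "my-resource"            # your Azure OpenAI resource name
DEPLOYMENT = "o1"                   # your o1 deployment name
API_KEY = "YOUR_AZURE_OPENAI_KEY"   # the same key you pasted into Cursor
API_VERSION = "2024-12-01-preview"  # an api-version your region supports for o1

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
)

# Note: o1-family models take "max_completion_tokens", not "max_tokens".
payload = {
    "messages": [{"role": "user", "content": "Say hello."}],
    "max_completion_tokens": 100,
}

resp = requests.post(url, headers={"api-key": API_KEY}, json=payload, timeout=60)
print(resp.status_code)
print(resp.json())
```

If this call succeeds but Cursor still refuses the model, the key and deployment are fine and the problem is on the integration side; if it fails, the error body from Azure usually says why.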
