Why is the o1-mini model limited?

I am constantly being rate limited when using the o1-mini model, with an exponential backoff applied to my requests.
I pay every month for a chat window in my IDE, and now it paywalls me out of this model even though I still have fast requests left?

What the heck is going on?
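For anyone unfamiliar with the term, "exponential backoff" just means the client waits longer after each rejected request before retrying. A minimal sketch of the pattern, with made-up names and delays rather than Cursor's actual implementation:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the 429 error an API client raises when it gets rate limited."""

def call_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a rate-limited request, doubling the wait after each failure."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RateLimitError:
            # Wait 1s, 2s, 4s, 8s, ... plus a little jitter, then try again.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
    raise RuntimeError("still rate limited after all retries")
```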

Your fast requests aren't for o1-mini, but you can use it 10 times a day without charge.

Or did you pay via the API and still get rate limited?

Hi @simplexx

This model works on usage-based pricing. Here’s some additional information:


Why is this model usage-based? It is my preferred model when coding and I am already paying $20 per month, so I don't want to pay any more than that.

He's trying to say that you can use that one 10 times a day; after that, you pay for each use.

If you switch to your own API key after those 10 tries, you pay per token instead (if your files are really long it's going to be quite expensive, probably more expensive than what they charge).
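To make "pay per token" concrete, here is a rough sketch of how the API-key cost scales with how much context you send. The token counts are made up, and the rates are the $3 / $12 per-million-token o1-mini prices quoted further down this thread:

```python
def request_cost_usd(input_tokens, output_tokens, input_rate_per_m, output_rate_per_m):
    """Rough cost of one API request, given per-million-token rates in USD."""
    return (input_tokens / 1_000_000) * input_rate_per_m + \
           (output_tokens / 1_000_000) * output_rate_per_m

# Illustrative numbers only: a long file pasted as context (~50k tokens)
# and a 2k-token answer, at $3 per million input and $12 per million output tokens.
print(request_cost_usd(50_000, 2_000, 3.0, 12.0))  # ~0.17 USD for one request
```

At that rate, a heavy day of long-context prompting adds up quickly, which is the trade-off being described above.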


Thanks for the clarification. What I am trying to say is:
what is the point of me paying $20 per month when my preferred model costs 10 cents per prompt anyway?
When it comes to paying more with my own API keys: I do not think that is likely with the o1-mini model, which is barely more expensive than 4o, unless the files are insanely long.
To sum it up: I do not want to use my own API keys, or pay extra for a model that costs barely more than the models that can be used with fast requests, especially when I have a lot of fast requests left.

What is the Cursor team going to do? Lock all future models behind a second paywall, even for paying customers? I just think it's wrong and, quite frankly, over-the-top greedy.


You don't seem to understand how 4o and o1-mini differ or how they are billed. I encourage you to try them out for yourself with the OpenAI API to get a sense of the relative costs. In my experience, a request to o1-mini is around 10x the cost of one to 4o or 3.5 Sonnet.


Your $20 payment won't buy you hundreds of dollars' worth of compute from advanced models.


o1-mini API pricing is $3 input / $12 output per million tokens, vs. $3/$15 for Sonnet 3.5.

Granted, o1 uses chain-of-thought tokens on every query, which adds to the billed output length. But it isn't a huge difference for a lot of coding queries, where writing out the code tends to dominate.

I use my own keys with the APIs within Cursor. I can tell you with full confidence that o1-mini costs substantially more than 4o. The internal reasoning tokens are billed at the output rate. It's still not terribly expensive, especially if you're a professional developer, but we're talking several dollars a day for modest usage versus tens of cents.
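To put rough numbers on that, here is a back-of-the-envelope comparison using the rates quoted above. The token counts, and especially the hidden reasoning-token figure, are assumptions for illustration, not measured values:

```python
def cost_usd(input_toks, billed_output_toks, in_rate_per_m, out_rate_per_m):
    """Cost of one request in USD, given per-million-token rates."""
    return input_toks / 1e6 * in_rate_per_m + billed_output_toks / 1e6 * out_rate_per_m

# The same hypothetical coding query on both models: 4k tokens of context
# and 1.5k tokens of visible code in the answer.
ctx, visible = 4_000, 1_500

# Sonnet 3.5 ($3/$15): you pay only for the visible output.
sonnet = cost_usd(ctx, visible, 3.0, 15.0)            # ~$0.03

# o1-mini ($3/$12): hidden reasoning tokens are billed at the output rate too.
# 5,000 reasoning tokens is an assumed figure purely for illustration.
o1_mini = cost_usd(ctx, visible + 5_000, 3.0, 12.0)   # ~$0.09

print(f"Sonnet 3.5: ~${sonnet:.2f}  o1-mini: ~${o1_mini:.2f}")
```

Even with similar list prices, the hidden tokens roughly triple the bill in this example, and the gap grows the longer the model reasons.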


The usage allowances provided in the $20/month subscription are decided based on what best fits 90% of users.

Pro allows people to use models from all the major providers, with unlimited usage of the lower-tier models and a generous allowance for the "premium" models. Some people use the faster models more, some the better ones, but the Pro subscription lets Cursor spread that usage across all users, so everyone gets the best value for money regardless of their model choice.

While o1-mini can be a very good model for coding, its performance is very dependent on the job you ask of it, and it is expensive and slow to run. Also, it's just as good at getting things wrong as it is at getting them right. Because it's not quite as core to most users as the other models, and due to its cost, it doesn't make sense to add it into the Pro subscription as a major allowance.

Many users just want to try it, so Cursor gives everyone 10 requests for free, to see if it's worth enabling usage-based billing for their use case - a bit of a "try before you buy" feature.

The selection of models and their allowances will shift and change as the available models get better and cheaper. You may find in the future that o1-based models have their allowances increased if they get cheaper or better. But the usage is set as it is now to keep the Pro price low, while still allowing usage-based billing for those who want to pay more for o1-based models.
