We’re currently receiving a large number of slow requests and could not queue yours. Please try again. If you see this message often, please contact hi@cursor.sh
Does "often" mean every message I'm trying to send now? Ha.
So this has become unusable now. I'm unable to make any kind of request with Cursor; it doesn't even half-finish an update. It might make a slight change, then I get the slow requests / too many people error, with the "email us if you see this often" note, etc.
I'd buy more credits, but the paid credits I did use didn't exactly result in anything, thanks to constant errors, deleted code, etc., so that doesn't feel like the smartest idea. At least I'd get the errors returned quicker for a few hours, ha.
We're currently seeing high demand on our slow-request infrastructure. While we are working to reduce these errors, you can bypass them by purchasing additional fast request allowance through the website.
The slow pool is intended to be a buffer for people who are just over their fast allowance at the end of the month and don't want to pay for extra usage, but we cannot guarantee the slow pool's availability the way we can for fast requests.
I exhausted my Premium models, 500/500, about a week ago. I don't want to upgrade to Business, so I've been running on Hard Limit ever since. But it seems that has run out too. What do I do? Will adding Fast Requests bring my Premium models back?
If you log in to your account on the website, you have the option to increase the quantity of fast requests.
You can do this with the “Add Fast Requests” button on the left side of the page.
If you add another 500 fast requests, for $20 extra a month, you'll bypass the slow queue. Note this is a monthly package, so it will auto-renew on your billing date, but you can cancel or increase it at any time.
Will Fast Requests give premium model responses and not gpt-4o-mini or cursor-small? I have no-limit gpt-4o-mini requests available in my account; I'm not really interested in those.
All fast requests run on the best, premium models (Claude 3.5 Sonnet, GPT-4o). We never downgrade you to a lower model.
The only time we force a different model is if the one you want to query is unavailable, in which case we swap to a similarly premium model as an alternative.
It seems to me that we are paying for "fast requests until they run out, and then use slow requests".
But your comment seems to be more along the lines of "fast requests, and then we can't guarantee that you'll be able to use the software".
Which one is the correct one?
I wasted hundreds of my fast requests because the Composer kept making the same mistake over and over. It took me 5 hours to sort out, and in the end I had to go in a different direction, even though the thing it messed up had been created by Composer itself a few days prior and worked fine.
Slow requests are intended as a sort of "backup" if you run out of fast requests, but because they're unlimited, we have to limit their usage to avoid $20/month users consuming hundreds of dollars' worth of LLM services.
To do this, we use the "slow pool". We dedicate a set amount of resources to the slow pool, enough that we can offer unlimited usage while still keeping the price where it is. The pool usually queues these requests, but sometimes the queue fills up and the request cannot be queued at all (roughly like the sketch at the end of this reply).
You can also use the smaller models at any time without the queues, but we have to carefully balance the resources we dedicate to the slow pool to ensure we don’t burn through money and end up having to increase the price!
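To make the "could not queue" behaviour above concrete, here's a minimal sketch of a bounded slow pool that rejects new work once it's full. It's illustrative only; the capacity, names, and messages are assumptions, not Cursor's actual implementation.

```python
import queue

# Illustrative only: a bounded "slow pool" that rejects new work once the
# queue is full, which is roughly why you see "could not queue yours".
# The capacity and messages here are made up, not Cursor's real values.

SLOW_POOL_CAPACITY = 100  # hypothetical limit on queued slow requests

slow_pool = queue.Queue(maxsize=SLOW_POOL_CAPACITY)


def submit_slow_request(request_id: str) -> str:
    """Try to enqueue a slow request; reject immediately if the pool is full."""
    try:
        slow_pool.put_nowait(request_id)
        return "queued (will run when a slow slot frees up)"
    except queue.Full:
        return ("We're currently receiving a large number of slow requests "
                "and could not queue yours. Please try again.")


if __name__ == "__main__":
    # Fill the pool, then submit one more request than it can hold.
    for i in range(SLOW_POOL_CAPACITY + 1):
        result = submit_slow_request(f"req-{i}")
    print(result)  # the last request is rejected because the pool is full
```

Fast requests bypass this pool entirely, which is why adding fast request allowance avoids the error.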