I think you are not being transparent about the slow requests.
Two weeks ago there was an error about the Anthropic pool being full, and that's when the slow requests became extremely slow. It was my second week at Cursor and my premium requests had just run out, which means I had not used many slow requests, yet I was facing the same 3-5 minute waiting time as everyone else. That proves fewer requests do not mean a prioritized spot in the queue.
It went on for a few days and then got back to normal (a few seconds of waiting time, which is great), although I must mention that only Composer was fixed. Chat still took 3-5 minutes (not sure if this was a bug, but I noticed one day Chat was fast while Composer was slow, and the next day the opposite).
Now the slow requests have become extremely slow again (both Chat and Composer). I have used more slow requests during these two weeks, but my waiting time looks similar to when the issue originally started.
Your team said the issue is on Anthropic's side, so care to explain why GPT is also slow?
Suggestions:
- Put DeepSeek into agent mode.
- Lower the number of fast requests in the free trial (honestly, 20-30 requests were enough to convince me, so genuine customers will buy no matter what).
- Raise the Pro price by a couple of dollars (I don't mind paying a couple of dollars more if it frees me from this frustration).
A 15-30 second wait is still reasonable, but this 3-5 minute frustration is making me look at your competition.
Hey, thanks for sharing your experience with the slow pool!
We are currently working on making DeepSeek both faster and available in agent mode, which will hopefully act as a stopgap while our issues with Anthropic continue.
For both of your other suggestions, we want to keep things as close to where they are as possible, to ensure pricing stays stable and predictable for both existing and new users.
We've now added premium requests to our usage-based pricing feature, so for $0.04 per request you can get fast Claude requests without locking into another block of 500, which may be enough to help you avoid the slow pool each month. You can also set a limit (e.g. $10 for 250 requests) if you want a fixed budget each month.
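For anyone budgeting, the per-request math above can be sketched quickly. The $0.04 rate is the one quoted in this thread; the helper function itself is just illustrative, not part of any Cursor API. Working in integer cents avoids floating-point rounding surprises:

```python
PRICE_CENTS_PER_FAST_REQUEST = 4  # $0.04 per extra fast request, as quoted above

def fast_requests_for_budget(budget_usd: int) -> int:
    """Return how many extra fast requests a monthly dollar cap covers.

    Uses integer cents so the division is exact (no float rounding).
    """
    return (budget_usd * 100) // PRICE_CENTS_PER_FAST_REQUEST

print(fast_requests_for_budget(10))  # a $10 cap covers 250 extra fast requests
print(fast_requests_for_budget(20))  # $20 covers 500, i.e. a second full block
```

So a $10 monthly cap matches the "250 requests" example in the reply, and $20 of overage would buy the same 500 fast requests as the base plan.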
I believe there is some dishonesty here. Besides the overselling done by Cursor (which is still actively selling, by the way, even when you know you cannot deliver), the fact that GPT and Sonnet have the EXACT same delay is very suspicious. This product is unworkable (for us) at the moment. Waiting 8 minutes for a reply or a timeout (50/50), and then getting a reply from what I can only guess is a severely degraded version of Sonnet, is ridiculous.
With AI advancements and the technology Cursor is providing, $20/month realistically won't give you a premium experience; many users, including myself, have to budget $40-$60/month to truly enjoy the premium experience.
Jeremy, the real issue isn't that $20/month can't provide a premium experience; it's that Cursor is advertising one thing but delivering another. If $40-$60/month is what's actually required for a smooth, reliable experience, then that should be the price from the start, so customers can make an informed decision about whether it's worth it. Right now they're luring people in at $20, only for them to realize later that they need to pay more just to get what was promised. That's not a fair business model; it's a bait-and-switch.
Yes, you are right on that, but I believe the $20/month is advertised as its own experience: they guarantee you 500 fast requests and offer slow requests without specifying how long they take, plus access to their autocomplete model, codebase search, and Composer.
So at face value they did provide what they advertised, but I can understand the bait-and-switch feeling; the slow pool should have been specified a bit more clearly. It's also hard to pin that down when the user base grows and the AI providers offer different limits.
They offer 500 fast requests and unlimited slow requests for $20/month. They also offer alternative models (R1 and DeepSeek V3, both currently unlimited) and a bunch of others: o1-mini (10 per day), Opus (10 per day), unlimited 4o-mini, etc. They also give you the option to pay $0.04/query to skip the queue if you can't wait, or to keep using the tool during peak usage after you've run out of fast requests. They provide incredible value. They are certainly paying for each request people make, and adding Composer (with agent) certainly increased the average number of queries people make. I find they do a very good job of delivering the best they can while building a very complex tool. When they have extra capacity, they let people use it without any waiting time. They are very generous. I'm sorry you feel this way.
Let’s be real: this isn’t just a queue, it’s an intentional throttle. Cursor advertises “unlimited slow premium requests,” but what we’re getting is a minimum 4 to 10-minute delay, no matter the time or load. That’s not “may be queued,” that’s an artificial bottleneck.
Worse, they're still selling Pro plans while admitting their system is overburdened. That's textbook overselling: charging for a service they can't actually provide. And blaming Anthropic for this is highly suspicious when GPT-4 has the exact same crippling delays. If the problem were truly on Anthropic's side, why is OpenAI's model just as broken?
Cursor needs to come clean. Either fix the service, or be honest that “Pro” just means paying to wait.
Are we paying users being pooled in with free users in the slow queue? Maybe they could offer a 'not-quite-so-slow' queue for those of us who pay, instead of sticking us in the pool with the freebies.
I do agree, something should be done. These waits are ridiculous at times.