Slow requests due to overload - Time for an alternative?

I’ve reached my limit with Cursor AI. Paying $20/month should guarantee a premium experience, yet I’m consistently facing 4 to 10-minute wait times for each request. This isn’t just a minor inconvenience; it’s a complete disruption to my workflow.

Key Issues:

  • Severe Productivity Loss: Waiting several minutes for each response turns a tool designed to enhance efficiency into a bottleneck. The cumulative effect is hours of lost productivity.

  • Overloaded Servers Due to Over-Subscription: It’s evident that Cursor has taken on more users than its infrastructure can handle. This over-subscription leads to prolonged wait times and frequent errors, indicating a lack of foresight in capacity planning.

  • Unjustifiable Cost: There are free alternatives that offer faster and more reliable performance. Paying a premium for a subpar experience feels like a blatant rip-off.

  • Lack of Transparency and Accountability: The generic responses about capacity issues with partners like Anthropic are wearing thin. It’s time for Cursor to take responsibility and provide concrete solutions.

Call to Action:

Cursor AI needs to:

  1. Stop Overloading Servers: Implement measures to prevent over-subscription and ensure the existing user base receives the service they’ve paid for.
  2. Provide Immediate Compensation: Offer refunds or discounts to affected users until these issues are fully resolved.
  3. Communicate Clearly: Move beyond vague apologies and provide detailed explanations and timelines for fixes.

If these problems aren’t addressed promptly, I won’t be the only one reconsidering my subscription. Cursor AI must act now or risk losing its paying customers en masse.

Hey, unfortunately, the issues with slow Claude 3.5 Sonnet requests are out of our hands right now, but we are working to get this resolved as soon as we can. We don’t have an ETA, as this is not an issue with Cursor itself, but with the capacity Anthropic gives us!

For now, I’d recommend trying alternative models, as the queues should be much shorter. DeepSeek v3 (which can be enabled in your settings) is apparently on par with Claude 3.5 Sonnet, and is a non-premium model, so will have no queue!

If you want a refund, feel free to drop us an email at hi@cursor.com, and we’ll be happy to get back to you!

While supplier issues might play a role, Cursor ultimately oversold its capacity, and the responsibility lies with them to deliver what was promised. Also, as far as I know, DeepSeek V3 doesn’t even support Composer, so that explanation seems questionable. Transparency and actionable solutions are what’s needed here.

Care to explain why GPT is also slow if the issue is with Anthropic’s capacity?


Nice find… I hate GPT, so I never tested that. But side by side, using a stopwatch, I get the EXACT same delay time in both. That almost seems like a programmed delay instead of a slow pool…

This product is unworkable for us now, and emails are not being answered. So, great service all around. Overselling, blaming the supplier, and seemingly creating your own problems.

OpenAI models will still have a queue if you are on slow requests, but a much shorter queue than Anthropic models.

DeepSeek models can be used in the Composer, but we are currently working on adding agent mode, which will hopefully be available soon!

Perhaps it is something with my install (on my personal MacBook, iMac, but also a Windows machine) then, because this last week I have timed all interactions with Sonnet and GPT, and I get a minimum of 7 and a maximum of 19 minutes for a reply from this “Slow request”.

Also, Sonnet and GPT have the exact same delays at the same times. This is absolutely unworkable. I also sent an email regarding this issue more than a week ago and have gotten no reply… So… Please assist.

If someone reading this knows of a good alternative please let me know.

Hey, to be clear, this is mainly the case due to your super high usage of the slow pool.

The slow pool was always intended as a backstop for the fast requests, so that you were never left without the ability to query a premium model, even if you had run out of your fast request usage allowance.

Many users, including yourself, now rely on the slow pool for a lot of their usage, and while this is possible, we have to prioritize those users who do rely on it as a backstop.

As mentioned, you always have the ability to use a different model for requests that could be answered with no issues by non-premium models.
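To show what I mean, here is a heavily simplified sketch of how a usage-weighted queue like this could work. It is purely illustrative (the class and fields are invented for this example), not our actual scheduler:

```python
import heapq
import itertools

# Hypothetical sketch of a usage-weighted slow pool: NOT Cursor's actual
# implementation. Each queued request gets a priority based on how many
# slow requests its user has made recently, so light users (who hit the
# pool only as a backstop) are served ahead of heavy users.

class SlowPool:
    def __init__(self):
        self._queue = []               # min-heap of (priority, tiebreak, request)
        self._usage = {}               # user_id -> recent slow-request count
        self._counter = itertools.count()

    def enqueue(self, user_id, request):
        recent = self._usage.get(user_id, 0)
        self._usage[user_id] = recent + 1
        # Heavier recent usage means a larger priority value, so the
        # request is served later.
        heapq.heappush(self._queue, (recent, next(self._counter), request))

    def dequeue(self):
        # The request from the lowest-usage user comes off the heap first.
        if not self._queue:
            return None
        return heapq.heappop(self._queue)[2]
```

In a scheme like this, someone who only occasionally falls back to the slow pool keeps a low priority value and is served quickly, while sustained heavy usage pushes every new request further down the queue.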

So let me get this straight: you’re openly admitting that I’m being intentionally throttled because I use the service too much? That’s not a technical limitation—it’s a business decision.

This directly contradicts the “unlimited” service that Cursor advertises. Now it turns out that if you rely on the slow pool too much (whatever that means), you get deprioritized. Nowhere was this made clear in the marketing.

Also, your explanation doesn’t hold up. I’ve gone days without using Cursor and still hit these ridiculous 20-minute delays. How am I suddenly a “super high usage” user when I don’t even use the service every day? And even if I did—what exactly does unlimited mean to you?

Instead of fixing the issue, you’re just telling users they’re the problem. This is misleading, unacceptable, and frankly, a garbage way to treat paying customers.

Hey, firstly, to be blunt but clear - yes, you are programmatically being throttled based on your volume of use.

We do this with the best interests of the vast majority of our users at heart, who use closer to ~600 premium requests a month, vs someone like yourself with >6000 premium requests a month.

We could’ve gone the other way, to offer some kind of fair usage limit, but decided there would never be a number that would be a good fit for everyone, hence the system we have in place.

Regarding your statement about going days without using Cursor, the queue algorithm looks at a much longer window than that when deciding how your past requests affect your queue time.

You still have unlimited usage, even at more than 10x the fast usage you have in your plan, but we have to protect the users I’ve described above.
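To make the rolling-window point concrete, here is a toy example. The window length and the delay formula below are invented for illustration and are not our real numbers:

```python
from datetime import datetime, timedelta, timezone

# Toy illustration (not Cursor's actual algorithm) of why a few idle days
# don't reset queue times: usage is summed over a long rolling window,
# and that total maps to an added delay.

ROLLING_WINDOW = timedelta(days=30)  # assumed window length

def queue_delay_seconds(request_times, now=None):
    """Map a user's recent slow-request volume to an extra queue delay."""
    now = now or datetime.now(timezone.utc)
    monthly_volume = sum(1 for t in request_times if now - t <= ROLLING_WINDOW)
    # Illustrative mapping: ~600 requests/month adds about a minute of
    # delay, >6000 adds ten minutes or more, capped at twenty minutes.
    return min(monthly_volume * 0.1, 1200.0)
```

Under a window like this, skipping Cursor for a day or two barely changes the 30-day total, which is why queue times can stay long even after a break.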

If you are unhappy with this, feel free to email us at hi@cursor.com, and we’ll be happy to issue you a refund for the $20/m Pro plan, and cancel your subscription.

Dan, thanks for also admitting what we already knew: Cursor deliberately throttles paying users, all while advertising “unlimited” usage. This isn’t just misleading—it’s outright deception.

You claim this is about “protecting the majority,” but that’s nonsense. No one forced Cursor to support heavy usage. If you didn’t want to, you should have been upfront about it instead of luring users in with false promises. Now, when people call you out, you suddenly admit there’s a hidden throttling system? That’s not a policy, that’s a bait-and-switch.

And let’s be clear: your infrastructure issues aren’t our problem. Every serious SaaS platform scales its pricing based on usage. You chose to go the shady route instead—hiding limits, throttling users in secret, and only admitting to it when people start complaining. Worse, even when users cut back on usage for days, they’re still stuck in the “slow lane” because of a vague, undisclosed queueing system? Where was that ever mentioned before?

The most insulting part is you still call this “unlimited.” If requests take 10+ minutes because of a secret cap, that’s a limit. Spinning it as “you still have unlimited access” is just dishonest.

And now, when users push back, your best solution is “take a refund and go away”? That doesn’t fix the fact that Cursor intentionally misled its customers. The issue isn’t just about $20, it’s about trust—and Cursor has burned through all of it.

We waited patiently, assuming these delays were technical issues, but instead, it was intentional throttling all along. That’s why we’ve canceled our subscription and will be telling others to do the same. There are better alternatives out there that are not dishonest.

Cursor had a chance to be honest, but instead, you doubled down on deception. Good luck with that.

I appreciate your opinion. I have just posted a reply to you in another thread.