I wanted to share my recent experience and see if anyone else is running into the same issue or has a fix.
Previously, I was making “slow” (i.e., free/low-priority) requests to some of the top AI models (Claude-3.7, 3.5, Gemini), and honestly, they worked almost as well as the paid options. Sure, you’d have to wait a bit longer, but you’d eventually get your answer.
Suddenly, that’s changed for me. No matter which of these top models I’m trying (3.7, 3.5, Gemini), I’m now getting no response at all. The requests just hang indefinitely and return nothing—no error, no timeout, nothing.
Is anyone else having this problem with slow/free requests? Did something change recently with how these providers handle free requests or rate limits?
It really feels like Cursor got their userbase established and are now trying to squeeze out or heavily restrict free usage. Has anyone else noticed this shift?
Would really appreciate any insights or if you’ve found a solution!
Yes, truly unlimited free/slow requests aren't possible anymore; they have added limits. The new version of slow requests is very slow, ten minutes or more. You can turn on usage-based pricing instead.
It's not expensive. There is nothing wrong with the limitations. Running the big models is costly; if you use your own API key, you'll see that usage inside Cursor is actually very cheap.
Slow requests still work, they've just become much slower than before for a long, long time. I noticed this slowdown in the new version. And Claude sometimes shows a loading state that never completes.
I'm having the same issue, mate. It's as if I've sent my request off to a whole new world light years away. I swear this happens at least once a month and it just destroys my UX.
I'm having the exact same problem. None of the advanced models are usable for me. I've been waiting in the queue for hours now with absolutely no response. It's incredibly frustrating.
My responses from Gemini Pro early yesterday came within 10 seconds. Today they're taking over 4 minutes every time. This happened on Monday, the 5th, as well.
Is there any chance it's on Google's end rather than Cursor's? Surely they must be monitoring average response times across the models.
So many people seem to notice these massive slowdowns at the same time, and there’s no communication from Cursor about what causes them.
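One way to test whether the slowdown is on Google's end is to time the same prompt against the Gemini API directly with your own key and compare that against what you see in Cursor. Here's a minimal timing sketch; the helper is generic, and the commented-out Gemini client usage is only an assumption about your setup (it requires your own API key and the `google-generativeai` package):

```python
import time
from typing import Any, Callable, Tuple

def time_request(call: Callable[[], Any]) -> Tuple[Any, float]:
    """Run a request callable and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = call()
    elapsed = time.perf_counter() - start
    return result, elapsed

# Hypothetical direct-to-Google comparison (substitute your own key):
#   import google.generativeai as genai
#   genai.configure(api_key="YOUR_KEY")
#   model = genai.GenerativeModel("gemini-1.5-pro")
#   result, secs = time_request(lambda: model.generate_content("ping"))

# Stand-in call so the sketch runs without a key or network access:
result, secs = time_request(lambda: time.sleep(0.1) or "ok")
print(f"response={result!r} took {secs:.2f}s")
```

If the direct call comes back in seconds while the same model inside Cursor takes minutes, the queueing is on Cursor's side rather than Google's.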
Pretty sure it's a design choice: they attracted a huge number of new users with a free subscription for students and unlimited slow requests, and now they want to squeeze them. It's clever: they give you 500 fast requests for free (also nerfed, because you don't get the full context window), which you burn through in a day or two, and then push you onto usage-based pricing at a crazy high price for a nerfed context window, or you can pay an extra 20% for the full context window… I don't want to play their game, so I'll use my free student subscription for the 500 fast requests and then switch to Cline or Roo Code with my own Gemini API keys when they run out. I'll pay less and get the same or more, even in the worst case. If I didn't have a free Pro subscription and they kept this design, I would 100% cancel my subscription. Please consider doing the same, because only then will they change something.
(If a staff member is reading this: I would 100% prefer not to have the free Pro subscription and to pay $20, but with truly unlimited slow requests instead of this ■■■■.)
Unlimited slow requests, but not unlimited replies? Haha, good wording, right… they give us unlimited requests, and the response we get is "Connection failed. If the problem persists, please check your internet connection or VPN."