Since the latest update, every Thinking model has become painfully slow. When I send a slow request, responses now sometimes take 20–30 minutes, and even the regular models need 5–6 minutes to generate an answer. I’ve been digging around and found mixed explanations: some say the slowdown is intentional, while others claim it’s a bug introduced by the update. Is anyone else seeing this lag, and has the team issued an official statement?
Can you provide some more details about the issue? Is the request stuck saying ‘Generating…’, or does it say ‘Slow request, get fast access here…’? Are you getting repeated ‘resume’ or ‘try again’ prompts? Have you run an internet speed test or checked your ping?
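If it helps to rule out the network side, here’s a rough sketch for timing a couple of HTTPS round trips from your machine. The URLs are just placeholders for any reachable site, not Cursor’s actual API hosts, so swap in whatever you want to test against:

```python
# Quick latency sanity check (sketch). Endpoints below are illustrative
# placeholders, not Cursor's actual API hosts.
import time
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://www.cursor.com",   # any reachable site works as a baseline
    "https://www.google.com",
]

for url in ENDPOINTS:
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=10)
    except urllib.error.HTTPError:
        pass  # server answered with an error status, but the round trip completed
    except Exception as exc:
        print(f"{url}: failed ({exc})")
        continue
    elapsed_ms = (time.monotonic() - start) * 1000
    print(f"{url}: {elapsed_ms:.0f} ms")
```

If those numbers look normal (well under a second), the delay is almost certainly on the server/queue side rather than your connection.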
I tried to draw attention to this issue, along with some other problems people have reported on the forum, but the post seems to have been filtered out or shadow-banned, unfortunately.
Please press Ctrl+Shift+P and run ‘Developer: Capture and Send Debugging Data’ to give Cursor adequate information to work with. Filling out the bug report template also helps in these situations.
Hey @jdubb75 thanks for stepping in to help. After I run “Developer: Capture and Send Debugging Data” where does the report actually end up? I don’t see any confirmation and can’t find a file in my workspace, so I’m not sure what to attach. For context, every prompt just sits on “Generating…” with the “Slow request, get fast access here” text, and even simple questions or small code-gen tasks take about 5–6 minutes on regular models and 20–30 minutes on Thinking. Any fix would be great, thanks! (Side note: I’ve been on a paid Cursor plan for a few months and have never seen slow waits like this before, even for slow requests)
Quick update: I rolled back to v0.49.6 and the lag is exactly the same. Thinking-model prompts still sit in the queue for 20–30 minutes. I spun up brand-new chats and even a blank project to rule out bloated context, but every request still crawls; it’s basically unusable for now.
Could you check whether you’ve used up your 500 fast requests? There have been some reports of slowdowns in the slow queue, and I’m also seeing that some providers in certain regions are running under higher load.
Getting the exact same issue, haha. I’ve been using Cursor for over a month now; I burned through my fast requests in the first two days, and over the last few days it has slowed down so severely that it’s unusable.
Check Reddit. They have timers now, 240 seconds to be exact, probably as a cost-saving measure. If you overuse your requests, you don’t end up at the back of the queue; you end up on a timer, like a time-out. It isn’t documented anywhere, and you have to go well past the 500 to trigger it. Still, dishonesty is dishonesty. SaaS models rely on making money off the users who under-utilize the service while potentially losing a bit on those who overuse it. You can’t have the best of both worlds, at least not legally, unless you’re transparent about it.
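To be clear about what I mean by “a timer”, here’s a toy sketch of the kind of logic people are describing. This is pure speculation on my part; every name and number is a guess (nothing is documented), and the 240 s value is just what folks on Reddit report seeing:

```python
# Purely speculative illustration of the *claimed* behaviour: heavy overuse
# triggers a fixed cooldown instead of normal queueing. All names and numbers
# are guesses, not anything documented by Cursor.
import time

FAST_REQUEST_QUOTA = 500    # plan allowance
OVERUSE_THRESHOLD = 1000    # guess at "well past the 500"
COOLDOWN_SECONDS = 240      # the reported timer length

class HypotheticalThrottle:
    """Toy model of a per-user throttle with an overuse cooldown."""

    def __init__(self) -> None:
        self.requests_used = 0
        self.cooldown_until = 0.0

    def on_request(self) -> str:
        now = time.monotonic()
        if now < self.cooldown_until:
            return f"on timer: wait {self.cooldown_until - now:.0f}s"
        self.requests_used += 1
        if self.requests_used > OVERUSE_THRESHOLD:
            # Past the overuse threshold, subsequent requests hit a cooldown.
            self.cooldown_until = now + COOLDOWN_SECONDS
        if self.requests_used <= FAST_REQUEST_QUOTA:
            return "fast request"
        return "slow queue"
```

Again, just an illustration of the mechanism being claimed, not how Cursor actually implements it.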