Is there nothing I can do?
The unlimited slow requests they promised don’t even work.
@seonedir We believe we have found the root cause and pushed a fix out, but you may need to start a new Composer session to check whether you are still facing this bug. Let me know if you’re still having issues!
@yuege Hey, what do you mean by this? Our slow requests should work the same as our fast ones, just with the potential for a queue during peak times.
Yesterday it was somewhat working: about 30% of the time I’d get the “slow request, fast access here” message, then no queue, and then a response usually a few seconds later. The rest of the time it failed, but that could have been my connection issues, and I could just keep retrying (annoying but workable).
Today it shows “slow request, fast access here” much more consistently after just a few seconds (that part is great), but it keeps failing EVERY SINGLE TIME with “try again”. I tried with Claude and with GPT4o, with Composer and Chat, and https://status.cursor.com/ didn’t show any problem… I checked my connection and it is good today. Yikes… "If the problem persists, please check your internet connection or VPN, or email us at hi@cursor.sh. Request ID: bc00c3ee-2f9c-4106-a704-4dce4b3f8c57 "
I’m writing here because of the consistent failures despite paying every month. I hope for a queue system that works reliably (when a queue is necessary during high demand), so I don’t have to worry that my request will just be ignored and spend all my time checking on it, cancelling, then retrying.
– Edit/feedback –
It could have been a temporary problem, either because of high demand or a technical issue that wasn’t global or reported on the status page, since staff were quick to answer and there were no problems for the rest of the day.
For newbies at coding like me, these tools have become an essential part of our job, so it can get frustrating when they don’t work as expected. But make no mistake: we are grateful when they do work, even though we might only leave a message when things go wrong, haha!
I don’t even believe the Cursor team when they blame Anthropic for this. GPT4o requests also take 3–5 minutes for chat messages to go through, and between agent calls.
$20 a month for such a ■■■■■■ product.
Yes, the problem I mentioned has been solved. However, the response time is around 5 minutes. I have stopped working with Cursor for now and have gone back to classic Claude. If this speed problem isn’t solved, I will also cancel my 1-year Pro membership.
Hey, wanted to clarify what was happening here!
When we first built Cursor and demand was low, the balance between Claude users and GPT users was pretty even. At the time, we therefore built the slow pool queue to be shared across all the premium models.
As you’ve rightly suggested, due to the issues with Claude, this doesn’t make as much sense anymore. As such, and based on your comment, we’ve just separated the queues for Claude and OpenAI models, meaning your queue to use either GPT-4 or GPT-4o should hopefully be shorter than the one for Claude.
We believe the change is live now, though there’s a chance it takes a short while to propagate. Thanks for pointing this out, as this makes a lot of sense to implement.
Thanks for clarifying, Daniel. The last two changelog posts are Dec 17 and Nov 24, so I hope the update you mentioned will be coming within the next week.
Hey, the queues are already separated, but as this is a change to our backend servers and not to the client itself, it wouldn’t end up in the changelog (although maybe it should!).