Request ID (privacy disabled): f9a7046a-279b-47e5-ab48-6e8dc12daba1
The first thing that jumps out is your version. 2.4.37 is from February 12. The current stable is 2.6.x, and there have been a lot of fixes for streaming timeouts and agent hangs since then. This is very likely related.
Here’s what I’d try:
1. Update Cursor: Help > Check for Updates, then restart after it finishes. Make sure you land on 2.6.x.
2. Run network diagnostics: Cursor Settings > Network > Run Diagnostics. Share the results here.
3. Test in an empty folder: File > Open Folder > pick an empty directory > open a new chat > send a simple message. This helps rule out project-specific issues.
The team is aware of these hanging timeout issues, and they’ve been addressed in newer versions. Let me know how things go after the update.
@HungLePPlus @AnyKamisato can you also share your Cursor versions? Help > About Cursor > Copy. The same troubleshooting applies; make sure you're on the latest version.
I am running the newest version of Cursor and have the exact same problem. It looks like some Claude service degradation issue, but it basically means burning tokens for no output. Looking at the Claude status dashboard, they do not have any unresolved issues, so this is strange.
Hey @HexadecimalHUN, thanks for chiming in with your version; that's helpful. This rules out the outdated version as the only cause.
Can you share a couple of Request IDs from the stuck or slow requests? Chat context menu at the top right > Copy Request ID. That’ll let us trace what’s happening on the backend.
Also, which model are you seeing this with? You mentioned Claude; is it Sonnet 4 or something else?
The team is tracking this issue. Your report, and those request IDs, will help increase visibility and narrow down the root cause.
Let me know how things go, and drop those request IDs when you can.
@deanrie Sure, no problem.
9a1e6093-6460-43cb-b17c-51526f7c8410
I am generally using the Opus 4.6 model. I am not sure if the same issue happens with other models like Sonnet, but it looks like Claude models are the ones affected by this service degradation.
Conversations have been ending today with "Taking longer than expected", but for this specific request I did not pause the conversation, so you can trace it back. In many other cases I just paused it because it kept burning tokens, which cost me around 4-5% of my overall Ultra usage. That is really heavy!
I am facing the same issue: "Taking longer than expected" for Opus 4.6. Even when it does start streaming, token generation is way too slow; it takes more than 5 minutes to generate around 200-300 tokens, I think.
@HexadecimalHUN - thanks for the request ID and details; that really helps.
It looks like Opus 4.6 is under higher load right now, so requests can get stuck on “Taking longer than expected” and stream slowly. The team is aware and is monitoring it.
About the burned tokens: I get it; it sucks to lose 4 to 5% of Ultra usage on stuck requests. If you think usage was charged incorrectly, email support at [email protected] with the details, and the team can check.
@Prince - same situation. Please share the request ID (top right chat menu > Copy Request ID). It helps with backend debugging.
@deanrie
I see your point, but realistically I have no proof of that, as I paused those conversations and retried with a different model/mode like premium.
That is why I consider charging people for tokens on a promise scammy: the user basically has to trust the model provider that the request will be fulfilled, and has no real option to abort a request once it has been fired. I say scammy because service degradation issues happen so frequently that maybe 1 out of 20 requests lands in this category, and we pay for them anyway. It is like ordering food from a delivery service, except your order might arrive half eaten or never arrive at all, and you have no real way to complain, because realistically you are not recording every request ID; that is unrealistic. We have been through this with support, and that was always the final verdict in those cases. I am not even sure this is legal in the EU.
@auooru - I see the screenshots and the new request ID. GPT-5.2 Extra High requests were affected too. I still recommend updating to 2.6.x. Newer versions handle timeouts better, so stuck requests get cancelled faster instead of hanging for 7+ minutes. When you can, please update and tell me how it goes.
@AnyKamisato - thanks for the version. You’re on 2.6.22, so your version looks fine. For more debugging, we need the request ID from a stuck request right after it happens (menu icon in the top-right of the chat > Copy Request ID). Which model are you using?
Overall: the team is aware of slow and stuck requests. Part of it was load on the Anthropic API (Opus 4.6), and part was routing on our side. We’re tracking it. If you think usage was charged incorrectly for stuck requests, email [email protected] with details.