Requests Extremely Slow & Stuck, Many Wasted Calls

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

This problem started only in the past three days: my requests run extremely slowly and often get stuck completely, with no progress.

Dozens of requests have already been wasted, and this kind of failure never happened in my previous use.

I’ve tried both HTTP/1.1 and HTTP/2 to troubleshoot, but neither makes a difference.
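For reference, this is roughly how the two transports can be compared from the same machine, outside Cursor. It is only a sketch: the URL is a placeholder (not a Cursor endpoint), and it measures raw protocol latency, nothing Cursor-specific.

```python
# Minimal probe: compare plain request latency over HTTP/1.1 vs HTTP/2.
# Requires httpx with HTTP/2 support: pip install 'httpx[http2]'
import time
import httpx

URL = "https://example.com/"  # placeholder; point at any HTTPS endpoint you can reach

for use_http2, label in [(False, "HTTP/1.1"), (True, "HTTP/2")]:
    with httpx.Client(http2=use_http2, timeout=30.0) as client:
        start = time.monotonic()
        resp = client.get(URL)
        elapsed = time.monotonic() - start
        # resp.http_version reports the protocol that was actually negotiated
        print(f"{label}: negotiated {resp.http_version}, {elapsed:.2f}s, status {resp.status_code}")
```

Both protocols looked healthy in my case, which is why I suspect the problem is not in the local transport.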

Steps to Reproduce

Start an Agent or Plan mode conversation.

Expected Behavior

The conversation completes normally.

Screenshots / Screen Recordings

Operating System

Windows 11

Version Information

Version: 2.4.37 (user setup)
VSCode Version: 1.105.1
Commit: 7b9c34466f5c119e93c3e654bb80fe9306b6cc70
Date: 2026-02-12T23:15:35.107Z
Build Type: Stable
Release Track: Default
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Windows_NT x64 10.0.26200

Additional Information

Request ID: ac581d96-188a-4195-a127-66252578314d

More freezes occurred. Request IDs:

  • e5d95672-636e-44e7-9265-822c141c01aa
  • e461b7ff-93e9-4d33-95e0-27ade84fb9e0

I’m using the models: GPT 5.2 Extra High and GPT 5.2 Extra High Fast.

Does this stop you from using Cursor?

Yes - Cursor is unusable



Hi there!

We detected that this may be a bug report, so we’ve moved your post to the Bug Reports category.

To help us investigate and fix this faster, could you edit your original post to include the details from the template below?

Bug Report Template

Where does the bug appear (feature/product)?

  • Cursor IDE
  • Cursor CLI
  • Background Agent (GitHub, Slack, Web, Linear)
  • BugBot
  • Somewhere else…

Describe the Bug
A clear and concise description of what the bug is.


Steps to Reproduce
How can you reproduce this bug? We have a much better chance at fixing issues if we can reproduce them!


Expected Behavior
What is meant to happen here that isn’t working correctly?


Screenshots / Screen Recordings
If applicable, attach images or videos (.jpg, .png, .gif, .mp4, .mov)


Operating System

  • Windows 10/11
  • MacOS
  • Linux

Version Information

  • For Cursor IDE: Menu → About Cursor → Copy
  • For Cursor CLI: Run agent about in your terminal
IDE:
Version: 2.xx.x
VSCode Version: 1.105.1
Commit: ......

CLI:
CLI Version 2026.01.17-d239e66

For AI issues: which model did you use?
Model name (e.g., Sonnet 4, Tab…)


For AI issues: add Request ID with privacy disabled
Request ID: f9a7046a-279b-47e5-ab48-6e8dc12daba1
For Background Agent issues, also post the ID: bc-…


Additional Information
Add any other context about the problem here.


Does this stop you from using Cursor?

  • Yes - Cursor is unusable
  • Sometimes - I can sometimes use Cursor
  • No - Cursor works, but with this issue

The more details you provide, the easier it is for us to reproduce and fix the issue. Thanks!


Very slow right now, I can’t work.


I’m experiencing the same problem, and so are my colleagues. They’ve had to switch to other tools. Will this issue be resolved?

Hey, thanks for the report and the request IDs.

The first thing that jumps out is your version. 2.4.37 is from February 12. The current stable is 2.6.x, and there have been a lot of fixes for streaming timeouts and agent hangs since then. This is very likely related.

Here’s what I’d try:

  1. Update Cursor: Help > Check for Updates, then restart after it finishes. Make sure you land on 2.6.x.
  2. Run network diagnostics: Cursor Settings > Network > Run Diagnostics. Share the results here (a manual probe is also sketched after this list).
  3. Test in an empty folder: File > Open Folder > pick an empty directory > open a new chat > send a simple message. This helps rule out project-specific issues.
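If you want to sanity-check streaming behavior outside Cursor, a small probe like the one below can show whether a request stalls before any bytes arrive or just streams slowly. This is a rough sketch: the URL is a placeholder for any streaming endpoint you can reach, not an official Cursor one.

```python
# Rough probe: time-to-first-byte vs gaps between streamed chunks.
# A long time-to-first-byte suggests the request is stuck before streaming;
# long gaps between chunks suggest slow generation on the backend.
import time
import requests

URL = "https://example.com/stream"  # placeholder streaming endpoint

start = time.monotonic()
with requests.get(URL, stream=True, timeout=(10, 600)) as resp:
    first_byte_at = None
    last = start
    for chunk in resp.iter_content(chunk_size=None):
        now = time.monotonic()
        if first_byte_at is None:
            first_byte_at = now
            print(f"time to first byte: {first_byte_at - start:.1f}s")
        elif now - last > 30:
            print(f"stall: {now - last:.1f}s between chunks")
        last = now
```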

The team is aware of these hanging timeout issues, and they’ve been addressed in newer versions. Let me know how things go after the update.

@HungLePPlus @AnyKamisato can you also share your Cursor versions? Help > About Cursor > Copy. The same troubleshooting applies; make sure you’re on the latest version.

I am running the newest version of Cursor and have the exact same problem. It looks like some Claude service degradation, but it basically means burning tokens for no output. The Claude status dashboard shows no unresolved incidents, which is strange.

Version: 2.6.22
VSCode Version: 1.105.1
Commit: c6285feaba0ad62603f7c22e72f0a170dc8415a0
Date: 2026-03-27T15:59:31.561Z (4 days ago)
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Darwin arm64 25.3.0

Hey @HexadecimalHUN, thanks for chiming in with your version, that’s helpful. This rules out the outdated version as the only cause.

Can you share a couple of Request IDs from the stuck or slow requests? Chat context menu at the top right > Copy Request ID. That’ll let us trace what’s happening on the backend.

Also, which model are you seeing this with? You mentioned Claude, is it Sonnet 4 or something else?

The team is tracking this issue. Your report, and those request IDs, will help increase visibility and narrow down the root cause.

Let me know how things go, and drop those request IDs when you can.

@deanrie Sure, no problem.
Request ID: 9a1e6093-6460-43cb-b17c-51526f7c8410
I generally use the Opus 4.6 model. I am not sure whether the same issue happens with other models like Sonnet, but it looks like the Claude models are the ones affected by this service degradation.
My conversations today kept ending with “Taking longer than expected”, but for this specific request I did not pause the conversation, so you can trace it back. In many other cases I just paused, as it kept burning tokens; this has cost me around 4-5% of my overall Ultra usage, which is really heavy!


I am facing the same issue: “Taking longer than expected” for Opus 4.6. Even when it does start streaming, token generation is far too slow; it takes more than 5 minutes to generate around 200-300 tokens.
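For scale, those figures work out to well under one token per second. A back-of-the-envelope check, using the numbers reported above:

```python
# Rough throughput implied by the report: 200-300 tokens in 5+ minutes.
tokens = 250        # midpoint of the reported 200-300 tokens
seconds = 5 * 60    # "more than 5 mins", so this is an upper bound on speed
print(f"~{tokens / seconds:.2f} tokens/s")  # ~0.83 tokens/s at best
```

Healthy streaming is usually tens of tokens per second, so this is a severe slowdown.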


@HexadecimalHUN - thanks for the request ID and details, that really helps.

It looks like Opus 4.6 is under higher load right now, so requests can get stuck on “Taking longer than expected” and stream slowly. The team is aware and is monitoring it.

About the burned tokens, I get it, it sucks to lose 4 to 5% of Ultra usage on stuck requests. If you think usage was charged incorrectly, email support at [email protected] with the details, and the team can check.

@Prince - same situation. Please share the request ID (top right chat menu > Copy Request ID). It helps with backend debugging.

Let me know if anything changes.

@deanrie I can’t update right now; I’ll try when I have time.

Another freeze occurred: 8d6fe15a-580c-44b8-9e4c-1e6fc5a6993e (GPT-5.2 Extra High).
No new output for more than 7 minutes.

@deanrie
I see your point, but realistically I have no proof of that, as I paused those conversations and retried with a different model/mode, like Premium.
That is why I consider charging people for tokens on a promise scammy: the user basically has to trust the model provider that the request will be fulfilled, and has no real way to abort a request once it has been fired. I say scammy because system degradation issues happen so frequently that 1 out of 20 requests might land in this category, and we pay for them anyway. It is like ordering food from a delivery service where your order might arrive half eaten, or might never arrive at all, and you have no real way to complain, because realistically nobody records every request ID; that is unrealistic. We have been through this with support, and that was always the final verdict. I am not even sure this is legal in the EU.


Version: 2.6.22
VSCode Version: 1.105.1
Commit: c6285feaba0ad62603f7c22e72f0a170dc8415a0
Date: 2026-03-27T15:59:31.561Z
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Darwin arm64 24.6.0

Request ID: a1764c9f-c588-417b-ac4d-09db92eb9915

Request ID: 58e6624f-09b0-4c4a-a6d3-2b3ba6bc8250

I’m still stuck and facing the same issue. Please let me know when the services will return to normal. Thanks.

My version

Version: 2.6.22 (Universal)
VSCode Version: 1.105.1
Commit: c6285feaba0ad62603f7c22e72f0a170dc8415a0
Date: 2026-03-27T15:59:31.561Z
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Darwin arm64 24.6.0

Same issue.
Request ID: c5e04a65-0cb9-4fcc-a8fa-7133271e422e

Version: 2.6.22 (system setup)
VSCode Version: 1.105.1
Commit: c6285feaba0ad62603f7c22e72f0a170dc8415a0
Date: 2026-03-27T15:59:31.561Z
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.26200

I compared Opus 4.6 and GPT 5.4; it seems to be a model issue, not Cursor or the network. Streaming is very slow.

Situation update:

@auooru - I see the screenshots and the new request ID. GPT-5.2 Extra High requests were affected too. I still recommend updating to 2.6.x. Newer versions handle timeouts better, so stuck requests get cancelled faster instead of hanging for 7+ minutes. When you can, please update and tell me how it goes.

@Prince - thanks for the request ID. The Opus 4.6 issues were caused by higher load on Claude models. Anthropic fixed an incident on their side: Claude Status - Elevated timeouts on requests to Claude Opus 4.6 and Sonnet 4.6. It should be stable now. If you still see hangs, send a fresh request ID.

@AnyKamisato - thanks for the version. You’re on 2.6.22, so your version looks fine. For more debugging, we need the request ID from a stuck request right after it happens (menu icon in the top-right of the chat > Copy Request ID). Which model are you using?

Overall: the team is aware of slow and stuck requests. Part of it was load on the Anthropic API (Opus 4.6), and part was routing on our side. We’re tracking it. If you think usage was charged incorrectly for stuck requests, email [email protected] with details.