Cursor stuck on "Taking longer than expected..."

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I’ve run into this issue several times now: things get stuck on “Taking longer than expected.” Very often it really has just taken longer and then continued. However, I’ve also hit the scenario, like right now, where nothing happened at all afterwards.

My first question is: what exactly is happening here? I doubt that this is the LLM’s reasoning process, because the reasoning CoT logs in Cursor are normally provided by the reasoning model, and you can see the tokens involved in the reasoning process. So my assumption is that “Taking longer than expected” is an internal Cursor process where something is being fetched.

I’d appreciate some clarification on what is actually happening here.

First, there really MUST be a more detailed view of what is happening, because when you get stuck in an endless loop like this, you have no idea what is going on: whether anything is still happening in the background, whether it is stuck, or what the current state actually is.

Very often I’ve been stuck here for two or three minutes and then it suddenly continued. If I had clicked Stop at that point, I would never have known that it was still worth waiting. So this really should be explained in much more detail.

Second, what is actually happening in cases like this? Is it possible that this is sometimes a bug and it gets stuck indefinitely? If so, that would mean the user could end up waiting forever.

Over the past few weeks, I have encountered this scenario at least five times where progress simply stopped and I had to press Stop because it took longer than five minutes.

After that, I had to write that it should continue exactly at the same point where it had stopped previously, and then it continued.

Because of this, I strongly assume that whatever is happening in the background, there is no proper retry mechanism or error handling in place here.

I would ask for a more detailed indication of what is actually happening. A message like “taking longer than expected” does not help anyone at the end of the day.

Please explain what exactly is happening here, and the user in the IDE MUST be given more detailed information about the process and what is currently happening.

I assume this is a downstream consequence of “Planning Next Moves.” If something is not working as expected there, then the follow-up message is probably “Taking longer than expected.”

That means Cursor is internally trying to do something unsuccessfully, and I end up in an infinite loop that either has no retry mechanism or no proper error handling.

The question I keep asking myself is what this means in terms of cost. Let’s say I’m using a high-reasoning model, I’m already at 60% of the context window, and this error has now happened to me four times. Four times, I have told it to start over in the internal reasoning process because we have a technical issue, and to continue exactly where it left off. And the whole time, I keep landing back at “Taking longer than expected.”

From my perspective, the question is: am I paying every single time? Probably yes, because I have now pressed execute again four times here. That means I am wasting real money because something is not working internally in Cursor.

Based on other posts I have already shared, this is really adding up by now. Because of Cursor’s technical issues, especially with high-reasoning models, a lot of money is being wasted.

I would like to ask whether, based on the posts I have sent, a credit boost or something similar might be possible for me. I am actively reporting bugs here, and beyond the things I have posted, I feel I have already wasted hundreds of euros on these issues.

Is there any possibility of a credit boost?

Request ID:
16a46687-6430-440c-aa5c-48e82cae9cfe

Steps to Reproduce

see description

Screenshots / Screen Recordings

Operating System

Linux

Version Information

Version: 2.6.20
VSCode Version: 1.105.1
Commit: b29eb4ee5f9f6d1cb2afbc09070198d3ea6ad760
Date: 2026-03-17T01:50:02.404Z
Build Type: Stable
Release Track: Early Access
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Linux x64 6.8.0-90-generic

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor


Hey, thanks for the detailed report.

This is a known issue we’re actively tracking. In short, the “Taking longer than expected” message shows up when the client doesn’t get a server response within the timeout (around 15 seconds). There can be different causes, from network issues to server delays.
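As a rough illustration of the behavior described above (an assumption about the general pattern, not Cursor’s actual code): the request keeps running, and a soft timeout only decides when the fallback status message is shown.

```typescript
// Sketch (assumption, not Cursor's implementation): race a request against
// a soft timeout. When the timeout fires first, surface a status message,
// but let the request itself keep running to completion.
const SOFT_TIMEOUT_MS = 15_000; // the ~15 s threshold mentioned above

type Status = "ok" | "taking-longer-than-expected";

async function withSoftTimeout<T>(
  request: Promise<T>,
  onSlow: () => void,
  timeoutMs: number = SOFT_TIMEOUT_MS,
): Promise<T> {
  // Fire a one-shot "this is slow" notification if the request outlives
  // the timeout; the request is not cancelled either way.
  const timer = setTimeout(onSlow, timeoutMs);
  try {
    return await request;
  } finally {
    clearTimeout(timer);
  }
}

// Usage sketch: a fake 50 ms "server call" with a 10 ms soft timeout
// triggers the slow-path message, but the result still arrives.
async function demo(): Promise<{ status: Status; result: string }> {
  let status: Status = "ok";
  const result = await withSoftTimeout(
    new Promise<string>((resolve) => setTimeout(() => resolve("response"), 50)),
    () => {
      status = "taking-longer-than-expected";
    },
    10,
  );
  return { status, result };
}
```

This would explain why waiting past the message sometimes works: the message is only a UI hint, not a failure.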

A few things to try:

  1. Network diagnostics: Cursor Settings > Network > Run Diagnostics, then send a screenshot of the results.
  2. Disable HTTP/2: in settings.json add "cursor.general.disableHttp2": true, then restart Cursor.
  3. Re-index: Cursor Settings > Indexing > Resync Index. This fully fixed it for some users.
  4. Empty folder test: File > Open Folder > empty directory, start a new chat, type “hello”. If that works, the issue might be tied to a specific project or index.
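For step 2, the resulting settings.json would look something like this (only the `cursor.general.disableHttp2` key comes from the step above; the comment is mine):

```json
{
  // settings.json (VS Code-style JSONC, so comments are allowed).
  // Forces requests over HTTP/1.1, which works around some proxies
  // and network setups that mishandle HTTP/2.
  "cursor.general.disableHttp2": true
}
```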

About pricing and credit boosts, we can’t handle billing on the forum. Please contact support directly at [email protected]

Related thread with more community fixes: Taking longer than expected

Let me know how it goes with the steps above.


@deanrie

I’ve now had this problem twice in a new context window, when I was at around 60% or more. It then keeps starting over and gets stuck in an endless loop.

This only seems to happen with the GPT 5.4 model. If I switch to Sonnet or Opus, it suddenly works.

None of the solutions above worked, so I strongly assume these are internal Cursor issues.

I’ve been running into the same “Taking longer than expected…” issue over the past few days, especially during longer tasks.

After some troubleshooting, I noticed it seems related to network stability. When I switch to a less stable network environment, the issue appears much more frequently and gets stuck at that message. Once I switch back to a more stable network, the problem stops occurring (at least for now).

So it might not be purely a Cursor-side issue, but rather how it handles unstable or fluctuating network conditions.

Suggestion for the team:
It would be really helpful if Cursor could improve resilience to network instability — for example:

  • Better retry/reconnect mechanisms when the connection drops or degrades

  • More informative status messages (e.g., distinguishing between server delay vs. network issues)

  • Possibly a timeout recovery or resume capability for long-running tasks

Hope this helps others narrow down the cause.


The fact that the issue only reproduces on GPT 5.4, while everything works on Sonnet/Opus, is a really helpful detail.

For now, if GPT 5.4 is critical for your task, try starting with a smaller context and splitting the task into parts so you don’t go past 60%+ of the context window. It’s not a real fix, but as a workaround it might help.

About billing and compensation, unfortunately that can only be handled via [email protected]. We can’t help with that on the forum.

Let me know if you get any new details or a request ID from the next freeze.

This happens to me on all of the models I’ve tried. It doesn’t seem to be affected by using slow or fast models. I don’t really have any other networking issues that I know of that would cause a connection interruption.

I’ve also used other agents, like Copilot in VS Code and the one built into the JetBrains products, with no issues.

I’ve tried it with a very large legacy Java app and a smaller Java app, but I’ve also had the same problems using a repo with maybe 100 scripts in it.

I’ve had the same issues in a long-running chat and newly opened ones.

I am having the same symptom that @jjeff reported on all models that I have tried.

Hey @jjeff @WenningQiu

Yep, this is a known issue. The team is aware and tracking it. We recently shipped a fix for one of the main cases of this bug.

Can you share a couple things so I can check more precisely:

  1. Cursor version: Menu > About Cursor > Copy
  2. OS
  3. Request ID from the stuck chat: top-right chat icon > Copy Request ID

Also please make sure you’re on the latest Cursor version. If an update is available, try updating and see if the issue still happens.

Let me know how it goes after the update.

Thanks Dean, below is the information you requested. The request has been hanging for a few hours.

Windows 11 Enterprise

4a808582-175d-4c4b-bda7-5f3d0a6ff6fa

I guess my Cursor might have been stuck in a bad state from which it could not recover; it started to work again after I restarted my machine. (I probably should have tried restarting Cursor app before restarting my laptop.)


Hello Team Cursor

I’m experiencing the same “taking longer than expected” error with the Opus 4.6 Thinking Max model. Please help.

@WenningQiu Thanks for the info and the Request ID. A restart is a solid workaround in cases where Cursor gets stuck. If it starts happening regularly, let me know and share a new Request ID.

@july_smith For debugging, we’ll need a bit more info:

  1. Cursor version: Menu > About Cursor > Copy
  2. OS
  3. Request ID from the stuck chat: top-right chat icon > Copy Request ID

Also make sure you’re on the latest Cursor version. If an update is available, update and check if the issue still happens.

I’m noticing this occurs a lot more now as well, although I’ve only been using the Composer 2 model in the last few weeks.

The same problem occurred for me too. It happens very frequently.

Hey @WillB @Dunky-Z,

This is a known issue and the team is aware. To look into your specific cases, please share:

  1. Cursor version: Menu > About Cursor > Copy
  2. OS
  3. Request ID from the stuck chat: top-right chat icon > Copy Request ID

Also make sure you’re on the latest version of Cursor. If there’s an update available, please update and check if the issue still happens.

@WillB, the fact that this started specifically on Composer 2 is a helpful detail. Please try switching temporarily to a different model, for example claude-4.6-sonnet, and see if you can reproduce it there.

Let us know how it goes.

Having the same issue with Cursor in Auto mode as well as with Composer 2. It gets stuck on the initial submit with “taking longer than expected.” Dropping the info below, maybe it helps:

Version:
Version: 3.0.16 (user setup)
VSCode Version: 1.105.1
Commit: 475871d112608994deb2e3065dfb7c6b0baa0c50
Date: 2026-04-09T05:33:51.767Z
Layout: editor
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.26200

Stuck chat: da002a38-5774-44ec-9729-7a2059712a32
