Plan Mode → Build consistently kills connection (Reconnecting loop) since latest update

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Since the latest Cursor update, Plan Mode works up to generating the plan, but clicking “Build” reliably breaks the agent connection and the IDE enters an infinite “Reconnecting/Thinking” loop. Network Diagnostics then shows multiple services failing (API/Ping/Chat/Agents canceled, EverythingProvider timeouts, DNS/SSL pending). Extensions are NOT the cause (Extension Bisect confirms). This is a regression: it worked normally before the update on the same machine/network. The issue blocks a core feature (Plan → Build) and effectively makes Cursor unusable for my workflow.

Steps to Reproduce

1.	Open Cursor IDE
2.	Open any project (happens in my main repo; also occurs when trying to use Plan/Build generally)
3.	Open Chat and switch to Plan Mode
4.	Ask it to create a plan (plan generation completes successfully)
5.	Click Build
6.	Observe: connection collapses → “Reconnecting” / “Thinking” loop; agent/chat features stop working; sometimes “Extension host terminated unexpectedly…”

Expected Behavior

After clicking Build, Cursor should execute the generated plan normally (agent runs, tools work, connection remains stable).

Operating System

Windows 10/11

Version Information

Version: 2.5.25 (system setup)
VSCode Version: 1.105.1
Commit: 7150844152b426ed50d2b68dd6b33b5c5beb73c0
Date: 2026-02-24T07:17:49.417Z
Build Type: Stable
Release Track: Default
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.20348

For AI issues: which model did you use?

Composer 1.5

Additional Information

Sometimes it just gets stuck in the reconnecting loop without all agent features stopping.

Does this stop you from using Cursor

Yes - Cursor is unusable

Same issue — always the loop, always reconnecting. It cost me 200+ requests and did nothing.


Hey, thanks for the detailed report. I see you already went through extension bisect, that’s great and it narrows things down.

A couple things to check:

  1. Try disabling HTTP/2: open Settings (Ctrl + ,) → search for HTTP/2 → enable “Disable HTTP/2”. Restart Cursor. This often helps with connection-drop issues.

  2. If that doesn’t help, send the Request ID from the next failed request (Chat context menu in the top right → Copy Request ID). This helps us track what exactly is dropping on the server.

  3. Are you using a VPN or a corporate proxy?

@ttys3 same question about HTTP/2 and VPN.

Let us know how it goes with HTTP/2.


@deanrie Yes, I tried disabling HTTP/2 and using HTTP/1.1; that does not help.

I didn’t record the Request ID at the time. It resulted in a prompt indicating that the subagent had failed after 10 attempts, and that process took a very long time (no Request ID is shown until it reports the failure; if you stop it before then, none appears), consuming a significant number of my Cursor requests. (I’m a legacy count-based-pricing user.) About 10 hours ago, I lost approximately 200+ Cursor requests — almost half of my monthly budget.

I also tried Cursor’s network detection feature, and the results indicated that my connection was fine, so it’s unlikely the issue is due to my connection.

I’m on macOS 26, and I’ve tested both the latest stable version (macOS arm64 Cursor 2.5.x) and the nightly (2.6.0-pre.33); I’m experiencing the same problem in both.


Update: @deanrie It failed again with “The connection failed 10 times. Please check your network connection and try again.” Request ID: 81649520-9891-4c74-b140-5ef89ef16ad3 (connection failed repeatedly).

My problem is resolved. I have to say, both the stable and nightly Cursor versions were actually stable all along.

The cause was a local process on my machine that controls network connections: it had a bug and was constantly crashing and restarting, leading to continuous connects, disconnects, and reconnects.


Update: Disabling HTTP/2 fixed the issue for me.

I enabled Cursor → Settings → General → “Disable HTTP/2” (forces HTTP/1.1 for requests), then fully restarted Cursor. After restart, Plan Mode → Build works normally again (no more reconnecting/thinking loop).

Environment details:

  • Cursor running on a Windows Server

  • Accessed via RDP from a MacBook

  • Remote access is secured via Tailscale (private mesh VPN) so the server isn’t exposed directly to the public internet

This looks like an HTTP/2-related connectivity regression in my setup, and forcing HTTP/1.1 has been a reliable workaround so far.

I didn’t change anything else (same network/setup); the issue started after the update.

I’ll keep HTTP/2 disabled for now and will monitor. If the reconnecting/Build issue happens again, I’ll post a new Request ID and logs.

Update (still broken): Disabling HTTP/2 helped only temporarily (~1 hour). The issue returned, and Plan Mode → Build reliably triggers a full reconnect loop again.

Request ID: bf83df2d-660c-4546-822a-b588a3205e57

Time: 2026-02-26T14:08:21Z (from diagnostics log)

Network Diagnostics: FAILED: API, Ping, Chat, Agent, Authentication UI, Cursor Tab, Agent Endpoint, Codebase Indexing.

Many endpoints fail with Error: Canceled after ~2.2s (authenticator.cursor.sh, api3.cursor.sh, agent.api5.cursor.sh, repo42.cursor.sh).

Marketplace + Authentication (prod.authentication.cursor.sh) are Success.

Also seeing “Extension host terminated unexpectedly” again. Extensions bisect previously confirmed not extension-related.

Please investigate using the Request ID (looks like DNS/SSL/stream cancellation rather than HTTP/2 only).

FAILED (8): API, Ping, Chat, Agent, Authentication UI, Cursor Tab, Agent Endpoint, Codebase Indexing

DNS: Running

SSL: Running
Logs:
[2026-02-26T14:08:21.396Z] Start

API: Error: Canceled
Logs:
[2026-02-26T14:08:21.398Z] Start
[2026-02-26T14:08:23.626Z] Error: Canceled: Canceled

Ping: Error: Canceled
Logs:
[2026-02-26T14:08:21.399Z] Sending ping 1
[2026-02-26T14:08:23.626Z] Error: Canceled: Canceled

Chat: Error: Canceled
Logs:
[2026-02-26T14:08:21.400Z] Starting streamSSE
[2026-02-26T14:08:23.627Z] Error: Canceled: Canceled

Agent: Error: Canceled
Logs:
[2026-02-26T14:08:21.400Z] Starting stream
[2026-02-26T14:08:21.400Z] Pushing first message
[2026-02-26T14:08:23.627Z] Error: Canceled: Canceled

Marketplace: Success
Logs:
[2026-02-26T14:08:21.393Z] Host: marketplace.cursorapi.com
[2026-02-26T14:08:21.555Z] Response in 162ms
[2026-02-26T14:08:21.555Z] Response: 200
[2026-02-26T14:08:21.555Z] Response Type: cors
[2026-02-26T14:08:21.555Z] Server: null
[2026-02-26T14:08:21.555Z] Result: OK in 162ms

Authentication: Success
Logs:
[2026-02-26T14:08:21.394Z] Host: prod.authentication.cursor.sh
[2026-02-26T14:08:21.777Z] Response: 200 in 383ms

Authentication UI: Error: Canceled
Logs:
[2026-02-26T14:08:21.396Z] DNS lookup: authenticator.cursor.sh
[2026-02-26T14:08:23.606Z] Error: Canceled: Canceled

Cursor Tab: Error: Canceled
Logs:
[2026-02-26T14:08:21.396Z] DNS lookup: api3.cursor.sh
[2026-02-26T14:08:23.607Z] Error: Canceled: Canceled

Agent Endpoint: Error: Canceled
Logs:
[2026-02-26T14:08:21.396Z] DNS lookup: agent.api5.cursor.sh
[2026-02-26T14:08:23.608Z] Error: Canceled: Canceled

Codebase Indexing: Error: Canceled
Logs:
[2026-02-26T14:08:21.396Z] DNS lookup: repo42.cursor.sh
[2026-02-26T14:08:23.610Z] Error: Canceled: Canceled

Downloads: Success
Logs:
[2026-02-26T14:08:21.395Z] Host: downloads.cursor.com
[2026-02-26T14:08:21.766Z] Response: 403 in 371ms

CDN: Success
Logs:
[2026-02-26T14:08:21.396Z] Host: cursor-cdn.com
[2026-02-26T14:08:21.662Z] Response: 404 in 266ms

Hey, thanks for the follow-up and the Request ID. The HTTP/2 fix being temporary makes sense given what I see in your diagnostics.

There’s a clear pattern in your network logs: domains like marketplace.cursorapi.com, prod.authentication.cursor.sh, downloads.cursor.com all work fine. But every .cursor.sh subdomain (authenticator.cursor.sh, api3.cursor.sh, agent.api5.cursor.sh, repo42.cursor.sh) fails with “Error: Canceled” at exactly ~2.2 seconds. This strongly suggests something in the network path is selectively killing connections to those specific domains.
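To make that concrete: the error timestamps in your log all land within about 20 ms of each other, which looks like one shared abort signal tearing everything down at once rather than independent per-request timeouts. A quick sketch that computes the deltas (timestamps copied straight from the diagnostics you pasted):

```python
from datetime import datetime

# Start / error timestamps copied from the pasted Network Diagnostics log.
events = {
    "API":               ("2026-02-26T14:08:21.398Z", "2026-02-26T14:08:23.626Z"),
    "Ping":              ("2026-02-26T14:08:21.399Z", "2026-02-26T14:08:23.626Z"),
    "Chat":              ("2026-02-26T14:08:21.400Z", "2026-02-26T14:08:23.627Z"),
    "Agent":             ("2026-02-26T14:08:21.400Z", "2026-02-26T14:08:23.627Z"),
    "Authentication UI": ("2026-02-26T14:08:21.396Z", "2026-02-26T14:08:23.606Z"),
}

def parse(ts: str) -> datetime:
    """Parse the ISO-8601 UTC timestamps used by the diagnostics log."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%S.%fZ")

for name, (start, error) in events.items():
    delta = (parse(error) - parse(start)).total_seconds()
    print(f"{name:18s} canceled after {delta:.3f}s")
```

Every delta comes out between roughly 2.21 and 2.23 seconds, and the error timestamps themselves are nearly identical — which is why this reads as a single connection being killed, not five separate timeouts.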

Given your setup (Windows Server + RDP + Tailscale), Tailscale is the prime suspect. It routes traffic through its mesh network and could be interfering with long-lived connections to those subdomains.

Can you try two things:

  1. Test without Tailscale - temporarily disconnect Tailscale on the Windows Server, access Cursor directly (not through RDP), and see if Plan → Build works. If direct access isn’t possible, try at least disabling Tailscale’s network routing while keeping RDP alive through a different path. This is the most important test.

  2. DNS check - from the Windows Server, run in PowerShell:

    nslookup api3.cursor.sh
    nslookup agent.api5.cursor.sh
    nslookup authenticator.cursor.sh
    

    Check if DNS resolves correctly and whether Tailscale’s MagicDNS is interfering.

Also worth checking: does Tailscale have any ACL rules or exit node configuration that could affect outbound HTTPS to those subdomains?

Let me know the results of the Tailscale test - that’ll tell us exactly where to dig next.

Hi Dean, here are the results:

  1. Test without Tailscale:

I fully disabled/quit Tailscale on the Windows Server and tested again. Plan mode can generate the plan, but as soon as I click Build, it still falls into the same “Reconnecting” loop. Shortly after, Cursor reports that extensions were terminated unexpectedly, and Network Diagnostics shows the same failures as before.

So this reproduces even with Tailscale completely off, which strongly suggests it’s not caused by Tailscale (routing/DNS/ACL/exit node).

  2.	DNS check (nslookup):

I ran nslookup for the hostnames you listed (api3.cursor.sh, agent.api5.cursor.sh, authenticator.cursor.sh, repo42.cursor.sh). All of them resolve successfully from the server.

Additional symptom:

If Cursor stays in the reconnecting state for a while, the window can crash with:

The window terminated unexpectedly … reason: OOM (code -536870904).

If I fully quit and restart Cursor, connections come back and Cursor works again temporarily until I click “Build” in Plan Mode, which triggers the reconnect loop/crash again.

Brother, I’m facing the same issue, but nothing works. I tried rolling back the version and it doesn’t help.


Good thing we ruled out Tailscale, that’s useful info.

The pattern is very specific: every .cursor.sh subdomain fails with “Error: Canceled” after about 2.2 seconds, while marketplace.cursorapi.com, prod.authentication.cursor.sh, and downloads.cursor.com work fine. A 2.2 s limit looks suspicious. It feels like something in the network stack is cutting connections specifically to those hosts.

A few more checks:

  1. Check Windows Firewall / Defender: Since this is Windows Server, make sure there aren’t firewall rules or Windows Defender Firewall settings that selectively block or limit outbound connections to *.cursor.sh. Run this in PowerShell:

    Get-NetFirewallRule -Enabled True -Direction Outbound -Action Block
    

    to list any enabled outbound block rules.

  2. Try different DNS: Even if nslookup works, force DNS to 8.8.8.8 or 1.1.1.1 in your network adapter settings and test again. Sometimes DNS resolves, but the route you get is different.

  3. Traceroute: Run this in PowerShell to see if the “failing” and “working” hosts take different paths:

    tracert api3.cursor.sh
    tracert authenticator.cursor.sh
    tracert marketplace.cursorapi.com
    
  4. Any security software: Is there antivirus, EDR/endpoint protection, or anything that inspects or filters HTTPS traffic on this server?

The OOM crash (“window terminated unexpectedly, reason: OOM”) is likely a side effect. The reconnect loop keeps retrying and uses more and more memory until it hits the Windows renderer limit around 4 GB. Fixing the network issue should stop the OOM too.

Hey, share a few details so we can help:

  • Cursor version (Help → About Cursor → Copy)
  • OS
  • Are you using a VPN or proxy?
  • Run Network Diagnostics (Cursor Settings → Network → Run Diagnostics) and paste the results
  • The Request ID from the failed Build attempt (chat context menu in the top right → Copy Request ID)

Also, when you say you rolled back the version, which version did you roll back to?

Hi Dean, I ran all the requested checks. Summary below:

Check 1 (Windows Firewall):

  • No enabled outbound “Block” rules found.

    (Get-NetFirewallRule -Enabled True -Direction Outbound -Action Block returned no objects.)

  • The only explicit “Block” rules present are inbound hardening rules (NetBIOS/SMB ports 137/138/139/445).

    So I don’t see Windows Firewall intentionally blocking outbound access to *.cursor.sh.

Check 2 (DNS resolver change + single Cursor retry):

  • Switched Ethernet DNS to Cloudflare (1.1.1.1 / 1.0.0.1) and confirmed via Get-DnsClientServerAddress.

  • nslookup for the listed hosts resolved normally.

  • Retested Cursor once: behavior is unchanged — Plan → Build still triggers the reconnect loop and Network Diagnostics fails with the same “Canceled” + “Timeout waiting for EverythingProvider” pattern.

    So changing DNS resolvers does not affect the issue.

Check 3 (tracert):

  • tracert api3.cursor.sh completes successfully to destination (Cloudflare).

  • tracert marketplace.cursorapi.com completes successfully to destination (Cloudflare).

  • tracert agent.api5.cursor.sh reaches Twelve99 transit and then subsequent hops time out (* * *).

    Note: Test-NetConnection agent.api5.cursor.sh -Port 443 succeeds (TcpTestSucceeded=True), so the traceroute timeouts may indicate ICMP/traceroute filtering after that point, not necessarily TCP/443 being blocked.

Check 4 (Security software):

  • Only Windows Defender appears enabled:

    AMServiceEnabled=True, AntispywareEnabled=True, AntivirusEnabled=True, RealTimeProtectionEnabled=True

    No additional AV/EDR/security software installed.

Issue still reproducible: Plan mode generates the plan successfully, but clicking Build triggers “Reconnecting”, then extensions may terminate unexpectedly, and diagnostics shows the same failures as before.

Great that you checked all of that. At this point, the usual network causes are basically ruled out.

The pattern is unusual: TCP/443 connects fine, DNS resolves, the firewall isn’t blocking, but the connection drops at about 2.2 s every time. At the same time, some Cursor domains (marketplace.cursorapi.com, prod.authentication.cursor.sh, downloads.cursor.com) work fine, while others (api3.cursor.sh, authenticator.cursor.sh) fail.

This points to a Cursor-specific issue, not a general network issue. Check these in order:

  1. Proxy settings inside Cursor (check this first)

A timeout around 2.2 s is typical when the app tries to connect to a proxy that doesn’t exist. If a proxy was set in Cursor by accident:

  • Press Ctrl+, then search for proxy
  • Check settings.json:
    "http.proxy": "",              // should be EMPTY
    "http.proxySupport": "off",    // or "on" for system proxy
    "http.noProxy": []             // check the list
    
  • If http.proxy contains something like http://127.0.0.1:7890 or any other address, remove it or clear it
  • Restart Cursor and test again
  2. curl test (to isolate OS vs Cursor)

    curl.exe -v --connect-timeout 10 https://api3.cursor.sh
    curl.exe -v --connect-timeout 10 https://marketplace.cursorapi.com
    
  • If curl also fails on api3.cursor.sh, it’s likely a Windows or SChannel issue, go to steps 3 and 4
  • If curl works, the issue is only in Cursor
  3. SChannel and TLS events (if curl also fails)

    Get-WinEvent -LogName "System" -MaxEvents 50 | Where-Object { $_.ProviderName -eq "Schannel" }
    

Also check Applications and Services Logs > Microsoft > Windows > CAPI2 > Operational for TLS or certificate errors.

  4. Check TLS version

    [Net.ServicePointManager]::SecurityProtocol
    

You should see Tls12, Tls13. Also check any GPO settings for cipher suites or TLS restrictions.

The curl results and the proxy check in Cursor will show the right direction. Start with step 1, it’s the most common cause for this kind of pattern.
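One more place a stray proxy can hide, beyond settings.json: Electron-based apps also pick up the standard proxy environment variables, so it’s worth confirming none are set on the server. A small sketch (the helper name `detect_proxy_env` is just for illustration; the variable names are the conventional ones):

```python
import os

# Conventional proxy-related environment variables honored by many
# HTTP stacks, including Electron/Chromium-based apps.
PROXY_VARS = ("HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY", "NO_PROXY")

def detect_proxy_env(env=None) -> dict:
    """Return any proxy variables that are set (checks both upper and lower case)."""
    env = os.environ if env is None else env
    found = {}
    for name in PROXY_VARS:
        for candidate in (name, name.lower()):
            if candidate in env and env[candidate]:
                found[name] = env[candidate]
    return found

if __name__ == "__main__":
    hits = detect_proxy_env()
    if hits:
        for name, value in hits.items():
            print(f"{name} is set: {value}")
    else:
        print("No proxy environment variables are set")
```

If anything shows up here that you didn’t configure deliberately, clear it (System Properties → Environment Variables on Windows), restart Cursor, and retest.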

I followed your latest steps:

1) Proxy settings inside Cursor:

Checked VS Code/Cursor settings.json (the one Cursor uses). There are no proxy settings present (no http.proxy, http.proxySupport, http.noProxy entries).

2) curl test (OS vs Cursor):

Both curl commands completed successfully, so OS/SChannel connectivity looks fine and the issue appears to be Cursor-only.

Version: 2.5.26 (user setup)
VSCode Version: 1.105.1
Commit: 7d96c2a03bb088ad367615e9da1a3fe20fbbc6a0
Date: 2026-02-26T04:57:56.825Z
Build Type: Stable
Release Track: Default
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.19045

Windows 10, no proxy, no VPN; only a corporate network, but they don’t block anything.

Cursor Network Diagnostic Results:

FAILED (4): Authentication UI, Cursor Tab, Agent Endpoint, Codebase Indexing

DNS: Running

SSL: Running

API: Success
Logs:
[2026-03-02T07:36:46.644Z] Start
[2026-03-02T07:36:47.453Z] Result: true

Ping: Success
Logs:
[2026-03-02T07:36:46.645Z] Sending ping 1
[2026-03-02T07:36:47.402Z] Response: ‘ping’ in 757ms
[2026-03-02T07:36:47.402Z] Sending ping 2
[2026-03-02T07:36:47.952Z] Response: ‘ping’ in 550ms
[2026-03-02T07:36:47.952Z] Sending ping 3
[2026-03-02T07:36:48.462Z] Response: ‘ping’ in 510ms
[2026-03-02T07:36:48.462Z] Sending ping 4
[2026-03-02T07:36:48.984Z] Response: ‘ping’ in 522ms
[2026-03-02T07:36:48.984Z] Sending ping 5
[2026-03-02T07:36:49.507Z] Response: ‘ping’ in 523ms
[2026-03-02T07:36:49.507Z] Result: true

Chat: Success
Logs:
[2026-03-02T07:36:46.647Z] Starting streamSSE
[2026-03-02T07:36:47.404Z] Response: ‘foo’ in 754ms
[2026-03-02T07:36:48.389Z] Response: ‘foo’ in 985ms
[2026-03-02T07:36:49.390Z] Response: ‘foo’ in 1001ms
[2026-03-02T07:36:50.439Z] Response: ‘foo’ in 1049ms
[2026-03-02T07:36:51.439Z] Response: ‘foo’ in 1000ms
[2026-03-02T07:36:52.441Z] Result: true

Agent: Success
Logs:
[2026-03-02T07:36:46.649Z] Starting stream
[2026-03-02T07:36:46.650Z] Pushing first message
[2026-03-02T07:36:47.505Z] Response: ‘foo’ in 855ms
[2026-03-02T07:36:48.006Z] Pushing next message
[2026-03-02T07:36:48.614Z] Response: ‘foo’ in 1109ms
[2026-03-02T07:36:49.114Z] Pushing next message
[2026-03-02T07:36:49.707Z] Response: ‘foo’ in 1093ms
[2026-03-02T07:36:50.208Z] Pushing next message
[2026-03-02T07:36:50.816Z] Response: ‘foo’ in 1109ms
[2026-03-02T07:36:51.321Z] Pushing next message
[2026-03-02T07:36:51.916Z] Response: ‘foo’ in 1100ms
[2026-03-02T07:36:51.916Z] Result: true

Marketplace: Success
Logs:
[2026-03-02T07:36:46.635Z] Host:
[2026-03-02T07:36:46.920Z] Response in 284ms
[2026-03-02T07:36:46.920Z] Response: 200 OK
[2026-03-02T07:36:46.920Z] Response Type: cors
[2026-03-02T07:36:46.920Z] Server: null
[2026-03-02T07:36:46.920Z] Result: OK in 285ms

Authentication: Running
Logs:
[2026-03-02T07:36:46.636Z] Host:

Authentication UI: Error: Timeout waiting for EverythingProvider
Logs:
[2026-03-02T07:36:51.684Z] Error: Error: Timeout waiting for EverythingProvider

Cursor Tab: Error: Timeout waiting for EverythingProvider
Logs:
[2026-03-02T07:36:51.686Z] Error: Error: Timeout waiting for EverythingProvider

Agent Endpoint: Error: Timeout waiting for EverythingProvider
Logs:
[2026-03-02T07:36:51.687Z] Error: Error: Timeout waiting for EverythingProvider

Codebase Indexing: Error: Timeout waiting for EverythingProvider
Logs:
[2026-03-02T07:36:51.688Z] Error: Error: Timeout waiting for EverythingProvider

Downloads: Success
Logs:
[2026-03-02T07:36:46.641Z] Host:
[2026-03-02T07:36:47.014Z] Response: 403 Forbidden in 373ms

CDN: Success
Logs:
[2026-03-02T07:36:46.643Z] Host:
[2026-03-02T07:36:46.952Z] Response: 404 Not Found in 309ms

Request ID: 971d7354-bc9b-4d2d-9905-63976679e407

Error payload:

{“error”:“ERROR_CUSTOM”,“details”:{“title”:“Agent Execution Timed Out”,“detail”:“The agent execution provider did not respond within 30 seconds. This may indicate the extension host is not running or is unresponsive.”,“isRetryable”:false,“shouldShowImmediateError”:true,“additionalInfo”:{},“buttons”:[{“label”:“Reload Window”,“reloadWindow”:{}}],“planChoices”:}}

Agent Execution Timed Out [deadline_exceeded]
ConnectError: [deadline_exceeded] Agent Execution Timed Out
at :43352:10263
at async O_v.createExecInstance (:43352:8309)
at async $Jb (:465:569778)
at async $H.execute (:32115:6603)
at async dBl.execute (:43352:1196)
at async Lpy.execute (:44397:12825)
at async zOl.buildComposerRequestContext (:44408:3793)
at async zOl.streamFromAgentBackend (:44408:5384)
at async zOl.getAgentStreamResponse (:44408:9837)
at async yLe.submitChatMaybeAbortCurrent (:32182:15752)
at async Gs (:43415:4781)

I also ran all the network checks and fully reinstalled Cursor. I checked the proxy settings; there is nothing there. The main issue is the “Timeout waiting for EverythingProvider” timeout. I tried running Cursor with extensions disabled, but that doesn’t work either. Looks like a Cursor bug.


Me too!
Rolling back to 2.2.20 solved the problem.

Version: 2.2.20 (system setup)
VSCode Version: 1.105.1
Commit: b3573281c4775bfc6bba466bf6563d3d498d1070
Date: 2025-12-12T06:29:26.017Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Windows_NT x64 10.0.26100


On 2.2.44 I’m facing:

[2026-03-03T12:05:41.295Z] Starting streamSSE
[2026-03-03T12:05:44.719Z] Response: ‘foo’ in 3420ms
[2026-03-03T12:05:45.244Z] Response: ‘foo’ in 525ms
[2026-03-03T12:05:46.231Z] Response: ‘foo’ in 987ms
[2026-03-03T12:05:47.231Z] Response: ‘foo’ in 1000ms
[2026-03-03T12:05:48.231Z] Response: ‘foo’ in 1000ms
[2026-03-03T12:05:49.249Z] Result: Error: Streaming responses are being buffered by a proxy in your network environment

and

[2026-03-03T12:05:49.431Z] Result: Error: HTTP/1.1 SSE responses are being buffered by a proxy in your network environment


Any updates on this issue? I tried the new version, 2.6.11, but still no success.