“Connection failed.” A two-year paper trail of dawg-p00p

Your Analyst:

Me, an independent forensic examiner who reads logs for sport, hunts with a bow, keeps field-craft as a hobby, takes Chevy over Ford, rips through .raw and .E01 images like hot butter, and treats laziness as a terminable offense.

So I took it upon myself (once again, blame the counterespionage background) to investigate WTF is actually going on, and why nobody is picking up the dog :poop: :poop: in my yard.

So, working in Cursor, I started using Cursor to assess Cursor. I know, brilliant, right? I mean, if this redneck with a 4x4 pickup can do it, anyone can - or so you would think.

So I started capturing my own logs, analyzing and researching, and the results are, IMHO, very humiliating for the entire Cursor team.

Let’s get into the real meat of the matter, shall we?

Executive summary

My local network looks fine. DNS resolves fast, TLS is clean, HTTP/2 is negotiated, and the servers answer 200 OK. The failures arrive when the app tries to open HTTP/2 streams for AI responses, which are repeatedly refused with NGHTTP2_REFUSED_STREAM.

The same error and the same “check my VPN” banner have been reported by users since November 2023 (check my shoes) and continue through this week. This isn’t a new wrinkle. (Cursor - Community Forum)

Cursor’s own forum mods acknowledge NGHTTP2_REFUSED_STREAM as a known connection issue. Users frequently report that disabling HTTP/2 helps, but just as many say it doesn’t. (Cursor - Community Forum)

Third-party monitors show lots of incident churn in 2025 even when the official status is “operational.” One tracker has logged 100+ incidents since March 2025. Users also note outages with no matching status post. Bless their hearts. (IsDown)

What I examined

I examined my own dawg :poop: : raw app diagnostics and request logs from Cursor 2.0.69 on Windows 11. They show healthy DNS, clean TLS, HTTP/2 in use, and repeated stream failures to Cursor’s backend at api2.cursor.sh (Amazon-issued certs, multiple AWS IPs).

:poop: Samples:

  1. My own customer email to Cursor support listing a burst of failed Request IDs after “Downloaded v2.0 – newest version.”
  2. Public reports across Cursor’s forum (2023→2025) and related sources to establish the timeline and common failure patterns. Key posts are cited inline.
  3. My own logs and error reports spanning a couple of days.

My Dawg’s :poop: : (Evidence from my machine)

Version & environment

Version: 2.0.69 (user setup)  |  VSCode 1.99.3  |  Electron 37.7.0
OS: Windows_NT x64 10.0.26200

The app negotiates HTTP/2 to https://api2.cursor.sh with an Amazon RSA 2048 M03 certificate, and the health pings return 200 OK in ~80–95 ms. That’s not what “my internet is broken” looks like. (AKA: Someone else’s dog just pooped in my yard and Cursor is telling me it is my dawg’s :poop: .)

The proof is in the :poop: :

Immediately after the healthy checks, the AI request stream starts and dies with NGHTTP2_REFUSED_STREAM:

“Start → Sending ping 1 → Starting stream → Pushing first message → Error: … NGHTTP2_REFUSED_STREAM” (repeats).
Same pattern across different AWS endpoints and at different times.

Translation to plain English: the TCP and TLS handshakes succeed and the server even answers a regular HTTPS check with 200. The failure hits when Cursor opens a new HTTP/2 stream for the chat response. The server side says “nope” and refuses the stream. (Yep - Cursor dawg :poop: for certain.)

This error code is a generic HTTP/2 refusal seen when the remote peer is going away, exceeding limits, or otherwise declining additional streams. It’s widely documented in Node’s http2 and other client stacks. That doesn’t prove the exact root cause here, but it does show this class of failure is server-side or middlebox behavior, not a dead Wi-Fi card. (Stack Overflow)

Oh wait … there is more - the “Request ID flood”

My email to Cursor (yes, as a digital forensic examiner I document TF out of everything) shows OVER ten failing Request IDs in one burst immediately after updating to v2.0. That squares with the log spam above.

The history of dawg :poop: just piles up. (timeline)

Nov 24, 2023 — “Connection failed.” Thread opened; brief outage acknowledged by Cursor staff. Earliest forum record of the exact wording. (Cursor - Community Forum)

Dec 6, 2023 — “The connection failed… VPN?” User reports weeks of failures without VPN. Staff suggests model toggles. Pattern of “it’s my network” begins. (Cursor - Community Forum)

Mar 18, 2024 — “[SOLVED] Connection Failed. Try Again ERROR.” Multiple users; seems transient, no specific local fix identified. (Cursor - Community Forum)

Oct–Nov 2024 — Persistent “connection failed” reports; users try disabling HTTP/2; mixed results. (Cursor - Community Forum)

Nov 25, 2024 — “Connection failed 0.43.3.” Same banner, suspected outage. (Cursor - Community Forum)

Mar–Apr 2025 — Large, active threads after updates to 0.47.x; screenshots show the same dialog; sometimes HTTP/2 toggle helps, often not. (Cursor - Community Forum)

Apr 1, 2025 — Users paste logs with NGHTTP2_REFUSED_STREAM errors. (Cursor - Community Forum)

Jul–Aug 2025 — “Premature close / NGHTTP2_INTERNAL_ERROR,” “No incident on status page,” reliability concerns. (Cursor - Community Forum)

Oct 26, 2025 — Staff: “NGHTTP2_REFUSED_STREAM is a known connection issue.” Report ties failures to chat streams; toggling HTTP/2 does not always help. (Cursor - Community Forum)

Nov 3–7, 2025 — New reports on 2.0.54 and later; network diagnostics “green,” still failing, often when attaching context. (Cursor - Community Forum)

Aggregated status — A status monitor claims 107 incidents since March 2025 while official pages frequently show “operational.” User forum posts note incidents with no matching status entry. If that dog is the canary, it’s wheezing. (IsDown)

Bottom line: Public, recurring “Connection failed / HTTP/2 refused stream” reports date back at least 23 months and never really stopped. A never-ending flood of pooooooo.

Possible Diagnosis (:poop: analysis): Explain the symptoms

I’m not guessing at internal architecture, just connecting dots you can see:

Healthy base connectivity: DNS resolution to numerous AWS IPs in us-east-1, TLS fine, normal latency.

Failure on stream start: Repeated NGHTTP2_REFUSED_STREAM immediately after Starting stream → Pushing first message.

Not strictly client-side: Marketplace and other checks return 200 during the failures, so my host isn’t offline, and my router isn’t having a bad hair day.

HTTP/2 fragility acknowledged: Cursor staff repeatedly advise disabling HTTP/2; others report that it helps sometimes, not always. The product docs emphasize HTTP/2 for streaming AI. (Cursor - Community Forum)

Industry-wide, REFUSED_STREAM is consistent with a peer sending GOAWAY, hitting concurrency/flow-control limits, or a load balancer doing connection reuse in ways the upstream doesn’t love. Node/http2 issues discuss similar patterns. That’s a polite way of saying the failure sits somewhere between Cursor’s edge and its AI backend, not my living-room Wi-Fi. (GitHub)

“It’s my VPN.” Well, about that…

Note: This … really got me going. Bad customer relations, Cursor. You created this monster in me; now you have to live with me. Let’s goooooo

My case: no VPN, no proxy. The logs still show refused HTTP/2 streams while everything else is green.

Community history: many users state they aren’t on VPNs or corporate networks when the exact same dialog appears. The go-to advice is often “disable HTTP/2” or “check firewall,” but threads continue with failures after trying those steps. (Cursor - Community Forum)

Official posture: in late October 2025 a Cursor mod called NGHTTP2_REFUSED_STREAM a known issue and requested diagnostics, implying investigation on their side. (Cursor - Community Forum)

So telling home users to power-cycle the router may occasionally chase off a gremlin, but the pattern here is older and broader.

Let’s face it: for all the millions in funding and all the hyped propaganda, Cursor is not meeting expectations.

The receipts (selected log excerpts)

Protocol: h2 … Status: 200 … Result: true in 79ms … Error: … NGHTTP2_REFUSED_STREAM (multiple times within the same minute).
Same pattern across different IPs (3.234.0.182, 44.194.175.55, 3.230.94.158, etc.).
Marketplace endpoint continues to return 200 during failures.
My email’s Request ID burst arrived immediately after updating to v2.0.

How long has this been going on?

First confirmed forum report of this exact message: Nov 24, 2023. (Cursor - Community Forum)
Recurring through 2024: multiple threads in March, July, August, October, and November 2024. (Cursor - Community Forum)
All through 2025: large threads around March–April; HTTP/2 refusal logs posted by users; staff calls it a known issue in Oct 2025; fresh reports continue in November 2025. (Cursor - Community Forum)

Duration: at least 23 months of public, repeated occurrence.

What Cursor could do next (concrete, testable)

These are engineering work items inferred from the failure class and public reports, not speculation about their internal stack:

  1. Graceful HTTP/2 handling. Treat REFUSED_STREAM as retryable with exponential backoff and jitter, respect GOAWAY, and cap concurrent request streams per connection. This mirrors guidance from Node/http2 discussions. (GitHub)

  2. Edge configuration review. Validate ALB/NGINX/Envoy HTTP/2 settings: max concurrent streams, flow control windows, idle timeouts, connection reuse between upstreams, and whether connection draining is coordinated during deploys.

  3. Client toggles that actually toggle. Forum posts from 2024 mention HTTP/2 being used despite the “disable HTTP/2” setting. Ensure the toggle is global and survives reloads. (Cursor - Community Forum)

  4. Status transparency. If user-visible “connection failed” spikes aren’t reflected on the status page, explain the classification. One tracker has >100 incidents logged since March 2025; bridging that gap would help users trust the page. (IsDown)

  5. Better error surfacing. Map NGHTTP2_ classes to actionable messages instead of the generic “VPN” banner. This would save my team a heap of ticket back-and-forth.
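For item 2, here is a hypothetical nginx fragment naming the knobs in question. To be clear: I have no visibility into Cursor’s actual edge stack; the directives are real nginx, but the values and the upstream name are purely illustrative of what an edge review would audit:

```nginx
# Hypothetical edge settings for item 2 (illustrative values, not Cursor's):
http {
    # Streams a client may open per connection; too low and the edge
    # itself starts refusing streams under load.
    http2_max_concurrent_streams 128;

    upstream ai_backend {              # hypothetical upstream name
        server 10.0.0.10:8443;
        keepalive 32;                  # reused upstream connections
    }

    server {
        listen 443 ssl;
        http2 on;                      # nginx >= 1.25.1 directive syntax
        keepalive_timeout 75s;         # idle client connection lifetime

        location / {
            proxy_pass http://ai_backend;
        }
    }
}
```

The audit questions write themselves: do these limits survive a deploy, and does connection draining send GOAWAY before streams start bouncing?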
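Item 1 could look something like this sketch - my assumption of a reasonable policy, not Cursor’s actual client code. It uses full-jitter exponential backoff (a common pattern from the Node/http2 discussions cited above) and retries only the one code that guarantees the server never processed the request:

```typescript
// Sketch of item 1 (assumed policy, not Cursor's actual client code):
// full-jitter exponential backoff, retrying only NGHTTP2_REFUSED_STREAM,
// which per the HTTP/2 spec means the request was never processed.

function isRetryable(nghttp2Code: string): boolean {
  // REFUSED_STREAM is the one code that is always safe to retry blindly;
  // other codes may mean the request was partially processed.
  return nghttp2Code === "NGHTTP2_REFUSED_STREAM";
}

function backoffDelayMs(attempt: number, baseMs = 250, capMs = 8000): number {
  // Full jitter: pick uniformly in [0, min(cap, base * 2^attempt)]
  // so stampeding clients don't all retry on the same beat.
  const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.random() * ceiling;
}

// First retry waits at most 250 ms; attempt 2 caps at 1 s; later ones at 8 s.
console.log(isRetryable("NGHTTP2_REFUSED_STREAM"), backoffDelayMs(2) <= 1000);
// → true true
```

The point is that a refused stream should cost the user a few hundred quiet milliseconds, not a red “check your VPN” banner.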

Closing thoughts - the final smell

Between my logs and two years of public reports, the pattern is consistent: basic connectivity is fine; HTTP/2 AI streams are being refused somewhere past my last-mile network. Telling a home user with clean TLS to “check their VPN” is like telling a catfish to climb a tree. It ain’t the right tool for the job.

If Cursor wants sample material to reproduce with: I’ve already supplied a tight cluster of Request IDs and clock-matched logs. That should let them trace through their edge to the agent service.

Sources

My logs and diagnostics from Cursor 2.0.69, Windows 11.
My email to Cursor listing failed Request IDs after the v2.0 update.
Cursor community threads: 2023–2025 reports of “Connection failed,” HTTP/2-related errors, and staff acknowledgments. (Cursor - Community Forum)
Node/http2 references showing what NGHTTP2_REFUSED_STREAM typically indicates. (GitHub)
Status and outage aggregators indicating incident volume and disparities with “operational” status. (IsDown)


This is essentially my experience as well. The issue isn’t even consistent, but when it occurs, it makes the product completely unusable. I am not running a VPN. All my network tests are clean/solid.

Seemingly no real attempt by Cursor to solve the issue.


No comment regarding the actual issue, but really - one of the more enjoyable ChatGPT posts I’ve seen in a while :joy:.
