Renderer OOM when agent turn loads many file contexts + large writes

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

When the Cursor agent executes a turn that simultaneously:

  1. loads a large number of files as @context (19 files in my case),
  2. writes a large file to the chat state (149-line plan file), and
  3. queues a large Shell tool call (~60 lines of PowerShell) for rendering,

the renderer process freezes and then crashes with OOM code -536870904.

The crash sequence is three progressive “detected unresponsive” cycles before the
renderer exits:

05:19:51 [error] CodeWindow: detected unresponsive
05:20:06 [error] CodeWindow unresponsive samples: <Array.push inside updateComposerDataSetStore>
05:20:28 [error] CodeWindow: detected unresponsive
05:20:43 [error] CodeWindow unresponsive samples: <updateComposerBubbleSetStore>
05:20:55 [error] CodeWindow: detected unresponsive
05:21:13 [error] CodeWindow: renderer process gone (reason: oom, code: -536870904)

The stack symbol in the unresponsive samples is Array.push inside
updateComposerDataSetStore and updateComposerBubbleSetStore — the React
store that backs the chat composer UI. The renderer appears to exhaust the V8
heap while pushing many state entries (context files + file write diff + tool
call preview) into the composer store in a single turn.
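For illustration, the failure mode described above can be modeled as one synchronous batch push into an array-backed store. The names below (ComposerStore, pushEntries, ComposerEntry) are hypothetical stand-ins, not Cursor's actual internals:

```typescript
// Hypothetical model of the batch update path; names are illustrative,
// not Cursor's real API.
type ComposerEntry = { kind: "context" | "diff" | "tool"; payload: string };

class ComposerStore {
  entries: ComposerEntry[] = [];

  // Every entry lands in one synchronous loop: the renderer cannot paint
  // and V8 cannot collect intermediate garbage until the whole batch is
  // in, so peak heap is the sum of all payloads at once.
  pushEntries(batch: ComposerEntry[]): void {
    for (const entry of batch) this.entries.push(entry); // the Array.push in the stack trace
  }
}
```

With 19 context files plus a large diff and a tool-call preview queued in one turn, a single batch like this could plausibly push the renderer past the V8 heap limit within one tick.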

Aggravating factor observed: Two agent-loop windows were running concurrently
at the time (activeCount=2 in main.log PowerMainService wakelock entries),
which likely doubled renderer memory pressure.

This is distinct from the periodic-OOM issues (#153927, #152534). Those
appear to be continuous memory leak / OTEL exporter issues. This is a single
“spike” OOM caused by one large batch update to the composer state.

Steps to Reproduce

  1. Open the Cursor agent in at least one chat tab (two tabs with concurrent agent
    loops reproduce the issue more reliably).
  2. Attach or load 15–20 files as context in the agent conversation (either via
    @file references or by having the agent load them via SemanticSearch /
    Read / Glob calls which accumulate context).
  3. Have the agent write a large file (100–200 lines) to disk in the same turn —
    the diff appears in the chat state.
  4. Have the agent queue a large Shell or terminal tool call (50–70 lines of
    script) in the same turn.
  5. Observe: renderer freezes (“not responding” dialog), then terminates with
    “The window terminated unexpectedly (reason: ‘oom’, code: ‘-536870904’)”.

Expected Behavior

The renderer should stream the composer state updates incrementally and release
memory between large operations. Alternatively, the agent should yield between
large state updates to keep the renderer heap below the V8 limit.
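The incremental alternative can be sketched as follows, assuming a generic array-backed store; the function name and chunk size are hypothetical, not a proposed Cursor API:

```typescript
// Hypothetical chunked update: push the batch in slices and yield to the
// event loop between slices, so the renderer can paint and V8 can reclaim
// intermediate garbage instead of holding the entire batch at peak.
async function pushEntriesChunked<T>(
  store: { entries: T[] },
  batch: T[],
  chunkSize = 50,
): Promise<void> {
  for (let i = 0; i < batch.length; i += chunkSize) {
    store.entries.push(...batch.slice(i, i + chunkSize));
    // setTimeout(..., 0) creates a macrotask boundary between slices.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}
```

The trade-off is a slightly slower total update in exchange for bounded peak memory and a responsive window during the push.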

Screenshots / Screen Recordings

“The window is not responding” hang dialog (during the freeze)

“The window terminated unexpectedly (reason: ‘oom’, code: ‘-536870904’)” crash dialog

Post-reopen chat state — shows the last completed step (plan file +149 written)
and the pending 60-line Shell call visible but not yet executed

Operating System

Windows 10/11

Version Information

2.7.0-pre.158.patch.0

For AI issues: which model did you use?

claude-4.6-sonnet-medium-thinking

Additional Information

main.log evidence (session 20260328T214446, still available locally):

line 706: 2026-03-30 05:19:51.701 [error] CodeWindow: detected unresponsive
line 707: 2026-03-30 05:20:06.703 [error] CodeWindow unresponsive samples:
          <1>
              at Array.push (<anonymous>)
              at $__ (...workbench.desktop.main.js:49997:26110)
              at Object.fn (...workbench.desktop.main.js:49997:26319)
              [...]
              at sw.updateComposerDataSetStore (...workbench.desktop.main.js:38315:68744)
              [...]
              at sw.updateComposerBubbleSetStore (...workbench.desktop.main.js:38315:81380)
line 840: 2026-03-30 05:21:13.590 [error] CodeWindow: renderer process gone
          (reason: oom, code: -536870904)

Wakelock evidence (two concurrent agent loops at crash time):

05:15:27 [info] [PowerMainService] Started wakelock id=200 owner=window:2
         reason="agent-loop" activeCount=2

Related topics:

  • topic #153927 — periodic OOM every 10–15 min (Windows, 32GB RAM); different
    pattern but same error code
  • topic #152534 — periodic OOM every ~3 min (Windows, 128GB RAM); possible OTEL
    leak; different pattern
  • topic #153186 — renderer freeze from MCP markdown ReDoS; separate trigger but
    also freezes the chat renderer via a state-update path

Workaround:

This was a one-time OOM with a specific trigger (large single-turn context accumulation). It can be avoided by splitting tasks into smaller sequential agent turns — one deliverable per turn.

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

Hey, great bug report. The stack traces and logs really help.

I can see the crash is happening specifically in updateComposerDataSetStore / updateComposerBubbleSetStore when pushing a large batch of data (19 files + a big diff + a Shell tool call) in a single turn. This is a separate trigger from the periodic OOM leaks in other threads, and you’ve correctly distinguished the two.

I passed this to the team. This exact scenario (a spike OOM from a batch state update with a large context in one turn) wasn’t being tracked separately before. No ETA yet, but your report with logs helps with prioritization.

A couple of things to try in addition to the workaround of splitting the work into multiple turns (which you already found):

  • Avoid running two concurrent agent loops (activeCount=2 in your logs). That doubles memory pressure on the renderer.
  • Monitor the renderer process via Ctrl+Shift+P > Developer: Open Process Explorer before big turns. If it’s already around 2–3 GB, it’s better to start a new chat first.

Let me know if you can reproduce it again, especially if it happens with only one agent loop running.


Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Cursor 2.6.22 repeatedly crashes with renderer OOM during agent-loop workflows (reason: oom, code: -536870904).

Steps to Reproduce

Repro Context (Observed Pattern)

  1. Run sustained agent-driven operations (multi-step terminal/tool workflow).
  2. Keep the session active through repeated agent-loop cycles.
  3. Cursor eventually crashes with renderer OOM.

Expected Behavior

Cursor should remain stable during long agent-loop tasks and handle large tool/output workloads without renderer termination.

Operating System

Windows 10/11

Version Information

Product / Version: Cursor 2.6.22

Environment:

  • OS: Windows 11 (10.0.26100)
  • Workspace: C:\brain_rnd
  • Shell: PowerShell
  • Workflow type: long-running agent tasks with multiple tool calls / large output handling

For AI issues: which model did you use?

Premium

Additional Information

Cursor OOM Crash Report
Generated: 2026-04-02

Environment

  • OS: Windows 11 (10.0.26100)
  • Cursor: 2.6.22
  • Workspace: C:\brain_rnd
  • Crash signature: CodeWindow renderer process gone (reason: oom, code: -536870904)

Inspected files

  1. C:\Users\NilorCool\AppData\Roaming\Cursor\logs\20260401T193930\main.log
  2. C:\Users\NilorCool\AppData\Roaming\Cursor\logs\20260402T045241\main.log
  3. C:\Users\NilorCool\AppData\Roaming\Cursor\logs\20260402T045241\window1\renderer.log
  4. C:\Users\NilorCool\AppData\Roaming\Cursor\logs\20260402T045241\window2\renderer.log

Primary OOM events

  • 2026-04-02 04:51:56.404
    [error] CodeWindow: renderer process gone (reason: oom, code: -536870904)
    Source: …\20260401T193930\main.log

  • 2026-04-02 04:54:46.853
    [error] CodeWindow: renderer process gone (reason: oom, code: -536870904)
    Source: …\20260402T045241\main.log

  • 2026-04-02 08:01:57.625
    [error] CodeWindow: renderer process gone (reason: oom, code: -536870904)
    Source: …\20260402T045241\main.log

  • 2026-04-02 08:08:18.290
    [error] CodeWindow: renderer process gone (reason: oom, code: -536870904)
    Source: …\20260402T045241\main.log

What happens around the crashes

  • Repeated agent-loop wakelock activity appears before each OOM:
    • [PowerMainService] Started wakelock … reason="agent-loop"
    • [ComposerWakelockManager] Acquired wakelock … reason="agent-loop"
  • Immediately after OOM, extension hosts exit:
    • “Extension host with pid … exited with code: 0, signal: unknown.”

Additional signals found in renderer logs

  • window1\renderer.log:
    • [warning] Failed to flush aggregating provider batch Canceled
    • [error] [MainThreadShellExec.execute] Session not found:
  • window2\renderer.log:
    • Repeated [Extension Host] [otel.error] OTLPExporterError: Bad Request
      payload includes: {"error":"Trace spans collection is not enabled for this user"}

Summary

  • The crash pattern is consistent: a renderer-specific OOM.
  • Incidents correlate with active agent-loop sessions and repeated wakelock acquisition.
  • OTEL 400 errors are frequent noise but not the direct crash signature.

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

Nils, your crash pattern matches this thread. The renderer runs out of memory during sustained agent-loop workflows, and the team is actively working on it.

The OTEL exporter 400 errors in your logs ("Trace spans collection is not enabled for this user") are unrelated noise and not causing the crash.

A few things that can help reduce the frequency:

  • Start a fresh chat before large agent tasks. Long-running conversations accumulate state in the renderer.

  • Avoid running concurrent agent loops. Your logs show activeCount=2 wakelocks, which doubles renderer memory pressure.

  • Monitor renderer memory. Ctrl+Shift+P > Developer: Open Process Explorer before big tasks. If the renderer is already around 2-3 GB, start a new chat first.

Let me know if the crashes continue despite these changes.