Where does the bug appear (feature/product)?
Cursor IDE
Describe the Bug
When the Cursor agent executes a turn that simultaneously:
- loads a large number of files as @context (19 files in my case),
- writes a large file to the chat state (a 149-line plan file), and
- queues a large Shell tool call (~60 lines of PowerShell) for rendering,
the renderer process freezes and then crashes with OOM code -536870904.
The crash sequence is three progressive “detected unresponsive” cycles before the
renderer exits:
05:19:51 [error] CodeWindow: detected unresponsive
05:20:06 [error] CodeWindow unresponsive samples: <Array.push inside updateComposerDataSetStore>
05:20:28 [error] CodeWindow: detected unresponsive
05:20:43 [error] CodeWindow unresponsive samples: <updateComposerBubbleSetStore>
05:20:55 [error] CodeWindow: detected unresponsive
05:21:13 [error] CodeWindow: renderer process gone (reason: oom, code: -536870904)
The stack symbols in the unresponsive samples are Array.push inside
updateComposerDataSetStore and updateComposerBubbleSetStore, the React
stores that back the chat composer UI. The renderer appears to exhaust the V8
heap while pushing many state entries (context files + file-write diff + tool-call
preview) into the composer store in a single turn.
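To illustrate the suspected pattern (all names here are hypothetical, not Cursor's actual internals): one synchronous batch pushes the whole turn's payload into a store array, so the renderer never yields to the event loop for paint or GC between pushes, and peak heap grows with the entire turn.

```typescript
// Hypothetical model of the store update implied by the stack trace.
// ComposerStore / pushEntries are illustrative names, not Cursor's API.
interface ComposerEntry {
  kind: "context-file" | "file-diff" | "tool-call";
  payload: string;
}

class ComposerStore {
  entries: ComposerEntry[] = [];

  // One synchronous batch: every entry is pushed before the renderer
  // can paint or collect garbage, so peak heap scales with the turn.
  pushEntries(batch: ComposerEntry[]): void {
    for (const e of batch) {
      this.entries.push(e); // <- the Array.push seen in the samples
    }
  }
}

// A turn shaped like the one in this report: 19 context files,
// a 149-line diff, and a ~60-line shell preview, all in one batch.
const turn: ComposerEntry[] = [
  ...Array.from({ length: 19 }, (_, i) => ({
    kind: "context-file" as const,
    payload: `file-${i}`,
  })),
  { kind: "file-diff", payload: "x".repeat(149 * 80) },
  { kind: "tool-call", payload: "x".repeat(60 * 80) },
];

const store = new ComposerStore();
store.pushEntries(turn);
console.log(store.entries.length); // 21
```

The sketch only models the batching shape; real composer entries (rendered diffs, syntax-highlighted previews) are far heavier per item, which is why 21 entries can matter.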
Aggravating factor observed: two agent-loop windows were running concurrently
at the time (activeCount=2 in the PowerMainService wakelock entries in main.log),
which likely doubled renderer memory pressure.
This is distinct from the periodic-OOM issues (#153927, #152534). Those
appear to be continuous memory-leak / OTEL-exporter issues. This is a single
“spike” OOM caused by one large batch update to the composer state.
Steps to Reproduce
- Open the Cursor agent in at least one chat tab (two tabs with concurrent agent loops reproduce it more reliably).
- Attach or load 15–20 files as context in the agent conversation (either via @file references or by having the agent load them via SemanticSearch / Read / Glob calls, which accumulate context).
- Have the agent write a large file (100–200 lines) to disk in the same turn; the diff appears in the chat state.
- Have the agent queue a large Shell or terminal tool call (50–70 lines of script) in the same turn.
- Observe: the renderer freezes (“not responding” dialog), then terminates with “The window terminated unexpectedly (reason: ‘oom’, code: ‘-536870904’)”.
Expected Behavior
The renderer should stream the composer state updates incrementally and release
memory between large operations. Alternatively, the agent should yield between
large state updates to keep the renderer heap below the V8 limit.
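A minimal sketch of the incremental behavior described above, assuming a hypothetical chunked-update helper (not an existing Cursor function): split one large batch into small chunks and yield to the event loop between them, so paint and GC can run mid-turn.

```typescript
// Hypothetical chunked store update: push a few entries, then yield a
// macrotask (setTimeout 0) before the next chunk, instead of pushing
// the whole turn synchronously like the crashing path appears to do.
async function pushIncrementally<T>(
  target: T[],
  batch: T[],
  chunkSize = 5,
): Promise<void> {
  for (let i = 0; i < batch.length; i += chunkSize) {
    target.push(...batch.slice(i, i + chunkSize));
    // Yield to the event loop so the renderer can paint and V8 can GC.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}

// Usage: a 21-entry turn lands in 5 chunks of at most 5 entries each.
const entries: string[] = [];
const batch = Array.from({ length: 21 }, (_, i) => `entry-${i}`);
pushIncrementally(entries, batch).then(() => {
  console.log(entries.length); // 21
});
```

The total work is the same; the difference is that the unresponsive-detector never sees one long synchronous stretch, and intermediate allocations can be collected between chunks.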
Screenshots / Screen Recordings
“The window is not responding” hang dialog (during the freeze)
“The window terminated unexpectedly (reason: ‘oom’, code: ‘-536870904’)” crash dialog
Post-reopen chat state — shows the last completed step (plan file, +149 lines, written)
and the pending 60-line Shell call visible but not yet executed
Operating System
Windows 10/11
Version Information
2.7.0-pre.158.patch.0
For AI issues: which model did you use?
claude-4.6-sonnet-medium-thinking
Additional Information
main.log evidence (session 20260328T214446, still available locally):
line 706: 2026-03-30 05:19:51.701 [error] CodeWindow: detected unresponsive
line 707: 2026-03-30 05:20:06.703 [error] CodeWindow unresponsive samples:
<1>
at Array.push (<anonymous>)
at $__ (...workbench.desktop.main.js:49997:26110)
at Object.fn (...workbench.desktop.main.js:49997:26319)
[...]
at sw.updateComposerDataSetStore (...workbench.desktop.main.js:38315:68744)
[...]
at sw.updateComposerBubbleSetStore (...workbench.desktop.main.js:38315:81380)
line 840: 2026-03-30 05:21:13.590 [error] CodeWindow: renderer process gone
(reason: oom, code: -536870904)
Wakelock evidence (two concurrent agent loops at crash time):
05:15:27 [info] [PowerMainService] Started wakelock id=200 owner=window:2
reason="agent-loop" activeCount=2
Related topics:
- topic #153927 — periodic OOM every 10–15 min (Windows, 32 GB RAM); different pattern but same error code
- topic #152534 — periodic OOM every ~3 min (Windows, 128 GB RAM); possible OTEL leak; different pattern
- topic #153186 — renderer freeze from MCP markdown ReDoS; separate trigger but also freezes the chat renderer via a state-update path
Workaround:
This was a one-time OOM with a specific trigger (large single-turn context accumulation). It can be avoided by splitting tasks into smaller sequential agent turns — one deliverable per turn.
Does this stop you from using Cursor?
Sometimes - I can sometimes use Cursor


