Cursor crashes with out-of-memory (OOM) errors on Windows despite having 64GB of system RAM. The crashes occur during normal usage and appear to be related to Chromium/Electron renderer process limits rather than physical memory exhaustion.
Environment
OS: Windows 10/11
RAM: 64GB
Cursor version: 2.5.26
VS Code version: 1.105.1
Problem
Cursor crashes unexpectedly (possible error code: -536870904 or similar)
Crashes occur regardless of available system RAM
AI features (Composer, Chat) seem to increase memory usage before crash
Crash Patterns (when it tends to happen)
Large coding instructions: Crashes when given big implementation prompts, even when the user provides code snippets to reduce context. The AI is doing most of the coding work.
Conversation length: Sometimes a relatively small snippet is enough to trigger a crash if the conversation is already long.
Context search: Crashes when the AI has to search for context again within the current conversation.
Timing varies: Crashes can occur at any stage—at the start of development, mid-development, or near the end (e.g., right before a git commit).
Actual Behavior
Application closes unexpectedly with OOM-type behavior.
Additional Context
Similar issues reported by other users on the forum (64GB and 128GB systems)
Suggests Chromium per-process memory limits may be the bottleneck
Workarounds attempted
Reduced context (fewer open files)
Restarted Cursor periodically
Tags: crashes, performance, windows
Steps to Reproduce
Open Cursor with a project
Use Composer/Chat or work with AI features
Cursor crashes when a large context is loaded
Expected Behavior
Cursor should remain stable with 64GB RAM available.
Hey, thanks for the report. This is a known issue on Windows. The Electron renderer process hits a ~4 GB memory limit and crashes, no matter how much RAM your system has. The team is aware and tracking it, but we don’t have an ETA for a fix yet.
For now, here are a few workarounds that have helped other users:
Rotate chats: don’t stay in the same chat for long sessions. Once the context gets big, start a new chat.
Check for an OTEL leak: press Ctrl+Shift+P, run Developer: Toggle Developer Tools, and open the Console tab. Look for lines like [otel.error] or OTLPExporterError. If you see them, a separate telemetry memory leak is making this worse.
Monitor memory: press Ctrl+Shift+P, run Developer: Open Process Explorer, and keep an eye on the renderer process.
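The Process Explorer check above can be turned into a simple alert. A minimal Python sketch follows; the ~4 GB cap and the 80% warning threshold are illustrative assumptions based on this thread, not values documented by Cursor:

```python
# Sketch: flag a renderer process whose memory is nearing the ~4 GB
# Electron renderer cap discussed above. The limit and the warning
# threshold are assumptions for illustration, not official numbers.

RENDERER_LIMIT_BYTES = 4 * 1024**3  # assumed ~4 GB per-renderer cap
WARN_FRACTION = 0.8                 # warn at 80% of the assumed cap

def near_renderer_limit(rss_bytes: int,
                        limit: int = RENDERER_LIMIT_BYTES,
                        warn_at: float = WARN_FRACTION) -> bool:
    """Return True when a process's resident memory is close to the cap."""
    return rss_bytes >= limit * warn_at

# Example readings (bytes), e.g. copied from Process Explorer:
print(near_renderer_limit(1_500_000_000))  # well under the assumed cap
print(near_renderer_limit(3_600_000_000))  # ~84% of 4 GB
```

Feeding this the renderer's memory figure from Process Explorer gives an early hint that a chat rotation or restart is due before the crash happens.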
workbench.desktop.main.js:43319 [Extension Host] [otel.error] {"stack":"OTLPExporterError: Bad Request\n\tat IncomingMessage. (c:\Users\\AppData\Local\Programs\cursor\resources\app\node_modules\@opentelemetry\otlp-exporter-base\build\src\transport\http-transport-utils.js:52:31)\n\tat IncomingMessage.emit (node:events:531:35)\n\tat endReadableNT (node:internal/streams/readable:1698:12)\n\tat process.processTicksAndRejections (node:internal/process/task_queues:90:21)","message":"Bad Request","code":"400","name":"OTLPExporterError","data":"{\"error\":\"Trace spans collection is not enabled for this user\"}"}
As soon as I opened that console, this appeared multiple times.
I’ve recorded videos for you that hopefully provide more insight!
A successful run:
I have to split this into multiple replies, I think, because I’m a new user and posting multiple links isn’t allowed.
Okay, thanks for restoring @deanrie! I still can’t give the other 2 links, though. One is about: Following up the debugger until it crashed
The other is about: Crash but showing what the AI is doing
Thanks for checking the console and recording a video. That OTEL error (OTLPExporterError: Bad Request, “Trace spans collection is not enabled”) confirms there’s a telemetry leak running in the background that uses memory. Every rejected span gets buffered and is never cleared, so the renderer hits the 4 GB limit faster.
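The failure mode described here can be sketched in a few lines. This is a hypothetical model of the buffering behavior, not the real @opentelemetry exporter code:

```python
# Sketch of the leak pattern described above: an exporter that keeps
# rejected spans in a retry buffer and never drops them. Hypothetical
# model for illustration only, not the actual @opentelemetry exporter.

class LeakyExporter:
    def __init__(self):
        self.pending = []  # rejected spans accumulate here forever

    def export(self, span: dict) -> None:
        # The server answers 400 "Trace spans collection is not enabled",
        # so every span is kept for retry and the buffer only grows.
        self.pending.append(span)

class BoundedExporter:
    def __init__(self, max_pending: int = 100):
        self.max_pending = max_pending
        self.pending = []

    def export(self, span: dict) -> None:
        self.pending.append(span)
        # One possible fix: drop the oldest rejected spans past a cap.
        if len(self.pending) > self.max_pending:
            self.pending.pop(0)

leaky, bounded = LeakyExporter(), BoundedExporter()
for i in range(10_000):
    span = {"id": i, "payload": "x" * 100}
    leaky.export(span)
    bounded.export(span)

print(len(leaky.pending))    # 10000 -- unbounded growth
print(len(bounded.pending))  # 100   -- capped
```

The leaky variant is the pattern that would push the renderer toward its limit: every failed export adds a span, nothing ever removes one.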
I’m a bit confused. You say there’s no workaround, but in that link you give one.
"As a workaround, try turning off Privacy Mode in Cursor: Settings > General > Privacy Mode. The 'Trace spans collection is not enabled for this user' error likely means the server is rejecting spans because Privacy Mode blocks telemetry collection. Disabling it should let the server accept the spans instead of them building up in memory."
Sorry for the confusion. There is a workaround, but only if you can turn off Privacy Mode.
If you need Privacy Mode for work (proprietary code, etc.), then unfortunately the only option for now is to rotate chats more often so you don’t hit the limit.
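Rotating chats amounts to capping how much conversation history the renderer ever has to hold. A conceptual Python sketch, not how Cursor actually stores conversations:

```python
from collections import deque

# Conceptual sketch only: modeling "rotate chats" as a hard cap on the
# conversation history kept in memory. MAX_TURNS is an assumed rotation
# point, not a Cursor setting.

MAX_TURNS = 50

history = deque(maxlen=MAX_TURNS)  # oldest turns fall off automatically

for turn in range(200):
    history.append(f"turn {turn}: ...")

print(len(history))  # never exceeds MAX_TURNS
print(history[0])    # oldest turn still retained
```

Starting a fresh chat resets the buffer to zero; staying in one long chat is the unbounded case, which is why the crashes correlate with conversation length.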
Hey @drsimpatia, @ke_tan, @javiermarcon! The OpenTelemetry memory leak causing OOM crashes has been addressed in a recent Cursor update. Updating to the latest version should resolve this.