OOM / Memory Crash on Windows (64GB RAM)

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

OOM / Memory Crash on Windows (64GB RAM)

Summary

Cursor crashes with out-of-memory (OOM) errors on Windows despite having 64GB of system RAM. The crashes occur during normal usage and appear to be related to Chromium/Electron renderer process limits rather than physical memory exhaustion.

Environment

  • OS: Windows 10/11
  • RAM: 64GB
  • Cursor version: 2.5.26
  • VSCode version: 1.105.1

Problem

  • Cursor crashes unexpectedly (possible error code: -536870904 or similar)
  • Crashes occur regardless of available system RAM
  • AI features (Composer, Chat) seem to increase memory usage before crash

Crash Patterns (when it tends to happen)

  • Large coding instructions: Crashes when given big implementation prompts, even when the user provides code snippets to reduce context. The AI is doing most of the coding work.
  • Conversation length: Sometimes a relatively small snippet is enough to trigger a crash if the conversation is already long.
  • Context search: Crashes when the AI has to search for context again within the current conversation.
  • Timing varies: Crashes can occur at any stage—at the start of development, mid-development, or near the end (e.g., right before a git commit).

Actual Behavior

Application closes unexpectedly with OOM-type behavior.

Additional Context

  • Similar issues reported by other users on forum (64GB, 128GB systems)
  • Suggests Chromium per-process memory limits may be the bottleneck

Workarounds attempted

  • Reduced context (fewer open files)
  • Restarted Cursor periodically

Tags: crashes, performance, windows

Steps to Reproduce

  1. Open Cursor with a project
  2. Use Composer/Chat or work with AI features
  3. Cursor crashes when a large context is loaded

Expected Behavior

Cursor should remain stable with 64GB RAM available.

Operating System

Windows 10/11

Version Information

Version: 2.5.26 (user setup)
VSCode Version: 1.105.1
Commit: 7d96c2a03bb088ad367615e9da1a3fe20fbbc6a0
Date: 2026-02-26T04:57:56.825Z
Build Type: Stable
Release Track: Default
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

Composer 1.5, I think?

Does this stop you from using Cursor?

No - Cursor works, but with this issue

An image of the error

Me too.

Hey, thanks for the report. This is a known issue on Windows. The Electron renderer process hits a ~4 GB memory limit and crashes, no matter how much RAM your system has. The team is aware and tracking it, but we don’t have an ETA for a fix yet.

For now, here are a few workarounds that have helped other users:

  1. Rotate chats: don’t stay in the same chat for long sessions. Once the context gets big, start a new chat.

  2. Check for an OTEL leak: open Ctrl+Shift+P > Developer: Toggle Developer Tools > Console, then look for lines like [otel.error] or OTLPExporterError. If you see them, that’s a separate memory leak that makes this worse.

  3. Monitor memory: Ctrl+Shift+P > Developer: Open Process Explorer. Keep an eye on the renderer process.
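
If the Process Explorer is inconvenient, the same idea can be sketched from a Node-enabled console (an assumption about your setup; `heapUsedMiB` is a made-up helper, not a Cursor API):

```javascript
// Sketch: log the current process's heap usage every 5 seconds so you can
// watch it climb toward the renderer limit before a crash.
function heapUsedMiB() {
  return process.memoryUsage().heapUsed / 1024 ** 2;
}

const timer = setInterval(() => {
  console.log(`heapUsed: ${heapUsedMiB().toFixed(1)} MiB`);
}, 5000);

// Stop logging with: clearInterval(timer)
```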

Related threads for context:

Let me know if you see any OTEL errors in the console. That will help narrow down the cause.

Hey! I appreciate the response @deanrie

workbench.desktop.main.js:43319 [Extension Host] [otel.error] {"stack":"OTLPExporterError: Bad Request\n\tat IncomingMessage.<anonymous> (c:\Users\\AppData\Local\Programs\cursor\resources\app\node_modules\@opentelemetry\otlp-exporter-base\build\src\transport\http-transport-utils.js:52:31)\n\tat IncomingMessage.emit (node:events:531:35)\n\tat endReadableNT (node:internal/streams/readable:1698:12)\n\tat process.processTicksAndRejections (node:internal/process/task_queues:90:21)","message":"Bad Request","code":"400","name":"OTLPExporterError","data":"{\"error\":\"Trace spans collection is not enabled for this user\"}"}

As soon as I opened that console, I get this happening multiple times.

I’ve recorded videos for you that hopefully provide more insight!

A successful run:

I have to split this into multiple replies, I think, because I’m a new user and it isn’t allowed.

A potential crash:

Nevermind, now it says I can’t post anymore. I had 2 more links. Rough. What now?

Okay, thanks for restoring @deanrie ! I still can’t give the other 2 links, though. One is about: Following up the debugger until it crashed
The other is about: Crash but showing what the AI is doing

Thanks for checking the console and recording a video. That OTEL error (OTLPExporterError: Bad Request, “Trace spans collection is not enabled”) confirms there’s a telemetry leak running in the background that uses memory. Every rejected span gets buffered and is never cleared, so the renderer hits the 4 GB limit faster.
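
The failure mode is easy to sketch. This is a hypothetical illustration of the pattern described above, not Cursor's or OpenTelemetry's actual code (`LeakyExporter` and `send` are made-up names):

```javascript
// Hypothetical illustration of the leak: rejected spans are re-buffered,
// but an HTTP 400 rejection is permanent, so the buffer only ever grows.
class LeakyExporter {
  constructor() {
    this.pending = []; // grows without bound when every export is rejected
  }
  export(spans, send) {
    const accepted = send(spans); // e.g. a POST to the OTLP endpoint
    if (!accepted) this.pending.push(...spans); // never dropped, never flushed
  }
}

const exporter = new LeakyExporter();
const alwaysRejected = () => false; // server answers 400 Bad Request every time
for (let i = 0; i < 1000; i++) {
  exporter.export([{ name: `span-${i}` }], alwaysRejected);
}
// exporter.pending now holds 1000 spans that will never be released
```

With spans generated continuously during AI sessions, a buffer like this pushes the renderer toward its heap limit much faster than normal usage would.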

This is a separate bug from the general OOM issue, and others have reported it too: Renderer OOM crash: OTEL exporter leaks memory when trace spans rejected (400)

There’s no workaround for the OTEL leak itself yet, but rotating chats more often will help you hit the limit less often.

I’m a bit confused. You say there’s no workaround but on that link you give a workaround.

"As a workaround, try turning off Privacy Mode in Cursor: Settings > General > Privacy Mode. The 'Trace spans collection is not enabled for this user' error likely means the server is rejecting spans because Privacy Mode blocks telemetry collection. Disabling it should let the server accept the spans instead of them building up in memory."

Sorry for the confusion. There is a workaround, but only if you can turn off Privacy Mode.

If you need Privacy Mode for work (proprietary code, etc.), then unfortunately the only option for now is to rotate chats more often so you don’t hit the limit.

Hey @drsimpatia, @ke_tan, @javiermarcon! The OpenTelemetry memory leak causing OOM crashes has been addressed in a recent Cursor update. Updating to the latest version should resolve this.