Cursor Freezing Computer With state.vscdb I/O

Describe the Bug

Read/write activity to state.vscdb saturated my hard drive's I/O request queue once every 45 to 60 seconds, freezing my computer. After the investigation and troubleshooting described below, it now happens roughly once every 10-20 minutes.

Steps to Reproduce

I can’t give exact steps to reproduce, but I can share my Procmon capture (PML, CSV, or XML formats available; only the XML includes the stack trace info). Procmon (Process Monitor) is Microsoft's Sysinternals tool, distributed on learn.microsoft.com's Sysinternals download page.

Expected Behavior

Cursor should not read and write the state.vscdb file so frequently, and with so many requests, that it freezes my computer.

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.2.1 (user setup)
VSCode Version: 1.99.3
Commit: 031e7e0ff1e2eda9c1a0f5df67d44053b059c5d0
Date: 2025-07-03T06:16:02.610Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.19045

Additional Information

Cursor started freezing my computer every 45-60 seconds. Using Task Manager, I isolated the problem to disk I/O caused by one of Cursor’s processes. I investigated further with ProcMon64, which generated event info and stack traces for each disk read. The Cursor devs can email me for the full file with stack traces.

The lag spikes were happening frequently: once every 45-60 seconds. During each spike, Cursor was reading and writing dozens of times to a file called state.vscdb. That file is an SQLite database used by VS Code to maintain various state values. The database-access stack traces from every spike followed a similar pattern:

Frame Module Location Address
...various vscode-sqlite3.node calls...
19	vscode-sqlite3.node	napi_register_module_v1 + 0x3f26d	0x7ffa2ec17c6d	
20	vscode-sqlite3.node	napi_register_module_v1 + 0x3f4a0	0x7ffa2ec17ea0	
21	vscode-sqlite3.node	napi_register_module_v1 + 0x3744	0x7ffa2ebdc144	
22	Cursor.exe	node::PromiseRejectCallback + 0x42fdf	0x7ff6691f221f	
23	Cursor.exe	uv_cancel + 0x24a	0x7ff669246b1a	
24	Cursor.exe	uv_thread_create_ex + 0x197	0x7ff668ab9487	
25	Cursor.exe	Cr_z_adler32 + 0x58bf8a	0x7ff66b0ac95a	
26	KERNEL32.DLL	BaseThreadInitThunk + 0x14	0x7ffa744f7374	
27	ntdll.dll	RtlUserThreadStart + 0x21	0x7ffa75cfcc91	

It looks like the database spam is caused by a promise being rejected as thread-pool work is cancelled. (uv_cancel is libuv’s call for cancelling a pending work request on its thread pool; the request’s callback still runs, with an error status, and here that seems to end up in a promise rejection handler? Note that the stack symbols are only the nearest exported names plus an offset, so they are approximate.)
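As an aside, if anyone wants to see what Cursor is actually keeping in that file, a read-only dump is easy. A minimal sketch, assuming Node.js with the better-sqlite3 package and the standard VS Code layout of a key/value ItemTable; the globalStorage path is where my copy lives, and per-workspace copies sit under workspaceStorage instead:

```typescript
// Dump the largest entries in state.vscdb (read-only, so it can't make the spam worse).
// Assumptions: better-sqlite3 is installed, and the DB uses VS Code's ItemTable schema.
import Database from "better-sqlite3";
import path from "node:path";

const dbPath = path.join(
  process.env.APPDATA ?? "",
  "Cursor",
  "User",
  "globalStorage",
  "state.vscdb" // per-workspace copies live under workspaceStorage\<hash>\state.vscdb
);

const db = new Database(dbPath, { readonly: true, fileMustExist: true });

// List the 20 biggest key/value entries, largest first.
const rows = db
  .prepare("SELECT key, length(value) AS bytes FROM ItemTable ORDER BY bytes DESC LIMIT 20")
  .all() as { key: string; bytes: number }[];

for (const { key, bytes } of rows) {
  console.log(`${bytes}\t${key}`);
}

db.close();
```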

So I looked for signs of errors happening at the same rate as the lag spikes. I found one: the Git channel in the Output panel was showing git failures that roughly lined up with some of the lag spikes. Every git failure corresponded to a lag spike, but not every lag spike corresponded to a git failure.

2025-07-03 15:54:56.890 [info] > git rev-parse --show-toplevel [275ms]
2025-07-03 15:54:56.891 [info] fatal: Not a git repository (or any of the parent directories): .git
2025-07-03 15:55:01.151 [info] > git rev-parse --show-toplevel [235ms]
2025-07-03 15:55:01.151 [info] fatal: Not a git repository (or any of the parent directories): .git
2025-07-03 15:55:01.279 [info] > git rev-parse --show-toplevel [113ms]
2025-07-03 15:55:01.279 [info] fatal: Not a git repository (or any of the parent directories): .git
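If anyone wants to check that timing against their own spikes, here is a minimal sketch of how I’d measure the gaps, assuming the Git output channel text has been saved to a file called git.log (my name for it, not something Cursor writes):

```typescript
// Measure the spacing between "fatal:" lines in a saved copy of the Git output
// channel, to compare against the 45-60 s lag-spike cadence.
import { readFileSync } from "node:fs";

const lines = readFileSync("git.log", "utf8").split(/\r?\n/);

// Timestamps look like "2025-07-03 15:54:56.890"; take the first 23 characters.
const stamps = lines
  .filter((line) => line.includes("fatal:"))
  .map((line) => Date.parse(line.slice(0, 23).replace(" ", "T")));

for (let i = 1; i < stamps.length; i++) {
  const gapSeconds = (stamps[i] - stamps[i - 1]) / 1000;
  console.log(`${gapSeconds.toFixed(1)} s since the previous git failure`);
}
```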

On a hunch, I created a git repo for my current project. The git errors continued for 5 more minutes, in a different flavor:

2025-07-03 16:02:34.760 [info] > git status -z -uall [137ms]
2025-07-03 16:02:34.766 [info] > git for-each-ref --sort -committerdate --format %(refname)%00%(objectname)%00%(*objectname) [129ms]
2025-07-03 16:04:17.821 [info] > git symbolic-ref --short refs/remotes/origin/HEAD [120ms]
2025-07-03 16:04:17.821 [info] fatal: ref refs/remotes/origin/HEAD is not a symbolic ref
2025-07-03 16:04:18.783 [info] > git symbolic-ref --short refs/remotes/origin/HEAD [962ms]
2025-07-03 16:04:18.783 [info] fatal: ref refs/remotes/origin/HEAD is not a symbolic ref
2025-07-03 16:04:18.930 [info] > git rev-parse --verify origin/master [133ms]
2025-07-03 16:04:18.931 [info] fatal: Needed a single revision
2025-07-03 16:04:18.943 [info] > git rev-parse --verify origin/master [128ms]
2025-07-03 16:04:18.943 [info] fatal: Needed a single revision
2025-07-03 16:04:19.037 [info] > git rev-parse --verify origin/main [95ms]
2025-07-03 16:04:19.037 [info] fatal: Needed a single revision
2025-07-03 16:04:19.047 [info] > git rev-parse --verify origin/main [94ms]
2025-07-03 16:04:19.047 [info] fatal: Needed a single revision
2025-07-03 16:04:19.141 [info] > git rev-parse --verify origin/master [96ms]
2025-07-03 16:04:19.141 [info] fatal: Needed a single revision
2025-07-03 16:04:19.153 [info] > git rev-parse --verify origin/master [94ms]
2025-07-03 16:04:19.154 [info] fatal: Needed a single revision
2025-07-03 16:04:19.246 [info] > git rev-parse --verify origin/develop [93ms]
2025-07-03 16:04:19.246 [info] fatal: Needed a single revision
2025-07-03 16:04:19.258 [info] > git rev-parse --verify origin/develop [92ms]
2025-07-03 16:04:19.258 [info] fatal: Needed a single revision
2025-07-03 16:04:19.350 [info] > git branch -r [93ms]
2025-07-03 16:04:19.361 [info] > git branch -r [93ms]
2025-07-03 16:04:19.451 [info] > git config --get init.defaultBranch [92ms]
2025-07-03 16:04:19.464 [info] > git config --get init.defaultBranch [92ms]
2025-07-03 16:04:26.103 [info] > git status [103ms]
2025-07-03 16:04:26.115 [info] > git status [105ms]

But then they stopped. No more git messages at all in that log. This almost fixed the lag spikes: I created the git repo at around 16:00, and the log messages end about 5 minutes later. I then went 15 minutes without any lag before another spike occurred. The spikes are much less frequent now, maybe one per 10 minutes rather than one or two per minute. It seems like something else is triggering the same code path, but I can’t find any other logs pointing to it; it’s probably something that isn’t logged.

My guess is that there’s some bad handling of sub-process failures/errors, and git happened to be triggering it roughly once a minute, while some other thing(s) still trigger it occasionally. I have since restarted Cursor and the problem went away.
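To be concrete about what I mean by bad handling, here is a purely hypothetical sketch of the pattern I suspect. It is not Cursor’s actual code; better-sqlite3, the ItemTable schema, and the one-second poll are all stand-ins:

```typescript
// Hypothetical anti-pattern -- NOT Cursor's real code. A poller that persists
// state on every subprocess failure, with no debounce or backoff, turns a
// recurring "fatal: Not a git repository" into a burst of small synchronous
// database writes on every tick.
import { exec } from "node:child_process";
import Database from "better-sqlite3"; // stand-in for the bundled vscode-sqlite3

const db = new Database("state.vscdb");
// Schema is an approximation of VS Code's key/value ItemTable.
db.exec("CREATE TABLE IF NOT EXISTS ItemTable (key TEXT PRIMARY KEY, value BLOB)");

function runGit(args: string): Promise<string> {
  return new Promise((resolve, reject) => {
    exec(`git ${args}`, (err, stdout) => (err ? reject(err) : resolve(stdout)));
  });
}

setInterval(() => {
  runGit("rev-parse --show-toplevel")
    .then(() => {
      /* update repo state */
    })
    .catch((err) => {
      // Anti-pattern: a disk write per rejection. A debounce or exponential
      // backoff here would prevent the write storm entirely.
      db.prepare("INSERT OR REPLACE INTO ItemTable (key, value) VALUES (?, ?)")
        .run("hypothetical.lastGitError", String(err));
    });
}, 1000);
```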

My HDD’s hardware ID is WD10EZEX-21M2NA0, a Western Digital 1 TB hard drive. Its stated throughput is 240 MB/s, but the lag spikes were capping out at 2 MB/s or less. It wasn’t data volume, it was request volume (IOPS). The average response time jumped into the 10-15 second range; it shouldn’t go above 100 ms.
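A rough back-of-envelope, using assumed numbers rather than measurements, shows why request count matters so much more than data volume on a drive like this:

```typescript
// Back-of-envelope: why request count, not data volume, is what hurts here.
// Every constant below is an assumption for illustration, not a measurement.
const seekMs = 12;            // typical random access time for a 7200 RPM HDD
const burstRequests = 500;    // a burst of small, scattered reads/writes
const requestSizeKB = 4;      // SQLite page-sized I/O

const burstDataMB = (burstRequests * requestSizeKB) / 1024;   // ≈ 2 MB of data
const serviceSeconds = (burstRequests * seekMs) / 1000;       // ≈ 6 s of pure seek time

console.log(`${burstDataMB.toFixed(1)} MB moved, but ~${serviceSeconds.toFixed(0)} s for the disk to service it`);
// Everything queued behind that burst (including the OS's own I/O) inherits those
// seconds of wait, which is how average response times end up in the 10-15 s range.
```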

Does this stop you from using Cursor

No - Cursor works, but with this issue