File indexing stuck due to "Bad unexpected error"

Issue: My file indexing is stuck at 2%.

Operating System: macOS 15.1.1
Cursor Version: 0.45.10

Description:
The Output > Cursor Indexing & Retrieval channel shows a stream of:

2025-02-06 22:12:09.388 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 11584
2025-02-06 22:12:09.398 [info] Completed job unsuccessfully, will retry: <file_1> error: Bad unexpected error
2025-02-06 22:12:09.388 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 11584
2025-02-06 22:12:09.398 [info] Completed job unsuccessfully, will retry: <file_N> error: Bad unexpected error

It never makes any progress, and fileQueue.length stays stuck at 11,584.

I’ve attempted restarting Cursor, as well as deleting the index and starting fresh.

It looks like you’re hitting the file limit: your queue shows 11,584 files, which is above our recommended 10k limit. Try adding some folders to your .cursorignore file to get under that limit, and indexing should work properly.

You can create a .cursorignore in your project root and add patterns like:
node_modules/
dist/
build/
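
If you want a rough idea of how many files are left after your ignore rules, a quick script like the one below gives an estimate. It only approximates the set of files Cursor actually considers (the ignored folder names are just common examples), but it’s usually close enough to tell whether you’re under the limit.

# Rough estimate of how many files indexing would see once common
# build/dependency folders are excluded. This only approximates the
# set Cursor actually indexes; adjust IGNORED_DIRS to your project.
import os

IGNORED_DIRS = {".git", "node_modules", "dist", "build"}

def count_indexable(root: str) -> int:
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune ignored directories in place so os.walk skips them.
        dirnames[:] = [d for d in dirnames if d not in IGNORED_DIRS]
        total += len(filenames)
    return total

print(count_indexable("."))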

I used to be able to index 70,000 files up until whatever changed on the backend a few months ago. I remember reading that the limit was 100,000 files. FWIW, I also see this same error whether I attempt to index 4,000 files or 40,000. Can we get some actual insight into how to fix this?

“Try adding some folders to your .cursorignore file to get under that limit and indexing should work properly” is not accurate, since the 11k file count is not the problem here.

The Cursor codebase is around 15k files currently, and this indexes well for us, so I wouldn’t expect an issue here!

What error do you see when the indexing is underway?

I typically see it index a portion of the files, then get stuck and loop over the rest:

2025-02-25 10:41:51.478 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.484 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/client_async.pyi error: Bad unexpected error
2025-02-25 10:41:51.484 [info] fileQueue.length: 2257
2025-02-25 10:41:51.484 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.488 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/consumer/group.pyi error: Bad unexpected error
2025-02-25 10:41:51.488 [info] fileQueue.length: 2257
2025-02-25 10:41:51.488 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.489 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/consumer/__init__.pyi error: Bad unexpected error
2025-02-25 10:41:51.489 [info] fileQueue.length: 2257
2025-02-25 10:41:51.489 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.491 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/consumer/fetcher.pyi error: Bad unexpected error
2025-02-25 10:41:51.491 [info] fileQueue.length: 2257
2025-02-25 10:41:51.491 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.493 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/producer/future.pyi error: Bad unexpected error
2025-02-25 10:41:51.493 [info] fileQueue.length: 2257
2025-02-25 10:41:51.493 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.495 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/producer/kafka.pyi error: Bad unexpected error
2025-02-25 10:41:51.495 [info] fileQueue.length: 2257

(I changed the filenames for privacy.)
It shows that same log for what appears to be every file. You can see from the timestamps that it cycles very quickly, and it doesn’t appear to stop: it keeps going for the entire time I have Cursor open, i.e. for hours. There’s definitely some sort of recursive failure here.
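
If I had to guess at the shape of the bug (this is purely a hypothetical model of the loop, not Cursor’s actual code), a failed job seems to go straight back on the queue with no backoff, so when every upload fails with the same error the queue length never moves and the loop spins as fast as it can, which would match the millisecond-apart timestamps above:

# Hypothetical model of the retry loop; all names here are made up.
from collections import deque

def upload(path: str) -> bool:
    """Stand-in for the real upload call; pretend every call fails."""
    return False  # "Bad unexpected error"

def index_loop(paths: list[str]) -> None:
    queue = deque(paths)
    while queue:
        path = queue.popleft()
        if not upload(path):
            # The failed job goes straight back on the queue, so the
            # length never drops and the loop never stops or backs off.
            queue.append(path)
        print(f"fileQueue.length: {len(queue)}")

# index_loop(["./stubs/a.pyi", "./stubs/b.pyi"])  # would spin forever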

I believe I have the same issue: indexing a large codebase (~90k files) worked two days ago, then started failing yesterday. Symptoms are similar:

  • The % goes up to ~70%
  • Then it goes to ~80% before going back to 70%, basically forever
  • I see a bunch of “Bad unexpected error” messages in the logs (even before reaching 70%)

I also see fileQueue.length going down, then a startSync triggers and resets it to a higher value (my guess at what’s happening follows the log below):

2025-02-25 18:51:01.121 [info] Waiting for jobs to finish. currentIndexingJobs.length: 192 concurrentUploads.current: 192 fileQueue.length: 28928
2025-02-25 18:51:01.124 [info] Completed job successfully: b08e4698-0a58-4991-a3db-dcb382b3ec3c <path>
2025-02-25 18:51:01.124 [info] fileQueue.length: 28927
2025-02-25 18:51:01.124 [info] Waiting for jobs to finish. currentIndexingJobs.length: 192 concurrentUploads.current: 192 fileQueue.length: 28927
2025-02-25 18:51:01.127 [info] Completed job successfully: b08e4698-0a58-4991-a3db-dcb382b3ec3c <path>
2025-02-25 18:51:01.127 [info] fileQueue.length: 28926
2025-02-25 18:51:01.127 [info] Waiting for jobs to finish. currentIndexingJobs.length: 192 concurrentUploads.current: 192 fileQueue.length: 28926
2025-02-25 18:51:01.226 [info] multiCodebaseIndexingJob dispose
2025-02-25 18:51:01.226 [info] Aborting indexing job.
2025-02-25 18:51:01.227 [info] Doing a startup handshake.
2025-02-25 18:51:01.233 [info] Completed job unsuccessfully, will retry: <path>: Bad unexpected error
2025-02-25 18:51:01.233 [info] Indexing job successfully done or aborted.
2025-02-25 18:51:01.501 [error] Error Checking Connection: [unimplemented] HTTP 404
2025-02-25 18:51:01.502 [info] Creating Indexing Repo client:  https://repo42.cursor.sh
2025-02-25 18:51:01.502 [info] Creating repo client with backend url: https://repo42.cursor.sh
2025-02-25 18:51:07.027 [info] Finished computing merkle tree in 5778.967999999411 ms.
2025-02-25 18:51:07.027 [info] Doing the initial handshake with hash: 44f0ed95d578c427dd6c8b587fbd8fce522d0a7a6ea9ca78b00a415db7af198f
2025-02-25 18:51:07.028 [info] Handshake start
2025-02-25 18:51:07.310 [info] Handshake timing: 281.92479100078344, response: {"status":"STATUS_SUCCESS","codebases":[{"codebaseId":"b08e4698-0a58-4991-a3db-dcb382b3ec3c","status":"STATUS_OUT_OF_SYNC"}]}
2025-02-25 18:51:07.310 [info] Handshake result: {"status":"STATUS_SUCCESS","codebases":[{"codebaseId":"b08e4698-0a58-4991-a3db-dcb382b3ec3c","status":"STATUS_OUT_OF_SYNC"}]}
2025-02-25 18:51:07.311 [info] Starting fast remote sync.
2025-02-25 18:51:07.314 [info] Total num embeddable files: 72071
2025-02-25 18:51:07.314 [info] Root hash: 44f0ed95d578c427dd6c8b587fbd8fce522d0a7a6ea9ca78b00a415db7af198f
2025-02-25 18:51:07.314 [info] In the out of sync case.
2025-02-25 18:51:07.315 [info] [startSync]: ----------------------
syncing point nextSubtree{"relativePath":".","hash":"44f0ed95d578c427dd6c8b587fbd8fce522d0a7a6ea9ca78b00a415db7af198f"}
...
2025-02-25 18:51:07.316 [info] Waiting on semaphore to be released 1
2025-02-25 18:51:18.620 [info] Waiting on semaphore to be released 20
2025-02-25 18:51:18.815 [info] Waiting on semaphore to be released 1
2025-02-25 18:51:19.055 [info] setting numJobsToGo to 35567
2025-02-25 18:51:19.057 [info] [startSync]: numJobs: 35567
2025-02-25 18:51:20.969 [info] Uploading 35567 files.
2025-02-25 18:51:20.971 [info] Total number of files to embed: 72071
2025-02-25 18:51:20.971 [info] Not aborted
2025-02-25 18:51:20.971 [info] Starting while loop.
2025-02-25 18:51:20.971 [info] fileQueue.length: 35567
2025-02-25 18:51:20.971 [info] fileQueue.length: 35566
2025-02-25 18:51:20.971 [info] fileQueue.length: 35565
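
My read of the handshake part of that log, as a rough mental model only (the sketch below is my assumption about how a Merkle-tree sync would behave, not Cursor’s implementation): the client hashes the file tree, sends the root hash, and whenever the server answers STATUS_OUT_OF_SYNC it walks the mismatched subtrees and re-queues every file under them, which would explain fileQueue.length jumping back up to 35567 after the dispose/handshake cycle.

# Hypothetical sketch of a Merkle-tree re-sync; names are my own, not Cursor's.
import hashlib
from dataclasses import dataclass, field

@dataclass
class Node:
    relative_path: str
    children: dict = field(default_factory=dict)  # name -> Node
    file_hash: str | None = None                  # set only for leaf files

    def subtree_hash(self) -> str:
        if self.file_hash is not None:
            return self.file_hash
        h = hashlib.sha256()
        for name in sorted(self.children):
            h.update(name.encode())
            h.update(self.children[name].subtree_hash().encode())
        return h.hexdigest()

def requeue_out_of_sync(local: Node, remote: dict[str, str], queue: list[str]) -> None:
    """Descend into any subtree whose hash differs from the server's copy
    and put its files back on the upload queue."""
    if remote.get(local.relative_path) == local.subtree_hash():
        return  # subtree already in sync, nothing to upload
    if local.file_hash is not None:
        queue.append(local.relative_path)  # out-of-sync leaf -> re-upload
        return
    for child in local.children.values():
        requeue_out_of_sync(child, remote, queue)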

I’ve been seeing the same sort of pattern for maybe the last month or so, and our codebase is just shy of 4,000 files.

I will say that I’m personally working in a multi-root workspace. I know Cursor doesn’t handle that super well at the moment, but I’m wondering if others have a similar setup.

My workaround has been to put everything in a parent folder and add that as the first folder in the workspace file. That worked fine up until this issue, and Composer and Chat were both happy.

I’ve been spending some time trying to make sense of this, and I can pretty reliably get it to index the codebase just fine when I remove all the folders except the parent one from the workspace. Whenever I add any of the nested projects back in, though, it starts to fail (example below).
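
Concretely (the paths below are placeholders, not my real project layout), a workspace file reduced to just the parent folder indexes fine for me, and adding any nested project back in is what breaks it again:

// indexes fine for me: only the parent folder
{
  "folders": [
    { "path": "." }
  ]
}

// breaks indexing again as soon as any nested project is added back
{
  "folders": [
    { "path": "." },
    { "path": "apps/backend" }
  ]
}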

I’d also been using a multi-root workspace, and ran into enough difficulties that I ended up switching to a separate Cursor instance per application folder. That fixed all of these issues (indexing, Composer file-path confusion).

Same issue here.

===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 Total                1211       141158       121420         4476        15262
===============================================================================

My codebase only has 1,211 files to be indexed.

Confirmed: this is a bug and a serious regression. All indexing of multi-root workspaces is broken…

cc @danperks

Confirmed that this is related to multi-root workspaces. Sorry for not pinpointing that sooner, but I saw related threads where it was revealed that it’s already known that indexing doesn’t work with multi-root workspaces.

This is unfortunate, though, since indexing on multi-root workspaces used to work (or at least appeared to) a few months ago.

This happens to me as well; I’d really love for Cursor to be able to work on my backend AND frontend concurrently. Is there a fix for this coming?

I’m seeing the same behavior. I’m in a big mono-repo. Within a minute of indexing, the output window starts filling up with:

2025-03-02 13:02:09.019 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/jsonext/bench_test.go error: Bad unexpected error
2025-03-02 13:02:09.019 [info] fileQueue.length: 6846
2025-03-02 13:02:09.019 [info] Waiting for jobs to finish. currentIndexingJobs.length: 18 concurrentUploads.current: 17.5 fileQueue.length: 6846
2025-03-02 13:02:09.055 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/jsonext/nan_test.go error: Bad unexpected error
2025-03-02 13:02:09.055 [info] fileQueue.length: 6846
2025-03-02 13:02:09.055 [info] Waiting for jobs to finish. currentIndexingJobs.length: 18 concurrentUploads.current: 17.5 fileQueue.length: 6846
2025-03-02 13:02:09.092 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/jsonext/encode.go error: Bad unexpected error
2025-03-02 13:02:09.092 [info] fileQueue.length: 6846
2025-03-02 13:02:09.092 [info] Waiting for jobs to finish. currentIndexingJobs.length: 18 concurrentUploads.current: 17.5 fileQueue.length: 6846
2025-03-02 13:02:09.128 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/encoding.go error: Bad unexpected error

I also see a handful of files that stay static in the settings UI, with a bunch of files streaming below them. It seems to spin potentially forever, so I’m just pausing indexing for now.

Same for me! Problems with codebase indexing:

workbench.desktop.main.js:636 ConnectError: [aborted] read ECONNRESET
    at t (workbench.desktop.main.js:2318:112233)
    at async Object.checkFeatureStatus (workbench.desktop.main.js:459:133010)
    at async _Lt.maybeRefreshFeatureStatus (workbench.desktop.main.js:636:20572)
    at async workbench.desktop.main.js:2894:14889
workbench.desktop.main.js:965 [composer] ToolFormer: error in toolWrappedStream undefined
workbench.desktop.main.js:591 ConnectError: [aborted] read ECONNRESET
    at yhs.$streamAiConnect (workbench.desktop.main.js:2318:110714)
    at async workbench.desktop.main.js:459:133880
    at async p_t.toolWrappedStream (workbench.desktop.main.js:965:4404)
    at async jC (workbench.desktop.main.js:591:4354)
    at async yc.handleStreamComposer (workbench.desktop.main.js:922:2033)
    at async vLt.streamResponse (workbench.desktop.main.js:591:13952)
    at async dVt.<anonymous> (workbench.desktop.main.js:2976:872)
    at async hVt.<anonymous> (workbench.desktop.main.js:2935:2413)
    at async KR.processCodeBlocks (workbench.desktop.main.js:948:1926)
    at async workbench.desktop.main.js:1918:21855
workbench.desktop.main.js:1918 [composer] Error in AI response: ConnectError: [aborted] read ECONNRESET
    at yhs.$streamAiConnect (workbench.desktop.main.js:2318:110714)
    at async workbench.desktop.main.js:459:133880
    at async p_t.toolWrappedStream (workbench.desktop.main.js:965:4404)
    at async jC (workbench.desktop.main.js:591:4354)
    at async yc.handleStreamComposer (workbench.desktop.main.js:922:2033)
    at async vLt.streamResponse (workbench.desktop.main.js:591:13952)
    at async dVt.<anonymous> (workbench.desktop.main.js:2976:872)
    at async hVt.<anonymous> (workbench.desktop.main.js:2935:2413)
    at async KR.processCodeBlocks (workbench.desktop.main.js:948:1926)
    at async workbench.desktop.main.js:1918:21855
workbench.desktop.main.js:1918 [composer] Failed to get complete AI response
workbench.desktop.main.js:953 Error checking file existence: Error: Unable to resolve filesystem provider with relative file path 'cursor.aisettings:cursor/aisettings'
workbench.desktop.main.js:953 Error checking file existence: Error: Unable to resolve filesystem provider with relative file path 'cursor.aisettings:cursor/aisettings'
workbench.desktop.main.js:636 ConnectError: [aborted] read ECONNRESET
    at t (workbench.desktop.main.js:2318:112233)
    at async Object.checkFeatureStatus (workbench.desktop.main.js:459:133010)
    at async _Lt.maybeRefreshFeatureStatus (workbench.desktop.main.js:636:20572)
    at async workbench.desktop.main.js:1918:20540
workbench.desktop.main.js:953 Error checking file existence: Error: Unable to resolve filesystem provider with relative file path 'cursor.aisettings:cursor/aisettings'
workbench.desktop.main.js:459 Failed to refresh server config from server: ConnectError: [aborted] read ECONNRESET
    at t (workbench.desktop.main.js:2318:112233)
    at async Object.getServerConfig (workbench.desktop.main.js:459:133010)
    at async ■■■.forceRefreshServerConfig (workbench.desktop.main.js:459:137736)

We’ve been seeing the same issue for the past month or so: 11k files, and it used to work absolutely fine until recently. We tried reducing the file count below 10k by ignoring certain projects, with no luck. We use a Rush workspace for our monorepo.

Is there any update on this? While it doesn’t technically cripple Cursor, it’s a pretty serious regression and makes it really frustrating to use.