Looks like you’re hitting the file limit, since your queue shows 11,584 files - this is above our recommended 10k file limit. Try adding some folders to your .cursorignore file to get under that limit and indexing should work properly.
You can create a .cursorignore in your project root and add patterns like:
node_modules/
dist/
build/
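.cursorignore uses the same pattern syntax as .gitignore, so if your repo has other large generated or vendored folders, those can be excluded too. For example (these folder names are just common examples, adjust them to your project):
.venv/
coverage/
vendor/
*.min.js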
I used to be able to index 70,000 files up until whatever changed on the backend a few months ago. I remember reading that the limit was 100,000 files. FWIW, I also see this same error whether I attempt to index 4,000 files or 40,000. Can we get some actual insight into how to fix this?
“Try adding some folders to your .cursorignore file to get under that limit and indexing should work properly” is not accurate, since the 11k file count is not the problem here.
I typically see it index a portion of the files, then get stuck and fall into a loop on the rest:
2025-02-25 10:41:51.478 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.484 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/client_async.pyi error: Bad unexpected error
2025-02-25 10:41:51.484 [info] fileQueue.length: 2257
2025-02-25 10:41:51.484 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.488 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/consumer/group.pyi error: Bad unexpected error
2025-02-25 10:41:51.488 [info] fileQueue.length: 2257
2025-02-25 10:41:51.488 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.489 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/consumer/__init__.pyi error: Bad unexpected error
2025-02-25 10:41:51.489 [info] fileQueue.length: 2257
2025-02-25 10:41:51.489 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.491 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/consumer/fetcher.pyi error: Bad unexpected error
2025-02-25 10:41:51.491 [info] fileQueue.length: 2257
2025-02-25 10:41:51.491 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.493 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/producer/future.pyi error: Bad unexpected error
2025-02-25 10:41:51.493 [info] fileQueue.length: 2257
2025-02-25 10:41:51.493 [info] Waiting for jobs to finish. currentIndexingJobs.length: 1 concurrentUploads.current: 1 fileQueue.length: 2257
2025-02-25 10:41:51.495 [info] Completed job unsuccessfully, will retry: ./stubs/kafka/producer/kafka.pyi error: Bad unexpected error
2025-02-25 10:41:51.495 [info] fileQueue.length: 2257
(I changed the filenames for privacy.)
But it shows that same log line for what appears to be every file. You can see from the timestamps that it does this very quickly, and it doesn’t appear to stop (it keeps going for the entire time I have Cursor open, i.e. for hours). There’s definitely some sort of retry loop happening here.
I’ve been seeing the same sort of pattern for maybe the last month or so, and our codebase is just shy of 4,000 files.
I will say that I’m personally working in a multi-root workspace. I know Cursor doesn’t handle that super well at the moment, but I’m wondering if others have a similar setup.
My workaround has been to keep everything in a parent folder and add that as the first folder in the workspace file. That worked fine up until this, and Composer and Chat were both happy.
I’ve been spending some time trying to make sense of this, and I can pretty reliably get it to index the codebase just fine when I remove all the folders except the parent one from the workspace. Whenever I add any of the nested projects back in, though, that’s when it starts to fail.
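For reference, the workspace file is structured roughly like this (the nested folder names below are placeholders, not my actual paths); indexing is fine with only the first entry, and starts failing once any of the nested project folders are added back:
{
  "folders": [
    { "path": "." },
    { "path": "services/api" },
    { "path": "services/frontend" }
  ]
}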
I’d also been using a multi-root workspace, and ran into enough difficulties that I ended up switching to a new Cursor instance per application folder. This fixed all of these issues (indexing, Composer file path confusion).
Confirmed that this is related to multi-root workspaces. Sorry for not pinning that down sooner, but I found related threads where it came out that indexing is already known not to work with multi-root workspaces.
This is unfortunate, though, since indexing on multi-root workspaces used to work (or at least appeared to) a few months ago.
I’m seeing the same behavior. I’m in a big mono-repo. Within a minute of starting indexing, the output window starts filling up with:
2025-03-02 13:02:09.019 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/jsonext/bench_test.go error: Bad unexpected error
2025-03-02 13:02:09.019 [info] fileQueue.length: 6846
2025-03-02 13:02:09.019 [info] Waiting for jobs to finish. currentIndexingJobs.length: 18 concurrentUploads.current: 17.5 fileQueue.length: 6846
2025-03-02 13:02:09.055 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/jsonext/nan_test.go error: Bad unexpected error
2025-03-02 13:02:09.055 [info] fileQueue.length: 6846
2025-03-02 13:02:09.055 [info] Waiting for jobs to finish. currentIndexingJobs.length: 18 concurrentUploads.current: 17.5 fileQueue.length: 6846
2025-03-02 13:02:09.092 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/jsonext/encode.go error: Bad unexpected error
2025-03-02 13:02:09.092 [info] fileQueue.length: 6846
2025-03-02 13:02:09.092 [info] Waiting for jobs to finish. currentIndexingJobs.length: 18 concurrentUploads.current: 17.5 fileQueue.length: 6846
2025-03-02 13:02:09.128 [info] Completed job unsuccessfully, will retry: ./gorilla/pkg/encoding/encoding.go error: Bad unexpected error
I’m also seeing a handful of files that stay static in the settings UI, with a bunch of other files streaming below them. It seems to spin potentially forever, so I’m just pausing indexing for now.
workbench.desktop.main.js:636 ConnectError: [aborted] read ECONNRESET
at t (workbench.desktop.main.js:2318:112233)
at async Object.checkFeatureStatus (workbench.desktop.main.js:459:133010)
at async _Lt.maybeRefreshFeatureStatus (workbench.desktop.main.js:636:20572)
at async workbench.desktop.main.js:2894:14889
workbench.desktop.main.js:965 [composer] ToolFormer: error in toolWrappedStream undefined
workbench.desktop.main.js:591 ConnectError: [aborted] read ECONNRESET
at yhs.$streamAiConnect (workbench.desktop.main.js:2318:110714)
at async workbench.desktop.main.js:459:133880
at async p_t.toolWrappedStream (workbench.desktop.main.js:965:4404)
at async jC (workbench.desktop.main.js:591:4354)
at async yc.handleStreamComposer (workbench.desktop.main.js:922:2033)
at async vLt.streamResponse (workbench.desktop.main.js:591:13952)
at async dVt.<anonymous> (workbench.desktop.main.js:2976:872)
at async hVt.<anonymous> (workbench.desktop.main.js:2935:2413)
at async KR.processCodeBlocks (workbench.desktop.main.js:948:1926)
at async workbench.desktop.main.js:1918:21855
workbench.desktop.main.js:1918 [composer] Error in AI response: ConnectError: [aborted] read ECONNRESET
at yhs.$streamAiConnect (workbench.desktop.main.js:2318:110714)
at async workbench.desktop.main.js:459:133880
at async p_t.toolWrappedStream (workbench.desktop.main.js:965:4404)
at async jC (workbench.desktop.main.js:591:4354)
at async yc.handleStreamComposer (workbench.desktop.main.js:922:2033)
at async vLt.streamResponse (workbench.desktop.main.js:591:13952)
at async dVt.<anonymous> (workbench.desktop.main.js:2976:872)
at async hVt.<anonymous> (workbench.desktop.main.js:2935:2413)
at async KR.processCodeBlocks (workbench.desktop.main.js:948:1926)
at async workbench.desktop.main.js:1918:21855
workbench.desktop.main.js:1918 [composer] Failed to get complete AI response
workbench.desktop.main.js:953 Error checking file existence: Error: Unable to resolve filesystem provider with relative file path 'cursor.aisettings:cursor/aisettings'
workbench.desktop.main.js:953 Error checking file existence: Error: Unable to resolve filesystem provider with relative file path 'cursor.aisettings:cursor/aisettings'
workbench.desktop.main.js:636 ConnectError: [aborted] read ECONNRESET
at t (workbench.desktop.main.js:2318:112233)
at async Object.checkFeatureStatus (workbench.desktop.main.js:459:133010)
at async _Lt.maybeRefreshFeatureStatus (workbench.desktop.main.js:636:20572)
at async workbench.desktop.main.js:1918:20540
workbench.desktop.main.js:953 Error checking file existence: Error: Unable to resolve filesystem provider with relative file path 'cursor.aisettings:cursor/aisettings'
workbench.desktop.main.js:459 Failed to refresh server config from server: ConnectError: [aborted] read ECONNRESET
at t (workbench.desktop.main.js:2318:112233)
at async Object.getServerConfig (workbench.desktop.main.js:459:133010)
at async ■■■.forceRefreshServerConfig (workbench.desktop.main.js:459:137736)
We’re seeing the same issue for the past month or so. 11k files, and it used to work absolutely fine until recently. Tried reducing the file count below 10k by ignoring certain projects, but no luck. We use a Rush workspace for our mono-repo.
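For context, the ignore patterns we tried were along these lines (the project paths are placeholders; common/temp/ is Rush’s temporary install/build folder):
common/temp/
apps/legacy-portal/
libraries/internal-tooling/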