Connection error when asking about a long file?

I’m using Claude-3.7-sonnet in Agent mode: I give it a file and ask for changes to that file, and there is a chance I get

Connection failed. If the problem persists, please check your internet connection or VPN

( Request ID: 1ca17f85-e1c4-4949-bde8-b69691c42a92)

Is there a solution to this problem? It has been occurring very frequently recently.

When I ask about other files, the problem goes away.


Hi, please open your console (in the command palette, Developer: Toggle Developer Tools) and send us screenshots of any errors you see there!

You can also provide information about your device via Help > About, to make it easier for the team to track down the problem.

I encountered the same issue and resolved it by splitting the file into smaller segments, which allowed normal processing. It appears that Cursor may have a per-request memory constraint, so larger files could trigger service timeouts due to excessive resource demands.
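The splitting workaround above can be scripted. A minimal sketch, assuming a plain split by line count is enough for each piece to be processed on its own (the 500-line chunk size and the `.partNNN` naming are arbitrary choices, not anything Cursor requires):

```python
from pathlib import Path

def split_file(path: str, lines_per_chunk: int = 500) -> list[str]:
    """Split a large source file into numbered chunk files next to the original."""
    src = Path(path)
    lines = src.read_text(encoding="utf-8").splitlines(keepends=True)
    chunk_paths = []
    for i in range(0, len(lines), lines_per_chunk):
        # e.g. big.py -> big.part000.py, big.part001.py, ...
        chunk = src.with_name(f"{src.stem}.part{i // lines_per_chunk:03d}{src.suffix}")
        chunk.write_text("".join(lines[i:i + lines_per_chunk]), encoding="utf-8")
        chunk_paths.append(str(chunk))
    return chunk_paths
```

Because `splitlines(keepends=True)` preserves line endings, concatenating the chunks reproduces the original file byte-for-byte.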

My two cents: it’s probably not the API requests to the LLM that are failing, but the diff algorithm, which for some reason can’t handle more than ~1200 lines of code changes without becoming very slow.
Maybe a hardcoded RAM usage limit for VSCode subservices or something? It looks like it, at least.
Models like Gemini supposedly have a 1-million-token context window, which is theoretically hundreds of thousands of lines of code, so I don’t think the bottleneck is the LLM itself, but rather the software/VSCode infrastructure that processes all this input/output, code changes, etc.
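The “hundreds of thousands of lines” figure is a back-of-envelope estimate. A quick sketch, assuming roughly 8 tokens per line of source code on average (this ratio is an assumption and varies a lot by language and style):

```python
# Rough estimate of how many lines of code fit in a 1M-token context window.
context_tokens = 1_000_000
tokens_per_line = 8           # assumed average for typical source code
lines_of_code = context_tokens // tokens_per_line
print(lines_of_code)          # → 125000
```

So even at a pessimistic tokens-per-line ratio, the window covers far more than the ~1200 lines where the diff slowdown reportedly starts.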