MCP file-read tool endlessly chains temp files for large outputs, making real content inaccessible v2

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Hello everyone, regarding this post here: I had already written about this issue back then, but there was never a further response.

The infinite loop itself is solved in principle, since the output is now actually read. However, the same architectural approach has now run into a new problem: there is a maximum limit when reading files, which once again constrains Cursor within its own architecture.

In other words: Cursor imposes a limit in the Read File tool, and users are thereby forced to adapt to Cursor. Using other MCP tools to read files ourselves is effectively unsupported because of this limitation.

Concrete example:

  • If we have an MCP tool that reads files, and we call it, the output is written into a text file.
  • This text file, in turn, cannot be accessed via our MCP tools, because otherwise we would end up in an infinite loop again: every time the file is read, its output would be written to yet another text file.
  • Cursor then routes the read through its internal Read File tool instead. That is where the limitation bites: the tool has a maximum of 100,000 characters.
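
The three steps above can be sketched as follows. This is a hypothetical illustration of the reported behavior, not Cursor's actual internals; `MAX_READ_CHARS`, `write_mcp_output_to_temp`, and `builtin_read_file` are illustrative names.

```python
import os
import tempfile

MAX_READ_CHARS = 100_000  # the built-in Read File cap described in the report

def write_mcp_output_to_temp(output: str) -> str:
    """Large MCP tool output is written to a temp file instead of being
    returned inline (step 1 of the example)."""
    fd, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(output)
    return path

def builtin_read_file(path: str) -> str:
    """The internal Read File tool truncates at the character cap, so
    everything beyond it is unreachable (step 3 of the example)."""
    with open(path, encoding="utf-8") as f:
        return f.read()[:MAX_READ_CHARS]

large_output = "x" * 112_000  # e.g. a 112,000-character directory tree
temp_path = write_mcp_output_to_temp(large_output)
visible = builtin_read_file(temp_path)
# 12,000 characters of real content are silently cut off and cannot be
# reached by any tool, since external MCP reads are routed back here.
```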

I had already written in the previous post that we as developers do not want to be limited by Cursor in what we call with our MCP tools. MCP tools are our freedom to do whatever we want.

For months, nothing has changed here. The only improvement is that we no longer get stuck in an infinite loop because of this bug. In a way it has even gotten worse, because I simply cannot read files via external MCP tools: I am forced to use Cursor's Read File tool, and now hit its limitation.

That means: if the output in the text file exceeds 100,000 characters, I cannot read the file with Cursor's Read File tool. And as described above, I cannot use my own MCP tools either, because then I end up in the infinite loop again, which leads straight back to Read File.

This entire architectural concept is limiting and extremely poorly designed.

Nothing has changed since my post months ago regarding the architectural limitation.

I understand the rationale behind writing the output to a local physical file, and from a context-engineering perspective I believe that is architecturally the right approach because re-anchoring can happen. I’m not opposed to that system.

The problem is the side effects it creates: you get redirected, in an infinite loop, back to the internal Cursor Read File tool. That means we have a fundamental architectural issue here, and it needs higher priority given how developers want to work.

As I already wrote back then, there needs to be an option to disable these behaviors. Whether logs are written to a temporary file and/or read via Cursor's Read File tool must be configurable as an on/off setting. It must not be a hardcoded option that cannot be changed, because that is exactly how we end up in situations like this.
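
As a sketch of what such settings could look like, in the style of a `settings.json` entry: none of these keys exist in Cursor today, they only illustrate the on/off switches being requested here.

```json
{
  "cursor.mcp.writeLargeOutputToTempFile": false,
  "cursor.mcp.readTempFileWithBuiltinTool": false,
  "cursor.readFile.maxChars": -1
}
```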

I also previously raised several points about the Read File tool. It can't be that the tool is orchestrated via system prompts so that everything is always read in chunks: partially, step by step, line by line (e.g., blocks of 200 to 400 lines). This creates constant conflicts, because we have to use our own system prompts to work around Cursor's internal system prompts that orchestrate Read File, just so files can be read in full. And even that doesn't work, because the Read File tool has internal limits.

We need more options for the Read File tool in the settings. We don't need maximum limits, and we don't need restrictions on how we work with our MCP tools.

https://forum.cursor.com/t/mcp-file-read-tool-endlessly-chains-temp-files-for-large-outputs-making-real-content-inaccessible/142720/5

Steps to Reproduce

See the description above.

Operating System

Windows 10/11

Version Information

Version: 2.5.26 (user setup)
VSCode Version: 1.105.1
Commit: 7d96c2a03bb088ad367615e9da1a3fe20fbbc6a0
Date: 2026-02-26T04:57:56.825Z
Build Type: Stable
Release Track: Early Access
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.26200

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, I see this is a follow-up to your previous report. The infinite loop is fixed, but the main issue has shifted. Now any MCP output over 100,000 characters becomes a dead end because the built-in Read File can’t handle it, and external MCP tools get routed through the same temp-file pipeline.

I’ll pass this to the team as a feature request: configurable temp-file behavior and removing the Read File limit. I don’t have an ETA yet, but your detailed report really helps us prioritize.

For now, the most practical workaround is to filter inside your MCP tool before returning the result. For example, for a directory tree, filter out REFERENCE_001 directories on the server side instead of returning the full 112,000-character tree. It’s not ideal since it pushes work onto your tools, but it fully avoids the 100,000-character cap.

Let me know if you have any questions.


Thank you very much for the quick feedback. Yes, I’ll try to solve it differently as a temporary workaround, but thanks again for the fast response and for what you wrote.

If you implement it in the way you described and address the requests accordingly, I think it would help a lot of developers. I’ll keep an eye on it—thank you very much. <3

I think the generally better approach is always to offer options in the settings, rather than imposing limitations.
