MCP file-read tool endlessly chains temp files for large outputs, making real content inaccessible

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Summary
I’m using an MCP tool whose job is to read files and return their content into the LLM context. For large files, the tool does not return the content directly. Instead, it writes the content to a temporary file (e.g. under some .cursor/.../agent-tools/*.txt path) and only returns the path to that temp file.

If I then call the same MCP tool again on that returned temp-file path, the exact same behavior happens: the content is considered “too large”, it gets written into yet another temp file, and I get back only a new path. This can repeat indefinitely, effectively creating an endless chain of temp files, and I never get the actual content into the LLM context.

Actual Behavior

  • Large file content is repeatedly redirected into temp files.
  • Each attempt to read the temp file produces yet another temp file.
  • The user cannot realistically access the content within the LLM session.
  • This behavior effectively traps the user in a temp-file indirection loop for large files.

Why This Is a Problem (Architectural Limitation)

Architecturally, this behavior introduces serious limitations:

  • It prevents real inspection and processing of large files inside the LLM workflow.
  • It forces users into a pattern where they cannot complete their task for large inputs.
  • It hides the actual content behind multiple layers of temp files, which is not user-friendly and makes debugging very difficult.

Request / Proposal

  • Make the “write large output to temp file and return only a path” behavior opt-in, configurable in settings or via tool options.
  • Provide a mode where:
    • The tool still reads the file, but returns actual content (possibly chunked or streamed) instead of always redirecting to temp files.
    • Or at least offer a way to explicitly override the temp-file behavior for specific calls.

This would avoid the endless temp-file loop and give users full control over how file content is surfaced in the session.
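The proposal above can be made concrete with a short sketch. This is purely illustrative: the option names (`offload`, `inline_limit`) are hypothetical, not real Cursor settings, and the return shapes are invented for the example.

```python
import os
import tempfile

def read_file_tool(path: str, inline_limit: int = 64_000, offload: bool = True):
    """Read a file and either offload it (current behavior) or return
    the actual content in chunks (proposed opt-in behavior)."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        content = f.read()
    if offload and len(content) > inline_limit:
        # current behavior: write a temp file and return only its path,
        # so the content never reaches the LLM context
        fd, tmp = tempfile.mkstemp(suffix=".txt")
        with os.fdopen(fd, "w", encoding="utf-8") as out:
            out.write(content)
        return {"type": "path", "path": tmp}
    # proposed behavior: return the real content, split into chunks
    # the client can stream into the context
    chunks = [content[i:i + inline_limit]
              for i in range(0, len(content), inline_limit)]
    return {"type": "content", "chunks": chunks}

# demo: a 10-byte file read with a 4-byte limit and offloading disabled
fd, demo = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w", encoding="utf-8") as f:
    f.write("a" * 10)
inline = read_file_tool(demo, inline_limit=4, offload=False)
```

The key point is the `offload` flag: with it set to `False`, the caller always gets content back, just possibly in several chunks.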

Steps to Reproduce

  1. Use an MCP tool that reads files on a file whose content is large enough to trigger the “redirect to temp file” behavior.
  2. Observe that the tool:
    • Does not return the file content.
    • Returns only a path to a temporary file where the content was written.
  3. Call the same MCP file-read tool on that temp-file path.
  4. Observe that:
    • Again, the content is treated as too large.
    • A new temp file is created.
    • Only the new temp-file path is returned.
  5. Repeat → you end up in a loop where you always get a new temp-file path, but never the real content.
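The loop in the steps above can be simulated in a few lines. The threshold value here is a placeholder, since the real cutoff is unknown:

```python
import os
import tempfile

LIMIT = 100  # hypothetical offload threshold in bytes; the real cutoff is unknown

def mcp_read(path):
    """Simulates the observed behavior: any content over LIMIT is written
    to a fresh temp file and only the new path is returned."""
    with open(path, "rb") as f:
        data = f.read()
    if len(data) > LIMIT:
        fd, tmp = tempfile.mkstemp(suffix=".txt")
        with os.fdopen(fd, "wb") as out:
            out.write(data)
        return "path", tmp  # the content itself never reaches the caller
    return "content", data

# a file just over the threshold triggers the loop
fd, big = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "wb") as out:
    out.write(b"x" * 500)

target, kinds = big, []
for _ in range(5):
    kind, target = mcp_read(target)
    kinds.append(kind)
# every iteration yields another temp-file path; "content" never appears
```

No matter how many times the returned path is fed back in, the result is another path, which is exactly the endless chain described above.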

Expected Behavior

  • I should have a reliable way to actually read the file content into the LLM context, even for large files.
  • If there is a “redirect large output to temp file” feature, it should be:
    • Optional / configurable, not forced.
    • Clearly documented and opt-in, so users can decide whether they want this indirection or not.
    • Possible to disable when I explicitly need the content (e.g., for analysis, refactoring, or debugging).

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version 2.0.77
VSCode Version 1.99.3

For AI issues: which model did you use?

GPT 5.1 High

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, thanks for the report. This is a unique issue that needs investigation from the engineering team.

To help us debug this, could you please share:

  • Which MCP server you’re using (name and configuration from your mcp.json)
  • The approximate file size where the temp file behavior starts happening
  • MCP logs:
    Ctrl+Shift+U → select “MCP Logs” in the dropdown → copy the relevant log entries that show the temp file being created
  • Whether this happens with Cursor’s built-in Read File tool or only with your custom MCP tool

If you can reliably reproduce the issue with files of a certain size, that would be especially helpful. Once I have this information, I’ll pass it on to the engineering team for investigation.

“Which MCP server you’re using (name and configuration from your mcp.json)”

    "filesystem-extended": {
      "command": "node",
      "args": [
        "C:\\Projects\\mcp\\server\\system\\files\\mcp-filesystem-extended\\dist\\index.js",
        "c:\\Projects",
        "C:\\Projects",
        "C:\\git",
        "c:\\git",
        "c:\\Users\\test\\.cursor"
      ]
    }

“The approximate file size where the temp file behavior starts happening”

  • Large output has been written to: c:\Users\test\.cursor\projects\c-git-test\agent-tools\e6ed6a03-c8b1-488b-9578-9f343b8bc42d.txt (97.2 KB, 1363 lines)

The issue is this: my custom tool batch_read_files, which reads multiple files in one call and worked flawlessly for months under Cursor’s old behavior, now breaks ever since the system started offloading large outputs into temporary files.
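For context, a batch-read tool of the kind described is presumably little more than the following sketch (illustrative only, not the author’s actual implementation):

```python
import os
import tempfile

def batch_read_files(paths):
    """Read several files in one call and return their content keyed by
    path, so everything lands in the LLM context in a single tool call."""
    results = {}
    for p in paths:
        try:
            with open(p, "r", encoding="utf-8", errors="replace") as f:
                results[p] = f.read()
        except OSError as e:
            # report per-file errors instead of failing the whole batch
            results[p] = f"<error: {e}>"
    return results

# demo with two small files and one missing path
fd, file_a = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w", encoding="utf-8") as f:
    f.write("alpha")
fd, file_b = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w", encoding="utf-8") as f:
    f.write("beta")
out = batch_read_files([file_a, file_b, "no-such-file.txt"])
```

A tool like this only works if its combined output actually reaches the context; once the host intercepts large outputs and substitutes a temp-file path, the whole point of batching is lost.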

Suddenly, when my tool tries to read these files, Cursor returns a message telling the LLM it cannot read the file directly and must call the tool again. From that moment on, it’s pure dice-rolling whether the LLM switches to Cursor’s internal read_file tool to read the output or keeps trying my batch_read_files tool. Either path is a mess.

If I keep relying on my own MCP tool to read the file, I end up trapped in infinite loops — which has already happened more than once. Trivial to reproduce:
Just use any custom MCP tool that reads a file. It will blow up the same way.

This means one thing:
The offloading of large outputs into temporary directories MUST be optional.

Right now it’s forced on the user, and that’s unacceptable. It breaks the developer experience by forcing us to interact with temporary files we never asked for. It makes zero sense to read a file, have Cursor dump it into a temp directory, and then read it again: that turns one operation into two.

The expected behavior MUST match how it worked before.
If some users want temp-file offloading, fine — give them the option.
But don’t force it on everyone.

Since Cursor never offered a native way to read multiple files in one request, developers have always relied on their own MCP tools. And now those tools are being sabotaged because Cursor’s behavior has changed in ways we can’t control.

To make matters worse, Cursor’s built-in read_file tool requires hacky prompt tricks to avoid chunked line-reading, since it defaults to reading files only in pieces. From day one, Cursor made developers jump through hoops to ensure full-file reads.

We cannot let Cursor drift further in this direction — limiting developers by:

  • forcing chunked reads
  • forcing large outputs into temp files
  • forcing MCP tools to double-read files

MCP tools exist precisely so developers can perform operations Cursor itself does not support. They should not be crippled by system-level behavior that we cannot override.

And regarding the MCP logs: I checked them. They show nothing. If “MCP logs” refer to the STDOUT of my MCP tools, that would explain it — my tool doesn’t emit logs.


This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.