Where does the bug appear (feature/product)?
Cursor IDE
Describe the Bug
Summary
I’m using an MCP tool whose job is to read files and return their content into the LLM context. For large files, the tool does not return the content directly. Instead, it writes the content to a temporary file (e.g. under some .cursor/.../agent-tools/*.txt path) and only returns the path to that temp file.
If I then call the same MCP tool again on that returned temp-file path, the exact same behavior happens: the content is considered “too large”, it gets written into yet another temp file, and I get back only a new path. This can repeat indefinitely, effectively creating an endless chain of temp files, and I never get the actual content into the LLM context.
Actual Behavior
- Large file content is repeatedly redirected into temp files.
- Each attempt to read the temp file produces yet another temp file.
- The user cannot realistically access the content within the LLM session.
- This behavior effectively traps the user in a temp-file indirection loop for large files.
Why This Is a Problem (Architectural Limitation)
This behavior introduces serious limitations:
- It prevents real inspection and processing of large files inside the LLM workflow.
- It forces users into a pattern where they cannot complete their task for large inputs.
- It hides the actual content behind multiple layers of temp files, which is not user-friendly and makes debugging very difficult.
Request / Proposal
- Make the “write large output to temp file and return only a path” behavior opt-in, configurable in settings or via tool options.
- Provide a mode where:
  - The tool still reads the file, but returns actual content (possibly chunked or streamed) instead of always redirecting to temp files.
  - Or, at minimum, offer a way to explicitly override the temp-file behavior for specific calls.
This would avoid the endless temp-file loop and give users full control over how file content is surfaced in the session.
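A minimal sketch of what such an opt-in mode could look like. The function and option names (`read_file_for_context`, `inline`, `max_chunk_bytes`) are hypothetical, purely for illustration, and not part of any actual Cursor or MCP API:

```python
import tempfile

def read_file_for_context(path, inline=True, max_chunk_bytes=64_000):
    """Hypothetical sketch: return file content directly (chunked if large)
    instead of always redirecting it to a temp file."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        text = f.read()
    if inline:
        # Split into chunks the caller can surface into the context one by one.
        return [text[i:i + max_chunk_bytes]
                for i in range(0, len(text), max_chunk_bytes)]
    # Otherwise, fall back to today's behavior: write to a temp file
    # and return only its path.
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".txt",
                                      delete=False, encoding="utf-8")
    tmp.write(text)
    tmp.close()
    return tmp.name
```

The point is only that chunking (or streaming) lets large content actually reach the session, while the temp-file redirection remains available for callers who want it.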
Steps to Reproduce
- Use an MCP tool that reads files on a file whose content is large enough to trigger the “redirect to temp file” behavior.
- Observe that the tool:
  - Does not return the file content.
  - Returns only a path to a temporary file where the content was written.
- Call the same MCP file-read tool on that temp-file path.
- Observe that:
  - Again, the content is treated as too large.
  - A new temp file is created.
  - Only the new temp-file path is returned.
- Repeat → you end up in a loop where you always get a new temp-file path, but never the real content.
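The loop can be illustrated with a toy simulation. This is not Cursor's actual implementation; the size threshold and the `mcp_read` helper are assumptions made only to show why re-reading the returned path can never terminate:

```python
import tempfile

SIZE_LIMIT = 100  # bytes; an illustrative threshold, not Cursor's real value

def mcp_read(path):
    """Toy model of the observed behavior: content over the limit is written
    to a temp file and only the path is returned."""
    with open(path, "r", encoding="utf-8") as f:
        content = f.read()
    if len(content) <= SIZE_LIMIT:
        return {"content": content}
    tmp = tempfile.NamedTemporaryFile(mode="w", suffix=".txt",
                                      delete=False, encoding="utf-8")
    tmp.write(content)  # the temp file is the same size as the input,
    tmp.close()         # so reading it back trips the same limit again
    return {"temp_path": tmp.name}

# result = mcp_read(big_file)
# while "temp_path" in result:
#     result = mcp_read(result["temp_path"])  # never terminates
```

Because the temp file contains the full content, it is always at least as large as the original input, so every follow-up read takes the same "too large" branch.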
Expected Behavior
- I should have a reliable way to actually read the file content into the LLM context, even for large files.
- If there is a “redirect large output to temp file” feature, it should be:
  - Optional / configurable, not forced.
  - Clearly documented and opt-in, so users can decide whether they want this indirection or not.
  - Possible to disable when I explicitly need the content (e.g., for analysis, refactoring, or debugging).
Operating System
Windows 10/11
Current Cursor Version (Menu → About Cursor → Copy)
Version 2.0.77
VSCode Version 1.99.3
For AI issues: which model did you use?
GPT 5.1 High
Does this stop you from using Cursor
Yes - Cursor is unusable
