Issue with chat summary when using MCP tools

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

There’s an issue with the summarization feature whenever it’s working with MCP agents.
Whenever MCP agents read logs / Confluence pages / tickets and the summarization step starts in the middle of the agent’s work, the generated summary mentions some kind of README.md and the agent loses all of its context…

Steps to Reproduce

Set up GitHub - eniayomi/gcp-mcp (a Model Context Protocol (MCP) server that enables AI assistants like Claude to interact with your Google Cloud Platform environment) and Atlassian’s Remote Model Context Protocol (MCP) Server, then give them commands to read from documentation and logs, ensuring that the logs and docs are big enough to fill up the context. When the summarization step happens, it usually breaks and defaults to some kind of README that isn’t even mentioned in the conversation.

Expected Behavior

The summary should keep proper log references (perhaps save logs to intermediate files to avoid filling up the context needlessly, then grep over those files, or something like that).
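To make the suggested workaround concrete, here is a rough sketch of the "spill big tool output to disk, then grep it back on demand" idea. `save_log` and `grep_log` are hypothetical helper names for illustration, not Cursor or MCP APIs:

```python
import os
import re
import tempfile
from pathlib import Path

def save_log(text: str) -> Path:
    """Spill a large tool output to an intermediate file on disk so it
    does not have to live in the model's context window."""
    fd, name = tempfile.mkstemp(suffix=".log", text=True)
    os.close(fd)  # mkstemp returns an open descriptor; close it before writing via Path
    path = Path(name)
    path.write_text(text)
    return path

def grep_log(path: Path, pattern: str) -> list[str]:
    """Pull back only the lines matching `pattern`, so relevant snippets
    can re-enter the context instead of the whole log."""
    regex = re.compile(pattern)
    return [line for line in path.read_text().splitlines() if regex.search(line)]
```

The summarizer would then only ever see the file path and a handful of matching lines, not the full log dump.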

Screenshots / Screen Recordings

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.7.33 (Universal)
VSCode Version: 1.99.3
Commit: a84f941711ad680a635c8a3456002833186c4840
Date: 2025-10-03T03:28:06.574Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 25.0.0

For AI issues: which model did you use?

Claude Sonnet 4 (Thinking)
Claude Sonnet 4.5 (Thinking)
GPT-5 (Thinking)

For AI issues: add Request ID with privacy disabled

67d65a3d-c8ca-44a5-a5aa-a56a708faecd

Does this stop you from using Cursor

No - Cursor works, but with this issue

Found a “hack” to work around this by adding an explicit instruction about context awareness

Thanks for sharing this; that’s a pretty nice hack. You could also try adding an instruction asking the agent itself to store the logs in files so that it can grep over them later, as you’ve mentioned. You can also keep it as a Cursor rule so that you can reuse it.
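For reference, a reusable Cursor rule along those lines might look something like this (e.g. saved under `.cursor/rules/`; the filename, frontmatter fields, and exact wording here are just an illustration, not an official template):

```
---
description: Keep large MCP tool output out of the chat context
alwaysApply: true
---

When an MCP tool returns large logs, documentation pages, or tickets,
save the raw output to a local file first, then grep over that file to
pull back only the relevant lines, instead of keeping the full output
in the conversation context.
```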


That’s a good idea - I’ll try that out


Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

The summary lobotomy strikes again… this time worse than ever. Using Sonnet 4.5 Max with Chrome DevTools, it accessed Chrome and then immediately went into summarizing. Afterwards it completely lost the chat context:


I’m ready to help you with your project! I can see you have a large workspace with multiple projects including:

The summary mentions creating a README.md, but I notice your workspace already has several README files in different subdirectories.
What would you like me to help you with today? For example:

  • Create or update a README for a specific project
  • Create a main README for the entire workspace
  • Help with code in one of the existing projects
  • Something else entirely

Please let me know what you’d like to work on!


Steps to Reproduce

Use Cursor, watch it annihilate all your work in one random bomb of undesirable summarization.

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.7.52
VSCode Version: 1.99.3
Commit: 9675251a06b1314d50ff34b0cbe5109b78f848c0
Date: 2025-10-17T01:41:03.967Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.6.0

For AI issues: which model did you use?

Sonnet 4.5 Max (Thinking)

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, thanks for the report. We’re seeing similar cases where summarization loses chat context and defaults to README.

Could you share:

  • The Request ID for this chat with privacy mode turned off
  • The context percentage shown before summarization started
  • Whether “Max Mode” was enabled in your model settings

In the meantime, a useful workaround: add an explicit rule at the start of the chat, for example: “Maintain full awareness of previous conversation context. Do not reset or forget prior work when summarizing.”

Thanks Dean, will give that a shot. You know, a first-pass summarizer that was algorithmic rather than LLM-driven would get you a very long way (e.g. extract data from images and store the extraction so the binary data can be tossed; remove duplicate log entries in huge log pastes; etc.).
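The "remove duplicate log entries" part of such an algorithmic pre-pass is cheap to do before any LLM sees the text. A minimal sketch, collapsing repeated lines while keeping first-seen order and annotating the repeat count (the `(xN)` marker format is just an example):

```python
from collections import Counter

def dedupe_log(text: str) -> str:
    """Collapse duplicate lines in a huge log paste, keeping first-seen
    order and annotating how often each line repeated."""
    counts = Counter(text.splitlines())
    seen: set[str] = set()
    out: list[str] = []
    for line in text.splitlines():
        if line in seen:
            continue  # already emitted with its count
        seen.add(line)
        n = counts[line]
        out.append(line if n == 1 else f"{line}  (x{n})")
    return "\n".join(out)
```

A real pre-pass would probably normalize timestamps first so near-identical entries also collapse, but even this exact-match version can shrink a pathological log paste dramatically.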

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.