There’s an issue with the summarization feature whenever it’s used with MCP agents.
Whenever an MCP agent reads logs / Confluence pages / tickets and a summarization step starts in the middle of the agent’s work, the generated summary mentions some kind of README.md and the agent loses all of its context…
The summary should keep proper log references (maybe save logs to intermediate files to avoid filling up the context needlessly, then grep those files, or something like that).
Thanks for sharing this, that’s a pretty nice hack too. You could also try adding an instruction asking the agent itself to store the logs in files so that it can grep over them later, as you mentioned. You could also keep it as a Cursor rule so you can reuse it.
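For anyone curious what that workaround looks like in practice, here’s a minimal sketch of the idea — dump bulky tool output to a scratch file and search it instead of keeping it all in the chat context. The directory and helper names here are made up for illustration, not anything Cursor ships:

```python
import re
from pathlib import Path

LOG_DIR = Path("agent_logs")  # hypothetical scratch directory for tool output

def save_log(name: str, content: str) -> Path:
    """Write bulky tool output to a file instead of keeping it in context."""
    LOG_DIR.mkdir(exist_ok=True)
    path = LOG_DIR / f"{name}.log"
    path.write_text(content, encoding="utf-8")
    return path

def grep_log(path: Path, pattern: str) -> list[str]:
    """Return only the lines matching the pattern, grep-style."""
    regex = re.compile(pattern)
    return [
        line
        for line in path.read_text(encoding="utf-8").splitlines()
        if regex.search(line)
    ]
```

The agent then only quotes the handful of matching lines back into the conversation, so a summarization pass has far less to mangle.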
The summary lobotomy strikes again… This time worse than ever. Using sonnet-4.5 max with Chrome DevTools, it accessed Chrome then immediately went into summarizing. Afterwards it completely lost the chat context:
I’m ready to help you with your project! I can see you have a large workspace with multiple projects including:
The summary mentions creating a README.md, but I notice your workspace already has several README files in different subdirectories.
What would you like me to help you with today? For example:
Create or update a README for a specific project
Create a main README for the entire workspace
Help with code in one of the existing projects
Something else entirely
Please let me know what you’d like to work on!
Steps to Reproduce
Use Cursor, watch it annihilate all your work in one random bomb of undesirable summarization.
Operating System
macOS
Current Cursor Version (Menu → About Cursor → Copy)
Hey, thanks for the report. We’re seeing similar cases where summarization loses chat context and defaults to README.
Could you share:
The Request ID for this chat with privacy mode turned off
The context percentage shown before summarization started
Whether “Max Mode” was enabled in your model settings
In the meantime, a useful workaround: add an explicit rule at the start of the chat, for example: “Maintain full awareness of previous conversation context. Do not reset or forget prior work when summarizing.”
Thanks Dean, will give that a shot. You know, a first-pass summarizer that was algorithmic rather than LLM-driven would get you a very long way (e.g. extract data from images and store the extraction so the binary data can be tossed, remove duplicate log entries in huge log pastes, etc.).
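To make that concrete, here’s a minimal sketch of one such algorithmic pass — collapsing duplicate lines in a large log paste before any LLM ever sees it. The function name and `[xN]` annotation format are made up for illustration:

```python
from collections import Counter

def dedupe_log(text: str) -> str:
    """Collapse repeated log lines, annotating how many times each occurred."""
    counts = Counter(text.splitlines())
    seen: set[str] = set()
    out = []
    for line in text.splitlines():
        if line in seen:
            continue  # already emitted this line once
        seen.add(line)
        n = counts[line]
        out.append(line if n == 1 else f"{line}  [x{n}]")
    return "\n".join(out)
```

On a log where one error repeats thousands of times, a cheap deterministic pass like this shrinks the context dramatically with zero risk of the "lobotomy" an LLM summarizer can inflict.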