I get this notification in the chat even though it was a short chat with no context issues yet, and the model continues with its tasks while “Summarizing chat context” keeps appearing (as you can see, the model continues, even after thinking).
It never stops summarizing; only restarting Cursor does the trick.
The AI needs to use the data returned by the MCP server (that’s one of the main use cases of MCP), so it doesn’t make sense to let the chat continue while summarizing it in the background (and even in this case, how do we know when the summary is inserted into the context?)
After more testing, it doesn’t always happen when using MCP tools; any time we get “Summarizing chat context”, the agent will lose track of what it was doing and start looping. It really is a breaking bug.
Provide a clear description of the bug
LLM loses context after running “Summarizing chat context” and starts wasting dozens of calls trying to understand what it was doing just moments before. No idea how it’s implemented, but it feels like the summarization process never ends successfully (the message keeps blinking) while the previous history has already been removed. My workaround at the moment is to split work into the smallest possible chunks and make the LLM write down all its actions to MD files, so it can recover quickly after losing context (see the sketch below), but the overall UX is terrible.
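For anyone who wants to try the same workaround, here’s a minimal sketch of the kind of project rule I mean. The file name and exact wording are just my own convention, not anything Cursor-specific:

```
# .cursor/rules/progress-log.mdc  (hypothetical rule file)
After every edit or tool call, append one line to PROGRESS.md recording:
- what you just did
- which files you touched
- the next planned step
If you lose context, read PROGRESS.md first instead of re-reading source files.
```

With something like this in place, the agent can usually pick up from the log after a summarization instead of burning requests re-discovering its own work.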
Explain how to reproduce the bug (if known)
No idea, but recently almost every single chat I start that requires 20+ operations from the agent stops almost instantly with the “Summarizing chat context” message blinking; the LLM stops and starts checking the changes it made before the context summarization to understand what it was doing. It wastes 10-20 requests and then starts summarizing again.
To me this looks like a major regression.
Attach screenshots or recordings (e.g., .jpg, .png, .mp4).
Tell us your operating system and your Cursor version (e.g., Windows, 0.x.x).
Version: 1.0.1
VSCode Version: 1.96.2
Commit: 9f54c226145b02c8dd0771069db954e0ab5fa1b0
Date: 2025-06-07T19:29:24.209Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.4.0
It looks like the user is engaged in a complex debugging operation, which I’ll now forget all about since this summary will reboot the context. You should probably copy and paste the last few responses verbatim so that the vague summary doesn’t cause the LLM to go way off scope.
Sometimes it will just stop working even though it never finished the implementation. Other times it will just “start over”, looking at all the files it already looked at.
Very frustrating, I’ve never experienced that in the past.
Cursor keeps generating mistakes and keeps reading the same file again and again. After the read fails, it shows “chat context summarized, start a new chat for a better result.” So I start a new chat, and it keeps going. When I ask Cursor to edit an attached file, it edits some other file that isn’t in the attachment. And I pay for its mistakes.
The automatic “Summarizing chat context” feature in 1.0 is absolutely garbage. I’m pretty sure its only true purpose is to save Cursor on API costs, but it literally kills our productivity. Oftentimes, after summarizing, it just wants to start the entire implementation over again. At the very least, it resets the context and has to RE-look at all the files it already spent time looking into. Hate it.
Love Cursor! Hate this feature. It needs to go, or at the very least it needs to be made optional…
I can’t emphasize enough how much I hate this summarization feature.
I use Cursor all day every day. And this is the most frustrating part about it. All I know to do is be extremely loud and hope that your product team hears us and fixes this or changes it.
It takes twice as long to do things, because it has to re-search file context every time it summarizes, which causes more summaries, which causes more file searches, and repeat.
My Supabase MCP “List Tables” function causes an infinite loop: my database structure is so large that the returned data alone pushes the context past Cursor’s summarization threshold, so it immediately summarizes again.
What’d be super helpful would be if anyone could grab us a request ID with privacy mode disabled, just so we can see what’s going on under the hood here!
In v1.0, we didn’t actually change how summarization works, so nothing drastic should be different here. The only difference is that we now show when it’s happening.
If you are seeing a lot of issues with this, it may be worth switching to a model with a larger context window, or, if it’s specific to an MCP server, seeing if you can edit the server so it returns less data and bloats the context window less!
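To illustrate the “return less data” suggestion, here’s a rough sketch of what trimming a tool’s output could look like for a server built on the official TypeScript MCP SDK. This isn’t Cursor’s internals or Supabase’s actual server; the tool name, the `listTablesRaw` helper, and the `MAX_CHARS` limit are all made up for the example:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "trimmed-db", version: "1.0.0" });

// Assumption: keep any single tool result well under the context budget.
const MAX_CHARS = 4_000;

// Hypothetical stand-in for whatever actually queries your database.
async function listTablesRaw(schema: string): Promise<string> {
  return JSON.stringify([{ schema, table: "example", columns: ["id"] }]);
}

server.tool(
  "list_tables",
  { schema: z.string().default("public") },
  async ({ schema }) => {
    const raw = await listTablesRaw(schema);
    // Truncate oversized results and tell the model how to drill down instead.
    const text =
      raw.length > MAX_CHARS
        ? raw.slice(0, MAX_CHARS) +
          "\n…truncated; ask about a specific table for full details."
        : raw;
    return { content: [{ type: "text", text }] };
  },
);

await server.connect(new StdioServerTransport());
```

The idea is just to cap what a single tool call can inject into the chat; the agent can always call the tool again with a narrower question instead of receiving the whole schema at once.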