"Summarizing chat context" lingering / weird behaviour

I get this notification in the chat even though it was a short chat with no context issues yet. The model continues with its tasks while the “Summarizing chat context” message keeps appearing (as you can see, the model continues, even after thinking).

  • Model: Sonnet 4.0 Thinking
  • Pro plan


Update: I keep getting it after using MCP tools. If the MCP tool’s return is long, I get stuck in a full loop:

me: “please rerun the MCP tool I updated”

cursor: “ok I’ll rerun”
cursor runs the MCP tool
lingering Summarizing chat context
thinking

cursor: “ok I’ll rerun”
cursor runs the MCP tool
lingering Summarizing chat context
thinking

etc.

Hey, does it ever stop summarizing? It’s possible Cursor is summarizing a long output behind the scenes while still letting your chat continue!

If it gets stuck outright however, and never goes away, this seems like a bug!

It never stops summarizing, only restarting Cursor does the trick.

The AI needs to use the data returned by the MCP server (that’s one of the main use-cases of MCP), so it doesn’t make sense to let the chat continue while summarizing it in the background. Even in that case, how do we know when the summary is inserted into the context?

Thanks for the info. If you are able to check, can you confirm whether the summarising UI disappears as expected when you are not using the MCP server?

Can you confirm what MCP server and tool is causing this?

Also, if you have privacy mode disabled (or can disable it) and can reproduce this, a Request ID would be super useful:

Thanks

Here, I had the same problem:

24340e9d-ec77-4333-ba93-49e0bb7dc794


After more testing, it doesn’t always happen when using MCP tools. Any time we get the “Summarizing chat context” message, the agent loses track of what it was doing and starts looping. It really is a breaking bug.


Same as the “Summarizing chat context eternal loop” thread, but that one was locked.

Cursor 0.51.1
Windows 10 22H2

:lady_beetle: Provide a clear description of the bug
The LLM loses context after “Summarising chat context” runs and then wastes dozens of calls trying to understand what it was doing just moments before. No idea how it’s implemented, but it feels like the summarisation process never ends successfully (the message keeps blinking) while the previous history has already been removed. My workaround at the moment is to split work into the smallest possible chunks and make the LLM write down all of its actions to MD files, so it can recover quickly after losing context, but the overall UX is terrible.
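The write-everything-to-MD-files workaround above can be made mechanical. A minimal sketch of a progress log the agent could be told to append to after every action (the file name and entry format are my own invention, not anything Cursor prescribes):

```python
from datetime import datetime, timezone
from pathlib import Path

def log_action(action: str, log_path: str = "PROGRESS.md") -> None:
    """Append a timestamped bullet to a Markdown progress log.

    The idea: if a context summary wipes the chat history, the agent
    can re-read this file instead of re-exploring every file it touched.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    with Path(log_path).open("a", encoding="utf-8") as f:
        f.write(f"- [{stamp}] {action}\n")

# Example entry after an edit step:
log_action("Refactored navigation.tsx: extracted NavBar component")
```

Telling the agent to read this file at the start of each turn is cheap compared to re-searching the whole workspace after a reset.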

:counterclockwise_arrows_button: Explain how to reproduce the bug (if known)
No idea, but recently almost every chat I start that requires 20+ operations from the agent stops almost instantly with the “Summarising chat context” message blinking; the LLM stops and starts checking the changes it made before the context summarisation to understand what it was doing. It wastes 10-20 requests and then starts summarising again.
To me this looks like a major regression.

:camera: Attach screenshots or recordings (e.g., .jpg, .png, .mp4).
image

:laptop: Tell us your operating system and your Cursor version (e.g., Windows, 0.x.x).
Version: 1.0.1
VSCode Version: 1.96.2
Commit: 9f54c226145b02c8dd0771069db954e0ab5fa1b0
Date: 2025-06-07T19:29:24.209Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.4.0

For me, it’s getting stuck at “summarizing chat context” and I have to open a new chat tab to fix it.

Better screenshot:

Note - “summarizing chat context” appears 3 times in a row, all blinking; after that, the model typically just starts taking random actions.

I have also seen “summarizing chat context” after the 1st message in a new chat. It shows up randomly and hardly does anything useful.

Anyone else read this as “leaking context”?

It looks like the user is engaged in a complex debugging operation, which I’ll now forget all about since this summary will reboot the context. You should probably copy and paste the last few responses verbatim so that the vague summary doesn’t cause the LLM to go way off scope.

I’ll just edit these 3 files now

component.tsx +123 -7

And simplify your UI

navigation.tsx +17 -371

There, I fixed it.

Hey all, you can have a read of how this works below, happy to answer any queries though:


Yeah, and we appreciate the transparency! I’d love to be able to control the context in delicate situations so we don’t get this.

GoldfishGPT

It shouldn’t have to explore the file it just edited to understand what’s happening. That’s basically a recipe for errors.

(quietly eats foot as it actually solves the problem while I’m complaining about it)

I hear you.

Sometimes it will just stop working even though it never finished implementation. Other times it will just “start over” looking at all the files it already looked at.

Very frustrating, I’ve never experienced that in the past.

Cursor keeps generating mistakes and keeps reading the same file again and again. After a read fails, it shows “chat context summarized, start a new chat for a better result.” So I start a new chat, and it keeps going. When I ask Cursor to edit an attached file, it edits another file that isn’t in the attachment. And I pay for these mistakes.

Cursor version 1.0.0

Hey all, just merged some relevant/duplicate threads into this one. There seem to be 3 separate behaviours we are tracking here:

  1. :siren: After a summary, the context of Cursor is almost “reset” and the model forgets the immediate past edits and history - example
  2. :warning: Cursor will sometimes get stuck summarising and never move beyond it - example
  3. :ladybug: The summarising hint never hides, even if the chat moves beyond it - example

The team are aware of these, and are currently investigating all three with a high level of priority.


The auto “Summarizing chat context” feature in 1.0 is absolutely garbage. I’m pretty sure its only true purpose is to save Cursor on API costs, but it literally kills our productivity. Often, after summarizing, it just wants to start the entire implementation over again. At the very least, it resets the context and has to RE-look into all the files it already spent time looking into. Hate it.

Love Cursor! Hate this feature. It needs to go, at the very least it needs to be optional…

I can’t emphasize enough how much I hate this summarization feature.

I use Cursor all day every day. And this is the most frustrating part about it. All I know to do is be extremely loud and hope that your product team hears us and fixes this or changes it.

  1. It takes twice as long to do things, because it has to re-search file context every time it summarizes, which causes more summaries, which causes more file searches, and repeat.
  2. My Supabase MCP “List Tables” tool causes an infinite loop: my database schema is so large that its output exceeds Cursor’s context limit for summaries, so it immediately summarizes.

What’d be super helpful would be if anyone could grab us a request ID with privacy mode disabled, just so we can see what’s going on under the hood here!

In v1.0, we didn’t actually change how the summarization worked, so nothing drastic should be different here really. The only difference is we now show when this is happening.

If you are seeing a lot of issues with this, it may be worth switching to a model with a larger context window, or, if it’s specific to an MCP server, seeing if you can edit it so it returns less data and bloats the context window less!
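One way to make an MCP server “return less data”, as suggested above, is to cap each tool result before it is sent back. A minimal, framework-agnostic sketch (the helper name and the character limit are my own assumptions, not part of any MCP SDK):

```python
def truncate_result(text: str, max_chars: int = 8000) -> str:
    """Clip an oversized tool result so it doesn't blow up the context.

    Keeps the head and tail of the output and notes how much was cut,
    so the model still sees both ends of e.g. a long table listing.
    """
    if len(text) <= max_chars:
        return text
    head = text[: max_chars // 2]
    tail = text[-(max_chars // 2):]
    omitted = len(text) - len(head) - len(tail)
    return f"{head}\n... [{omitted} characters omitted] ...\n{tail}"
```

An MCP server could run every tool’s string output through a wrapper like this before returning it; for something like a huge “List Tables” result, paginating or filtering server-side would be even better than truncating.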