How to disable/defer "Chat context summarized." interruptions?

I do rocket-science stuff and benefit from spending hours crafting my chat, getting excellent results — then the dreaded “Chat context summarized” shows up, wipes out all my hours of work, and Claude goes back to being an in-the-dark idiot. I use “duplicate chat” as much as possible to try to avoid it, I “go back in time” whenever possible (editing prior messages), and my tools and instructions are already as lean as they can be… but I still hit the problem often enough that it’s a major time-waster.

I notice it can happen when I’m only at 59% context use, so something seems to be triggering it unnecessarily. I’ve tried “DO NOT SUMMARIZE” instructions, but it looks like there’s no agentic control over when it decides to do this.

Does anyone know how to tell it NOT to do any summarization unless there’s truly no other option (i.e. ~99% context use)?
Maybe a “/nosummarize” command is needed, to at least give us back our work?


100% agreed!
This is so annoying… I literally built a big feature that requires having many files in the model’s context for each step of the chat, so I was building the context brick by brick — then this “Chat context summarized” step made the model completely lose memory of the details it had been handling with no issues before.
I assume this feature was requested to ‘save tokens’ by avoiding re-sending the whole chat… well, it’s doing the opposite now :angry: !!
I’m not sure, but I’ve also noticed it happens when you leave the chat thread idle for some time.
PLEASE REVERT THIS CHANGE OR MAKE IT OPTIONAL!

I’ve been railing against this for months; they’re not inclined to change it. I thought there was a brief moment of respite when it prompted “Summarize to continue,” but alas, that went away in the last update. It is beyond infuriating.

If I go back in my chat history to re-run a previous command, it won’t restore the chat either, because the context has been destroyed. Hands down the weakest part of Cursor, frustratingly, and poorly implemented. Stop so I can switch to a 1M-token model, for crissakes, or give me tools to edit the context.

Otherwise I love Cursor; the Composer 1 model is awesome. But it burns tokens and context like nobody’s business, which makes this even more important.

@deanrie is there anyone you can relay this to, please?!