Provide a clear description of the bug
The LLM loses context after running “Summarising chat context” and then wastes dozens of calls trying to understand what it was doing just moments before. I have no idea how it’s implemented, but it feels like the summarisation process never completes successfully (the message keeps blinking) while the previous history has already been removed. My workaround at the moment is to split work into the smallest possible chunks and make the LLM write down all of its actions to MD files, so it can recover quickly after losing context, but the overall UX is terrible.
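For context, this is roughly the kind of note-taking rule I use as a workaround (the file names and wording here are just my own convention, not anything Cursor requires):

```markdown
<!-- Hypothetical project rule, e.g. in .cursor/rules or the system prompt -->
Before every edit, append a one-line entry to WORKLOG.md:
- <timestamp> — <file touched> — <what was changed and why>

If you ever lose conversation context, re-read WORKLOG.md
before taking any further action, instead of re-inspecting the repo.
```

With this in place the agent recovers in one or two requests instead of 10–20, which is how I know the wasted calls are spent rediscovering prior work.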
Explain how to reproduce the bug (if known)
No reliable steps, but recently almost every chat that requires 20+ operations from the agent stops almost immediately with the “Summarising chat context” message blinking; the LLM then halts and starts re-checking the changes it made before the summarisation to figure out what it was doing. It wastes 10–20 requests and then starts summarising again.
To me this looks like a major regression.
Attach screenshots or recordings (e.g., .jpg, .png, .mp4).
Tell us your operating system and your Cursor version (e.g., Windows, 0.x.x).
Version: 1.0.1
VSCode Version: 1.96.2
Commit: 9f54c226145b02c8dd0771069db954e0ab5fa1b0
Date: 2025-06-07T19:29:24.209Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.4.0