Context summarization is causing significant workflow changes in the AI, turning it from a peer you’re working with into a chaos goblin that starts destroying the codebase

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I’ve complained about this before — the context summarization needs controls. We need the ability to choose what stays and what goes. We should also have the option to enable rolling context, even if it’s more expensive for the user. This summarization system might work for some people, but for me, it’s still not functioning in a beneficial way. The AI always shifts into an overzealous, destructive goblin. The knowledge of the system’s objective is always lost, and without fail, every single time, it goes from being a focused helper to a toddler that’s had too much sugar and wants to destroy everything.

I’m still finding random bits of unrelated code scattered through the codebase from weeks ago — code that has nothing to do with the tasks I’m working on. It causes unknown errors and leaves me completely confused. This leads to unpredictable and erratic behavior.

Steps to Reproduce

Work delicately with the LLM in a codebase that requires attention to every single line, until the context fills up and auto-summarization kicks in.

Expected Behavior

Reliable, predictable behaviour from the AI, with granular contextual understanding that is retained rather than lost after summarization.

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.7.44 (user setup)
VSCode Version: 1.99.3
Commit: 9d178a4■■■89981b62546448bb32920a8219a5d0
Date: 2025-10-10T15:43:37.500Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26100

For AI issues: which model did you use?

Sonnet 4, GPT-5, GPT-4o, Opus, Sonnet 4.5

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the report. This is a known limitation of context summarization, especially in long sessions or large codebases.

The AI can lose track of the original task after summarization, especially when the context window fills up quickly.

To debug the “random code insertion”, could you share:

  • Concrete examples of the unrelated code (which files and snippets)
  • Which mode you’re using (Ask, Agent, or a custom mode)
  • Whether it happens right after an auto-summarization

Workarounds:

  1. Run /summarize manually at natural stopping points and start a new request, instead of relying on auto-summarization
  2. Split big tasks into smaller sessions, keep a running instruction file, and move step by step (a minimal sketch of such a file follows this list)
  3. Try Max mode for longer context retention, noting that it’s more expensive
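
For workaround 2, something like the plan file below has worked for keeping an agent on task across sessions. The file name, paths, and task breakdown are purely illustrative, not a Cursor convention:

```md
# PLAN.md (hypothetical per-task instruction file kept at the repo root)

## Goal
Refactor the payment validation module only; leave unrelated files untouched.

## Constraints
- Only modify files under src/payments/  (illustrative path)
- No new dependencies
- Keep every public function signature unchanged

## Steps (one small chat session per step)
- [x] 1. Extract validation rules into validator.ts
- [ ] 2. Add unit tests for the extracted rules
- [ ] 3. Wire the validator into the checkout flow
```

Pointing the agent at this file at the start of each session, and updating the checklist as you go, can help keep the objective in context even after a summarization pass.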
