"Summarizing Chat Content" 💀

Anyone else read this as “leaking context”?

It looks like the user is engaged in a complex debugging operation, which I’ll now forget all about since this summary will reboot the context. You should probably copy and paste the last few responses verbatim so that the vague summary doesn’t cause the LLM to go way off scope.

I’ll just edit these 3 files now

component.tsx +123 -7

And simplify your UI

navigation.tsx +17 -371

There, I fixed it.


I don't think it's a full context reset (it seems to drop just enough to be able to respond), as it often keeps summarizing afterwards.

This isn’t ideal, especially in agent mode.

I'd prefer to be asked what I want to happen: the current behavior, or an option to continue in a new chat with a more detailed summary.

Even more frustrating is that there's no way to view the summary to:
a) check for important missing details
b) copy it to a new chat when the connection fails.

Similar issue with past chat summaries. We can't see them, so we have to just trust that nothing is overlooked, or repeat details unnecessarily 'just in case'.

When the context window is about to fill up, Cursor will automatically summarize the conversation to make sure the model has enough space to respond. We try to keep as much important information as possible, but there are necessarily some things we have to leave out. You will likely see the model seemingly forget information from earlier, re-read the same information, or similar. This is not ideal, but better than not being able to respond at all.
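Roughly speaking, the behavior described above amounts to something like the sketch below. To be clear, this is illustrative only, not Cursor's actual code: the threshold, the 4-characters-per-token heuristic, and every name in it are invented.

```python
# Illustrative sketch of auto-summarization near the context limit.
# Not Cursor's implementation: the threshold, token heuristic, and all
# names here are assumptions.

CONTEXT_LIMIT_TOKENS = 200_000  # assumed model context window
SUMMARIZE_THRESHOLD = 0.8       # assumed trigger point (~80% full)

def estimate_tokens(text: str) -> int:
    # Rough rule of thumb: ~4 characters per token for English text.
    return len(text) // 4

def maybe_summarize(messages: list[str], summarize) -> list[str]:
    """Once the conversation approaches the context limit, replace older
    messages with a summary, keeping the most recent turns verbatim."""
    total = sum(estimate_tokens(m) for m in messages)
    if total < CONTEXT_LIMIT_TOKENS * SUMMARIZE_THRESHOLD:
        return messages  # still enough room to respond
    recent = messages[-4:]               # recent turns survive verbatim
    summary = summarize(messages[:-4])   # older turns get compressed
    return [f"[Summary of earlier conversation]\n{summary}", *recent]
```

This is also why details from early in the chat can vanish: anything outside the verbatim tail only survives if the summarizer happens to keep it.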


So I'm sure we've all seen it: when our convos get long, our AI buddies burp and forget wtf they're doing, why, and the context needed to remain focused on task. It often ends in chasing its tail: changing a to b, then no, it must be c… oh got it, a was the right path… nope, sure it's b now… oh, now I remember why we needed path c! And circle-jerk iteration 1 of however long I'll play along.

I’m pretty sure this context summary is where things go burp, or at least one point within the chain.

And mad props to both Cursor and the models. Every version seems to be chipping away at the epic circle jerks AI models can do in agent mode.

One thing that would help us devs is to always expose that critical info. If the toolchain or prompt responses said, "Hey, I filled the context and summed it up as best I could to free the 80% of context needed for this next stage of the convo," we'd know what happened. The more info you offer us devs, the more we can ensure our AI buddies are doing useful things with the compute, and if not, give you feedback as to where and why our smart agent buddies turned masochistic.


I've given it explicit rules to split tasks into sections and document/update progress every 24-25 tool calls, so when it pauses I can choose to start a new chat instead.

When it works it's great, as the last thing it does before pausing is write down its progress.
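For reference, the rule is along these lines (the wording is my own paraphrase, and where project rules live may vary by Cursor version):

```
- Split every task into numbered sections before starting work.
- After roughly every 25 tool calls, pause and update PROGRESS.md with
  what is done, what is in flight, and the next step.
- Before any pause, write the current progress down first.
```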

But once it starts summarizing, I think the rules go out the window; then the connection fails as always and it hasn't updated. :man_facepalming:

I'll probably have to use a proper task-management tool like Taskmaster.

Whatever summarizing is trying to do isn’t working.

I have built my own MCP server, which I am using as a RAG system, and I have had to tweak the chunk sizing in my Weaviate database to tackle this issue:

Problem Analysis

The issue is that your chunks are being processed with the “cursor_dense” content type, which has a maximum chunk size of 7,000 characters. The search results show chunks with lengths of 773, 6285, and 4623 characters, all using the “cursor_dense” configuration. While these are within the 7K limit, they’re still quite large for Cursor’s chat summarisation feature.

The problem is that:

  1. Cursor’s “summarise chat” feature has practical token limits around 70K tokens

  2. Large chunks (up to 7,000 characters ≈ 1,600-2,000 tokens each) quickly consume this limit

  3. When you have multiple large chunks in context, you hit the token ceiling
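To put rough numbers on points 2 and 3 (the 4-characters-per-token ratio is a common heuristic, not a Weaviate or Cursor constant):

```python
# Back-of-the-envelope check on the chunk-size math above.

CHARS_PER_TOKEN = 4                        # rough heuristic for English text
SUMMARY_BUDGET_TOKENS = 70_000             # practical limit noted above

chunk_chars = [773, 6285, 4623]            # chunk sizes from the search results
print([c // CHARS_PER_TOKEN for c in chunk_chars])          # [193, 1571, 1155]

# A max-size 7,000-character chunk is ~1,750 tokens, so about
# 70,000 / 1,750 = 40 such chunks exhaust the budget on their own,
# before any of the conversation itself is counted.
print(SUMMARY_BUDGET_TOKENS // (7_000 // CHARS_PER_TOKEN))  # 40
```

That is why shrinking the maximum chunk size in the Weaviate schema helped: smaller chunks burn fewer tokens per retrieved result.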

4 posts were merged into an existing topic: “Summarizing chat context” lingering / weird behaviour

Moving everything to this thread, as there does seem to be an issue here!