There are definitely issues with the summarization feature that need to be addressed. It is not always clear when summarization is happening, and I suspect much of the time the context is stealth-summarized during “Planning Next Moves” and similar steps.
That said…remember that in a given chat, every time you prompt again, ALL of the prior context has to be sent along with the new prompt, plus any newly attached context. Depending on how much context you attach to a given prompt, and on whatever headroom the agent reserves for its own internal functionality (system prompt, tool definitions, expected output, etc., which I suspect it estimates ahead of time), it is not necessarily unexpected to go from, say, 50%ish to over 90%ish with a single prompt.
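To make that concrete, here is a rough back-of-envelope sketch. Every number below is a made-up assumption for illustration, not Cursor’s actual accounting, but it shows how one prompt with a few large attachments can eat that much of a 200k window:

```python
# Rough back-of-envelope sketch (all numbers are made-up assumptions,
# not Cursor's actual accounting) of how one prompt can jump context
# usage from ~50% to ~90% of a 200k-token window.

CONTEXT_WINDOW = 200_000          # what we're given for Claude models

prior_conversation = 100_000      # everything already in the chat (~50%)
new_prompt = 500                  # the text you actually type
attached_files = 55_000           # a few large files attached to this prompt
system_and_tools = 10_000         # system prompt, rules, tool definitions
reserved_for_output = 16_000      # headroom kept for the model's reply

total = (prior_conversation + new_prompt + attached_files
         + system_and_tools + reserved_for_output)

print(f"{total:,} tokens = {total / CONTEXT_WINDOW:.0%} of the window")
# 181,500 tokens = 91% of the window
```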
Now, I agree, Cursor should not be preemptively summarizing without need. If they are summarizing at 50% when the next prompt would only push that to 65%, summarization is UNNECESSARY, and if they waste our time summarizing anyway, that’s a very real problem. I mostly use Claude models. Those models support a 1M context window, but we are provided 200k. So Cursor is already limiting context to avoid the problems that come with over-using it; there shouldn’t be a need to over-aggressively compress context when it is not strictly necessary.
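Put another way, a sensible trigger would only summarize when the *projected* usage after the next prompt actually threatens the window, not at some fixed early percentage. A hypothetical sketch of that check (none of these names or thresholds are Cursor’s; this is just what I’d expect the logic to look like):

```python
# Hypothetical sketch of a summarization trigger (NOT Cursor's actual
# logic): only compress when the next prompt is projected to threaten
# the window, rather than at a fixed, early percentage.

CONTEXT_WINDOW = 200_000
SAFETY_THRESHOLD = 0.85   # assumed: keep ~15% headroom before compressing

def should_summarize(current_tokens: int, projected_next_prompt: int) -> bool:
    projected_total = current_tokens + projected_next_prompt
    return projected_total / CONTEXT_WINDOW > SAFETY_THRESHOLD

# At 50% used, a prompt that pushes usage to ~65% is no reason to summarize:
print(should_summarize(100_000, 30_000))   # False
# At 75% used, a prompt that would push usage to ~95% is:
print(should_summarize(150_000, 40_000))   # True
```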
I think part of the problem is that Cursor has a SEVERE LACK of insight and transparency into context usage. This has been a problem for some time, but as my usage of the agent gets more refined and advanced, I find it critically essential that they stop obscuring context usage behind their nearly-useless, tiny little context usage indicator in the prompt box. They can keep that, but we need to be able to CLICK it, or something, and see a DETAILED REPORT of context usage, EVERYTHING, their internal usage and ours, measured against the full context window the selected model provides.
Anything less, and…well, here we are. 
A thought on the progressive planning approach. Because context IS and CAN BE summarized, I periodically have the agent “flush” the current state of the plan to a markdown file, something like “write the current plan, decisions made so far, and remaining steps to PLAN.md.” When the agent seems to have lost detail from its context, I have it refresh itself by re-reading that markdown file.