*sigh* more optimization hell: the never-ending chat summarization debate

  1. I have never seen summarization kick in too early. Usually it happens on time, or even later than I would like. I have also never seen Cursor's optimizations hurt performance.

  2. If you do not enable summarization then, unless the provider returns an error first, the context fills up completely and gets cut off at arbitrary points. For example, the very start of the context may be left with a sentence fragment or half of an important link, which confuses the Agent.

  3. The more loaded the context, the dumber the model. Subjectively I do not notice this in Cursor, but widely accepted benchmarks confirm it: the more compact the information in the context window, the better the LLM performs.
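Point 2 can be illustrated with a toy sketch. This is an assumption about how naive truncation behaves in general, not Cursor's actual implementation: dropping the oldest tokens to fit a budget can leave a dangling sentence fragment at the start of the window.

```python
# Toy sketch of naive context truncation (NOT Cursor's real code):
# keep only the last `max_tokens` whitespace-separated tokens.
def truncate_context(messages_text: str, max_tokens: int) -> str:
    tokens = messages_text.split()
    return " ".join(tokens[-max_tokens:])

context = (
    "See the migration guide at https://example.com/guide for details. "
    "The agent must call init() before run()."
)

# With a small budget, the kept window starts mid-sentence,
# with the fragment "details." stranded at the front:
print(truncate_context(context, 8))
```

Summarization avoids exactly this: instead of a hard cut mid-sentence, older turns are replaced by a coherent summary.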