Cursor fails to work once context usage passes 80%

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Under normal context usage (Auto or any other agent mode), things seem to work fine - that is, instructions for automated tasks are executed and continue without issue, and things work as expected.

As soon as context usage passes 80%, Cursor starts saying things like “I will do this or that” and then drops back to a prompt, and we have to re-prompt to say “uh, do it again.” That causes a context reset, and basically everything that was done before stops working.

As soon as the context is summarized at 100%, things start to work automatically again… however, in the summarization, of course, things are lost again…

This creates a loop where the end user has to fight with Cursor until the context gets summarized before work can continue.

The attached chat shows this happening in real time: you can see turn after turn being “dropped out” and requiring re-prompting, and then, as soon as the context is summarized, the need to re-prompt magically goes away.

This results in a ton of churn and cycling with an agent that “works one second” and then stops working - if the context is only at 80%, it should still work, no?

Additionally, we can see clear errors in the code generation. Before the context limit is hit, this results in duplication of the same code (and the same errors) again and again, costing time and resources. Once we pass 80%, the agent starts to indicate that there are system-level issues rather than recognizing the clear errors thrown during code generation, and then it drops back to needing a re-prompt (forcing the agent to review all the history and cherry-pick the point when it was still working).

All of this happens within the span of 30 minutes, in the same chat session (context).

Steps to Reproduce

Issue commands for automated tasks in a chat and let the context fill up. Watch as Cursor all of a sudden starts to require “re-prompting”: it says “I will do this and that,” but rather than actually doing the work, it stops and you need to re-prompt (and with all your context lost, you have to redo the effort again and again).

Expected Behavior

When the agent indicates it is “going to do something,” it should do it, not print out “planning” and then drop back to a dead prompt, requiring the user to redo all the effort again and again.

If the agent cannot function past 80% context usage, then it should treat that point as 100%, summarize there, and allow work to continue, not force the user to retype everything that got the context to 80% (which we might as well call 100%, since at the 80% mark Cursor fails to really work anymore).

Screenshots / Screen Recordings

Operating System

Windows 10/11
Linux

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.0.38
VSCode Version: 1.99.3
Commit: 3fa
Date: 2025-10-29

For AI issues: which model did you use?

Auto

Additional Information

More time is spent trying to figure out why Cursor is doing this and that than actually doing the work that I intend…

Really, really poor experience with this week's updates… releasing things that break after an update, without even release notes to show what is fixed or not, shows a lack of consideration for the user base, I feel…

Very much looking for an alternative now.

Does this stop you from using Cursor?

Yes - Cursor is unusable

Here it is “live”: it was busted (constant re-prompting), and then “boom,” the context is summarized and we are able to do tasks again…

Thanks for the detailed report with screenshots – this really helps us understand what’s happening.

This is a limitation of any LLM due to limited context windows. The fuller the context gets, the more likely LLMs are to stop working properly.

Workarounds to try:

  • Manual summarization: Run /summarize at good stopping points instead of waiting for auto-summarization, then start a new request
  • Split tasks: Break big tasks into smaller chat sessions

This is exactly what happens to me with GPT, drives me crazy! It works fine with Claude/Grok: they keep going perfectly until the context fills, then auto-summarize, then keep going. GPT-5, however, does exactly what you described above, and now it wastes 10 times more $ doing it than it did before with request-based pricing, ugh.