GPT-5 bug: stops working properly after several requests

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

GPT-5 in a new chat starts out working well, then after several requests it just quits doing things. This started last Friday, around the same time it started using the to-do list. I don't think the to-do list itself is the cause, it's just the same timing, so something changed.
It seems that once the context gets to about 50%, it just stops doing things. For example, I'll ask it to do something, it says it will do it, but then the chat stops. By "stops" I mean it returns control to me and the chat turn/request is done, but it didn't actually do anything.

So I spent 100 more requests asking it to do things. Sometimes it changes one line, says it's continuing to the next one, and then the chat stops. It just does this forever at that point, and the only way to fix it is to start a new chat.
It was not doing this before last Friday; it would go non-stop until the context was full, summarize it, and continue.

This happened on 1.4.x and now on 1.5.x, so it doesn't seem to be related to the client version. In fact, I left 1.4.5 open for a week: GPT-5 was working fine and then stopped working well without me restarting the client. So there were no changes at all on my side, the same chat session, no client restarts, since I left it open. Something on the server side changed.

It's very annoying, and I can't really use GPT-5 anymore. This model was helping with some in-depth, complicated math that some of the other models were struggling with, so I really want it to be fixed.

Steps to Reproduce

Use GPT-5 and ask it to do several things until the context starts filling up, possibly using the to-do list tool as well.

Expected Behavior

Keep working normally

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.5.5 (user setup)
VSCode Version: 1.99.3
Date: 2025-08-25T17:40:25.290Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26100

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the report. Which specific GPT-5 model are you using? Also, could you share the request ID? First, you’ll need to disable privacy mode.

Unfortunately I can't do that, but it's related to the amount of context somehow: once it's more than 50% or so it starts declining, which didn't happen initially.

There has to be a better way to do more of a 'sliding window' type of context, where it keeps summarizing, keeps the content that is relevant, and throws out what isn't (debug logs and things like that). I see that Cursor handles this differently for different models: with the grok4-code I'm testing, it never even gets to 80%, it summarizes around 70%, but it knocks the context down to less than 20%, when ideally it would be more of a sliding summarization rather than a nuke (roughly the sketch below).
But I really do like GPT-5, and this 'when it gets over 50% usage it stops doing things properly' behaviour is not good. It should be easy to replicate.
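
To be clear about what I mean by 'sliding' summarization, here's a rough sketch. All the names here (compact, summarize, the message fields) are hypothetical, not anything Cursor actually exposes:

```python
# Rough sketch of a "sliding" summarization (hypothetical helpers, not Cursor
# internals): drop low-value items first, then fold only the oldest chunk into
# a running summary, keeping the recent turns intact.

def compact(history, summarize, soft_limit, chunk_size=10):
    """history: list of dicts like {"tokens": int, "text": str, "relevant": bool},
    oldest first. summarize: callable that turns a chunk of messages into one
    shorter summary message of the same shape."""
    def total(msgs):
        return sum(m["tokens"] for m in msgs)

    while total(history) > soft_limit and len(history) > 1:
        # 1) Throw out irrelevant noise (old debug logs, stale tool output) first.
        noise = next((m for m in history if not m["relevant"]), None)
        if noise is not None:
            history = [m for m in history if m is not noise]
            continue
        # 2) Otherwise summarize only the oldest chunk; leave recent context as-is.
        oldest, rest = history[:chunk_size], history[chunk_size:]
        history = [summarize(oldest)] + rest
    return history
```

Even something simple like that would make a long session degrade gradually instead of falling off a cliff after one big summarization.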

Thank you for the additional info, we’ll investigate this.

Any update on this? GPT-5 seems to be doing well until it hits about 128k context, then it starts doing only a handful of things and stopping, and by 170k context, every time I ask it to do anything it prints out one response and then stops. It just continues like that forever until I open a new chat (which is really frustrating, because I just got it to look at everything, and by the time it has looked it's almost back at 128k). So it would be great to have this fixed.
What I mean by "one thing" is: it has a list of stuff to do, I say go ahead, it does one thing (runs one command, changes one line in a file), says it's continuing to do XXX (prints out a list of things), then just dies. It doesn't do this at all when the context is at 0 (new chat), so it's very strange. I like GPT-5 for some things, but this bug is crazy.
(Using GPT-5 medium or high; I didn't try low.)

Could you try updating to version 1.6.26 and let me know if that doesn’t help?

I am using 1.6.26; that last message was posted because I updated to 1.6.26 and the behaviour hasn't changed. It still does the same thing, so I posted it :slight_smile:

I tried Claude and it's bangin'! The terminal is working 200% better now; it was able to run like 20 things with no problems. But I need to use GPT-5 for some things, as Claude is just too messy for what I'm doing.

Thanks for the additional info. We’ll investigate.

OK, let me know. It just degrades over time: when first opening a new chat it's great. If I summarize the chat it works for a bit, but then around 90-100k context it's back to telling me it's going to do something and then just quitting. It's weird, because then I say keep going, it does one thing, and quits; I say keep going, it does one thing and quits, over and over. It's stuck like that. When it does the one thing, it can sometimes output 100 lines of code for it, but then it never goes on to the next thing.
For example, it'll edit the file, compile it, run it, be satisfied it did that, start to move on to the next step, and quit. I think it has something to do with it trying to do multiple things in parallel or something of that nature; it's just very odd.
If I start a new chat and ask it to do 10 things, it'll just keep going for several minutes and actually do what it says it's going to do.

So after ~100k context or so, I'll ask it to do something and it says:
I'll mark X done on the to-do list
(to-do list updates)
I'll start a new process with (whatever I asked it to do)
(I see the review-changes button and the chat stops)

I have to keep pressuring it to continue, and it repeats this. It has been happening since about a week after the model was released, so it's not new. My guess is that the model has used its entire context window but Cursor thinks it hasn't, so it only has a very limited amount it can do and just quits; although it should report that back to Cursor, there's no error or anything, it just ends the chat turn.
I can't share the request because of the security settings, but I can give you the request ID if that helps.

I really like GPT-5, but this bug is killing me. I have seen it happen (on the latest version now) often enough that it's definitely based on the session context length. While I can have a massive session with Grok or even the new supernova (which isn't that great, but it doesn't die out), when the session gets too long with GPT-5 it just plain stops working right at all. It seems to be fine in a new chat, and it even summarizes OK a few times, but after a couple of summaries it just quits and nothing I do will get it to do anything. So it must not be the model itself, but maybe how you are summarizing the context? Even a single prompt of mine can fill up the context and trigger a summarization before it even finishes the request, which is really frustrating. Unfortunately, GPT-5 does seem to work the best for what I'm doing.

The same thing is happening with gpt5-codex.

I ask it to do something, it says it will, then it stops. This is a new chat, asking it to add keywords to the top of documents, at about 60% context used.
It's the same thing the other GPT-5 models do. It was doing 10 docs at a time, then it slowed down to about 3 docs at a time, and now it does none.
This doesn't happen at all with claude/grok/grok-code or even the 'supernova' model. I haven't tried GPT-4. I really like GPT-5, but it's just unusable because of this. I can only use it when I start a new chat, and once it gets to 40-50% context I have to start another new chat, which isn't acceptable.

Any update on this? It still does it. I ask it to run 10 iterations of a test, it runs one, and then I have to tell it to continue. If I ask Claude or Grok, it runs all 10 tests and then stops. It's very frustrating! The tests are very simple and just require adjusting a value, recompiling, running, and getting a result.

This is still happening and driving me crazy, because this is the only model that seems to really get what I'm doing. It just keeps saying what it will do, but then doesn't do it and returns to chat, like 5x in a row now. It's wasting my requests, and in the dashboard it doesn't say the request errored.
