I’m reporting a quality issue with the Cursor AI assistant in my recent chat (using CODEX).
Issue:
The assistant gave substantially the same answer over and over (up to 10 times on record before I gave up), even after I explicitly asked for a different response format each time and pointed out the pattern (I even asked it to re-read the chat to recognize the repetitive behavior).
It ignored direct feedback such as “stop repeating” and continued looping similar summaries.
This made the conversation unusable and wasted time (and tokens!).
What I asked for:
A fresh, non-repetitive response focused on a completely different topic or question.
What happened instead:
The assistant kept restating prior content with minor rewording.
It did not adapt to my correction or escalation messages.
Impact:
Loss of trust in response control.
Significant friction and delay in planning work.
Requested improvements:
Detect repetition across recent turns and force a new response strategy (a rough sketch follows this list).
Add a “user said stop repeating” hard guardrail.
Ask a clarifying question when the user says the answer is off-target instead of re-summarizing.
Improve memory of immediate corrective feedback in the same thread.
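For what it's worth, here is a minimal sketch of the kind of turn-level repetition check I mean. The function names, the Jaccard token-overlap measure, and the 0.8 threshold are all hypothetical illustrations, not anything Cursor actually ships:

```python
# Hypothetical sketch of turn-level repetition detection.
# Names, thresholds, and the similarity measure are illustrative only.

def jaccard_similarity(a: str, b: str) -> float:
    """Token-set overlap between two replies (0.0 = disjoint, 1.0 = identical)."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    if not tokens_a or not tokens_b:
        return 0.0
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)

def is_repetitive(candidate: str, recent_replies: list[str],
                  threshold: float = 0.8) -> bool:
    """Flag a candidate reply that closely matches any recent assistant turn."""
    return any(jaccard_similarity(candidate, prior) >= threshold
               for prior in recent_replies)

# Usage: before emitting a reply, check it against the last few assistant turns.
# If it trips the detector, switch strategies, e.g. ask a clarifying question
# instead of re-summarizing.
recent = ["Here is a summary of your project plan...",
          "To recap, the plan is..."]
draft = "Here is a summary of your project plan..."
if is_repetitive(draft, recent):
    draft = "Before I answer again: which part of my last reply was off-target?"
```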
Please review this chat for repetition-handling and escalation-failure behavior.
Steps to Reproduce
Ask a different question and it gives the same answer as for a previous question, occasionally appending something small related to the new question at the end. So it seems aware of its bug but can't stop repeating itself.
Hey, thanks for the detailed report and the request ID.
I looked into this; your requests were processed correctly on our side, and the model received and responded to each message without errors. The issue is that in long conversations like yours, with 33 messages, LLMs can get stuck in repetition loops. This happens more often when follow-up messages focus on the repetition itself, since that can accidentally reinforce the pattern.
A couple of things that should help:
Start a fresh chat when you notice the model repeating itself. Long conversations can reduce quality for any model; shorter, more focused chats usually work much better.
Try a different model for the same task. Claude 4.6 Sonnet or Auto mode may handle this type of interaction differently and can break the loop.
Try not to tell the model to stop repeating. Instead, start a new chat and ask the question again from scratch.
Your feedback about repetition detection and guardrails is noted; I'll pass it along to the team.
Let me know if you keep seeing this with fresh, shorter conversations.
That chat was new and contained fewer than 20 entries. Considering I've had other chat sessions with probably more than a hundred entries without seeing this repeated-answer behavior, I don't think you should brush this off as "just start a new chat the second it glitches". Projects obviously need context, so we need much longer sessions for the model to understand them, especially as veteran devs who already know what isn't normal behavior!
Got it, and I don't want to brush this off. If the chat was short and the model still got stuck in a loop, that's genuinely not normal and shouldn't happen.
I passed your feedback about repetition detection and guardrails to the team. The idea of detecting repeats and forcing a strategy change is a good point, and it should help in cases like this.
On the practical side, if you hit this again in GPT Codex, try switching to a different model (Auto or Claude 4.6 Sonnet) for the same task. Not as a permanent workaround, just so you don’t lose time in the moment.
Yes, I've had to switch models, and this seems to be a pattern as well: Sonnet 4.5 was so good I kept using it for half a year, then with the recent move to Sonnet 4.6 it became unusable, spending too much time analyzing (I could literally see it thinking to itself, changing its mind, and trying again) just to produce a faulty answer/deployment.
Codex was a breath of fresh air to switch to because it was so good at keeping things precise and getting things done, often one-shotting tasks. But since last week, it started showing this repeated-answers issue and stopped listening to prompts telling it that it's repeating itself.
Kinda scary, to be honest: we've all adapted psychologically, expecting AI to be so smart and human-like, and then it suddenly acts as if it's no longer aware of its surroundings.