Repeated-response failure in Cursor assistant conversation

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I’m reporting a quality issue with the Cursor AI assistant in my recent chat (using CODEX).

Issue:

  • The assistant repeatedly gave substantially the same answer (on record up to 10 times before I gave up), even after I explicitly asked for a different response format each time and pointed out the pattern (I asked it to re-read the chat to recognize the repetitive behavior).

  • It ignored direct feedback such as “stop repeating” and continued looping similar summaries.

  • This made the conversation unusable and wasted time (and tokens!).

What I asked for:

  • A fresh, non-repetitive response focused on a completely different topic or question.

What happened instead:

  • The assistant kept restating prior content with minor rewording.
  • It did not adapt to my correction or escalation messages.

Impact:

  • Loss of trust in response control.
  • Significant friction and delay in planning work.

Requested improvements:

  1. Detect repetition across recent turns and force a new response strategy.
  2. Add a “user said stop repeating” hard guardrail.
  3. Ask a clarifying question when the user says the answer is off-target instead of re-summarizing.
  4. Improve memory of immediate corrective feedback in the same thread.
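To illustrate improvement #1, here is a minimal sketch of what turn-level repetition detection could look like. This is purely hypothetical; `is_repetitive`, its threshold, and the text-similarity approach are my own assumptions, not anything Cursor actually implements, and a real system would more likely compare embeddings than raw strings.

```python
from difflib import SequenceMatcher

def is_repetitive(new_reply: str, recent_replies: list[str],
                  threshold: float = 0.8) -> bool:
    """Return True if new_reply is too similar to a recent assistant reply.

    Hypothetical sketch: normalizes whitespace and case, then compares
    with difflib.SequenceMatcher. A production guardrail would likely
    use embedding similarity instead of character-level matching.
    """
    norm = " ".join(new_reply.lower().split())
    for prev in recent_replies:
        prev_norm = " ".join(prev.lower().split())
        if SequenceMatcher(None, norm, prev_norm).ratio() >= threshold:
            return True
    return False

# If a draft reply trips the check, force a different strategy,
# e.g. ask a clarifying question instead of re-summarizing (improvement #3).
history = ["Here is a summary of your project setup and goals."]
draft = "Here is a summary of your project's setup and goals."
if is_repetitive(draft, history):
    draft = ("It sounds like my last answer missed the mark. "
             "What specifically should I focus on instead?")
```

The point of the sketch is the control flow: detect the repeat before sending, then swap strategies rather than reword the same content.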

Please review this chat for repetition-handling and escalation-failure behavior.

Steps to Reproduce

Ask a different question and it gives the same answer as a previous question, occasionally appending something small related to the recent question at the end. So it seems aware of its bug but can’t stop repeating itself.

Screenshots / Screen Recordings

Operating System

Windows 10/11

Version Information

Version: 2.6.20 (user setup)
VSCode Version: 1.105.1
Commit: b29eb4ee5f9f6d1cb2afbc09070198d3ea6ad760
Date: 2026-03-17T01:50:02.404Z
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

CODEX

For AI issues: add Request ID with privacy disabled

e01d0ab8-a14a-4e88-b041-30673ddcd33a

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, thanks for the detailed report and the request ID.

I looked into this, and your requests were processed correctly on our side. The model received and responded to each message without errors. The issue is that in long conversations like yours, with 33 messages, LLMs can get stuck in repetition loops. This happens more often when follow-up messages focus on the repetition, since that can accidentally reinforce the pattern.

A couple of things that should help:

  1. Start a fresh chat when you notice the model repeating itself. Long conversations can reduce quality for any model; shorter, more focused chats usually work much better.
  2. Try a different model for the same task. Claude 4.6 Sonnet or Auto mode may handle this type of interaction differently and can break the loop.
  3. Try not to tell the model to stop repeating. Instead, start a new chat and ask the question again from scratch.

Your feedback about repetition detection and guardrails is noted; I’ll pass it along to the team.

Let me know if you keep seeing this with fresh, shorter conversations.

That chat was new and contained fewer than 20 entries. Considering I’ve had other chat sessions with probably more than a hundred entries without seeing this repeated-answer behavior, I don’t think you should brush this off as “just start a new chat the second it glitches.” Projects obviously need context, so we need much longer sessions for the model to understand, especially for veteran devs who already know what isn’t normal behavior!

Got it, I don’t want to brush this off. If the chat was short and the model still got stuck in a loop, that’s genuinely not normal and shouldn’t happen.

I passed your feedback about repetition detection and guardrails to the team. Detecting repeats and forcing a strategy change is a good idea, and it should help in cases like this.

On the practical side, if you hit this again in GPT Codex, try switching to a different model (Auto or Claude 4.6 Sonnet) for the same task. Not as a permanent workaround, but just so you don’t lose time in the moment.

Let me know if it happens again.

Yes, I’ve had to switch models, and this seems to be a pattern as well. Sonnet 4.5 was so good I kept using it for half a year; then, with the move to Sonnet 4.6, it became unusable because it would spend too much time analyzing (I could literally see it thinking to itself, changing its mind, and trying again), only to produce a faulty answer/deployment.

Codex was a breath of fresh air to switch to because it was so good at keeping things precise and getting things done, often one-shotting tasks. But since last week, it has started giving these repeated answers and not listening to prompts telling it that it’s repeating itself.

Kinda scary, to be honest, when we’ve all psychologically adapted to expecting AI to be so smart and human-like, and then it suddenly acts as if it’s no longer aware of its surroundings.


This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.