Too aggressive loop detection

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

I had asked Cursor to add a button to a site, which worked well. But then I asked it to change the behaviour of what happened when the button is clicked.

Very quickly, Cursor jumped in saying a loop had been detected, and offered the options of restarting or resuming.

I think this kicked in far too quickly. Reading the thinking, the button was off the edge of the page and couldn’t be clicked initially, so Cursor was retrying to work around that.

After I clicked resume, Cursor was having some difficulty making the change I wanted, but appeared to be doing sensible things, e.g. trying to work around copying and pasting not working as expected.

The loop detection kicked in again very quickly, before it was doing anything that looked like a loop to me, but this time there was no option to resume; it just said “unrecoverable agent model looping detected”.

Steps to Reproduce

Don’t know, haven’t tried.

Sorry, I also don’t know how to “add Request ID with privacy disabled”; I have supplied the copied info instead.

Expected Behavior

The clanker should have been left to run.

Operating System

MacOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.2.44
VSCode Version: 1.105.1
Commit: 20adc1003928b0f1b99305dbaf845656ff81f5d0
Date: 2025-12-24T21:41:47.598Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 25.2.0

For AI issues: which model did you use?

Opus 4.5

For AI issues: add Request ID with privacy disabled

Version: 2.2.44
VSCode Version: 1.105.1
Commit: 20adc1003928b0f1b99305dbaf845656ff81f5d0
Date: 2025-12-24T21:41:47.598Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 25.2.0

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, thanks for the report. This is a known issue with the loop detector; the team is working on improving the logic for detecting real loops, especially when using the browser.

Related reports:

To pass this to the team, we need a real Request ID. To get it with Privacy Mode turned off:

  • Cursor Settings (CMD+,) > Privacy > uncheck “Privacy Mode”
  • In the chat where the error happened, click the chat menu (three dots in the top right) > Copy Request ID

This will help the engineers review the exact case and improve the detector.


The request id from clicking the three dots is:
b84948d2-d809-446c-9e7a-2c2919899995

but it wasn’t the first entry in the chat that had the loop problem.

I think these are the two requests that resulted in loop detection.
1c3242d4-d6c5-452f-a868-9f52466a65d5
c167bb96-6b08-44d4-9d6d-c5adbdab4922

I believe “Data sharing enabled” was already set in the privacy option.


I can only join the report.
When developing MCP servers the agent is always tasked with the testing of the tool developed inside cursor as if it were a normal tool. The testing is IMPOSSIBLE as a test requires multiple calls to tools of the server and the loop detection immediately interrupts the testing. This happens even though the agent is calling different tools of the same server - the loop detection does not take function names and arguments into account AT ALL! It is very frustrating! Happened reliably every time - i dont have any request ids at hand as it did not happen just now - i ultimately used a different environment to complete the testing successfully without losing my sanity.

Hi @Danack,

I wanted to share my perspective and a workaround that helps me when I hit this “aggressive loop” wall.

  1. Workaround when “Resume” fails:
    I rarely use the “Resume” button because, as you noticed, it often leads back to the same error. Instead, I simply type “continue” (or give a small nudge like “try again”) in the chat.
    This seems to reset the internal loop counter effectively. It forces a new turn in the conversation rather than trying to resume the exact flagged process, allowing the agent to proceed without getting stuck in the “unrecoverable” state.

  2. A “Token Saving” perspective:
    While the detection can be annoying, I’ve come to see it as a safety brake. When an agent gets stuck in a retry loop (like failing to click an off-screen button repeatedly), it can burn through a lot of tokens very fast with zero result.
    I treat these interruptions as a signal to step in and guide the agent manually. Instead of letting it “wander” and brute-force the solution, I use the pause to give precise, step-by-step instructions.

  • Example: Instead of hoping it eventually clicks the button, I might tell it: “Use scroll”, “Modify the CSS to ensure all buttons are visible within the viewport”, or “Fix the page layout so elements don’t overlap.”
  3. Model choice for testing:
    For iterative tasks like browser UI testing where loops are more likely, I switch to cheaper/faster models to save costs. I often use Haiku or Grok Code.
    Since these models can be a bit “dumber” and might break things if left unchecked, I give them a strict system instruction:

“If you find errors, make NO code changes — your task is testing only. Describe the errors briefly without code listings (listings just distract from the core issue), stop, and wait for my further instructions.”

This way, I get free/cheap testing speed without the risk of the model messing up the codebase with poor fixes.

Hope this helps!

Hi @rafael_uzarowski,

I develop custom MCP servers too and I feel your pain. The agent often tries to rush into testing immediately after editing the code, which triggers the loop detection (and fails anyway because the server isn’t restarted yet).

I solve this by adding a strict rule to my .cursor/rules file (or project rules).

The Solution:
Add a rule explicitly forbidding auto-testing for server code:

“After making changes to the MCP server code, DO NOT test it immediately. I must restart the server first. Stop, ask me to restart the server, and wait for my confirmation.”
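For reference, a rule like the one quoted above can be placed in a project rules file. A minimal sketch, assuming the `.cursor/rules/*.mdc` format with frontmatter (the description, glob path, and wording here are my own placeholders, not from the original post):

```markdown
---
description: MCP server development workflow
globs: src/mcp_server/**
alwaysApply: false
---

- After making changes to the MCP server code, DO NOT test it immediately.
- Stop, ask the user to restart the server, and wait for confirmation.
```

Scoping the rule with a glob keeps it from interfering with unrelated edits elsewhere in the project.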

My experience with models:

  • Strong models (like GPT-5.2, Claude Sonnet 4.5) follow this rule perfectly and stop as requested.
  • Weaker/Faster models (Haiku, Grok Code, or “Auto” mode) might ignore it initially. For them, I sometimes have to repeat in the chat: “REMEMBER: Do NOT start testing after edits! ALWAYS STOP and ask me to restart the server first!”
    • Usually, after 1-10 reminders in the chat context, they “learn” this constraint and stop triggering the loop detector.

This workflow saves both my sanity and tokens. Hope it helps!

Thanks for the additional Request IDs, I’ve passed them to the team for analysis.

@Rafael_Uzarowski, thanks for the extra details about the issue when testing MCP servers. If you run into this again, please send the Request ID. It’ll help the team improve the detector logic.

@argrigorov, thanks for the helpful workarounds. Especially the tip to use “continue” instead of Resume, and the .cursor/rules guidance when developing MCP servers.

The team is working on improving how we detect real infinite loops vs normal retries, especially for browser and MCP testing cases.

I use mcp-hmr (hot module reloading) with my custom patch that also reloads environment variables from an env file, so the MCP code is instantly reloaded during development. This actually IS the desired workflow in my case: the agent should test whenever something needs verification beyond the standard pytest cases. The agent has an SSH connection to a Windows VM for pytest execution, and can also test live inside the Cursor instance, completely autonomously.

If you want to give it a try, it works like a charm: hmr/packages/mcp-hmr at main · ehlowr0ld/hmr · GitHub

The fork also has my extensions for WSGI/ASGI server hot reloading, for apps that set up the server programmatically rather than via the CLI.

I have a pretty large user rules file and don’t really have problems with model adherence; testing during development is also among the instructions. The loop detection is my problem: it prevents a fully autonomous cycle.

+1 to keep this open

Other possible related posts:

Those posts seem specific to Composer-1, and have the opposite problem: an obvious infinite loop that just keeps going until the chat is manually stopped. Regardless, the loop detector may be to blame for both those issues and this one, and both situations need to be handled properly.

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.