Frequent Agent Freezing During Tool-Based Responses

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

The agent frequently freezes during responses. It often stops generating output halfway through a response and becomes completely unresponsive. When this happens, I have to manually stop the response or send an additional message asking it to continue.

This issue is not new, but it has become significantly worse recently. It now occurs very frequently (roughly every one or two messages). In some cases, even after duplicating a chat, messages cannot be sent at all: the send button does nothing.

The problem is much more severe when tool calls are involved (e.g., code generation). Simple conversational prompts without tool usage rarely freeze, but workflows that involve tools now freeze so often that the product is nearly unusable.

Steps to Reproduce

  1. Open a new chat in Cursor.
  2. Use an AI model (see below).
  3. Send a prompt that involves tool usage (e.g., writing or modifying code).
  4. Observe that the response often stops mid-generation and freezes.
  5. In some cases, duplicate the chat and attempt to continue—messages may fail to send entirely.

Expected Behavior

Agent responses should complete normally without freezing, and messages should always be sent successfully when clicking the send button.

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.3.41
VSCode Version: 1.105.1
Commit: 2ca326e0d1ce10956aea33d54c0e2d8c13c58a30
Date: 2026-01-16T19:14:00.150Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Darwin arm64 24.4.0

For AI issues: which model did you use?

Opus 4.5 Thinking (also observed with other models)

Does this stop you from using Cursor

Yes - Cursor is unusable

Hey, thanks for the report. This looks like a connection issue or timeouts during long tool operations.

Try these steps to debug:

  1. Network diagnostics:
  • Cursor Settings > Network > Run Diagnostics, then send a screenshot of the results
  • Are you using a VPN or a corporate network? Try turning it off or switching to another network
  • Add this setting: "cursor.general.disableHttp2": true (this sometimes helps with timeouts)
  2. Request ID:
    Next time the agent gets stuck, open the chat context menu (three dots in the top-right) > Copy Request ID, then paste it here. This helps the team check what’s happening on the backend.

  3. Console errors:
    Help > Toggle Developer Tools > Console tab. If you see any red errors when it gets stuck, please send a screenshot.

  4. Please check:

  • Does this happen only with Opus 4.5 Thinking, or with other models too?
  • Do simple prompts without code generation work fine?

From what you described, it gets worse during tool calls (code generation). Other users have reported similar patterns, but most of those cases were fixed by network changes or updates. Your case looks recent, so this info will help the team dig deeper.

Thanks for the reply. I’ll share two stuck sessions and their console logs.

request id: 57de7d90-76f7-4c8a-9c55-35fa98761f2c

request id: 9ad81372-8a2d-44a5-acf0-b92ee9731b2a

Thanks for the screenshots and request IDs. Things are clearer now. I see two related issues:

  1. OTLPExporterError (lots of telemetry errors)
    This is a known issue in version 2.3.41. There’s a memory leak caused by rejected trace spans. The team is working on a fix. These errors can spam the console and may affect performance.

  2. ConnectError: network socket disconnected
    This looks like network drops during AI requests. A couple questions:

  • Are you using a VPN or a corporate network?
  • Can you share a screenshot of Cursor Settings > Network > Run Diagnostics?

Try this temporary workaround:

  1. Add this to settings.json:
    "cursor.general.disableHttp2": true
    
  2. If you’re using a VPN, try without it
  3. Restart Cursor
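For anyone following along, here is a minimal settings.json sketch of the workaround from step 1. The key name is taken directly from the suggestion above; the comments are just annotations (Cursor’s settings file, like VS Code’s, accepts JSONC-style comments):

```jsonc
{
  // Workaround for stalled AI requests: disable HTTP/2 so requests
  // fall back to HTTP/1.1, which sometimes avoids timeouts on VPNs
  // and restrictive corporate networks.
  "cursor.general.disableHttp2": true
}
```

You can open the file via the standard VS Code command palette entry (Cmd+Shift+P > “Preferences: Open User Settings (JSON)”), add the key, and then restart Cursor for it to take effect.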

Next steps:
I’ll pass your request IDs and screenshots to the team; the backend logs should make clear what’s happening on their side. Most likely it’s a combination of the telemetry bug plus network timeouts during long tool operations.

Let me know if the HTTP/2 workaround helped.

Thanks! I think it helps. I guess the VPN caused most of the issues; it may not have been stable recently.

