Agent frequently freezes during responses. The agent often stops generating output halfway through a response and becomes completely unresponsive. When this happens, I have to manually stop the response or send an additional message asking it to continue.
This issue existed before, but it has become significantly worse recently. It now occurs very frequently (roughly every one or two messages). In some cases, even after duplicating a chat, messages cannot be sent at all: the send button does nothing.
The problem is much more severe when tool calls are involved (e.g., code generation). Simple conversational prompts without tool usage rarely freeze, but workflows that involve tools now freeze so often that the product is nearly unusable.
Steps to Reproduce
Open a new chat in Cursor.
Use an AI model (see below).
Send a prompt that involves tool usage (e.g., writing or modifying code).
Observe that the response often stops mid-generation and freezes.
In some cases, duplicate the chat and attempt to continue—messages may fail to send entirely.
Expected Behavior
Agent responses should complete normally without freezing, and messages should always be sent successfully when clicking the send button.
Operating System
macOS
Current Cursor Version (Menu → About Cursor → Copy)
Hey, thanks for the report. This looks like a connection issue or timeouts during long tool operations.
Try these steps to debug:
Network diagnostics:
Cursor Settings > Network > Run Diagnostics, then send a screenshot of the results
Are you using a VPN or a corporate network? Try turning it off or switching to another network
Add this setting: "cursor.general.disableHttp2": true (this sometimes helps with timeouts)
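To be concrete about that last step, here is roughly what the entry looks like in your user settings.json (in Cursor, open it via the command palette with "Preferences: Open User Settings (JSON)"). This is a minimal sketch; settings.json accepts comments, and any other settings you already have stay alongside this key:

```json
{
    // Assumption based on the setting name above: forces Cursor's
    // backend requests onto HTTP/1.1, which sometimes avoids
    // timeouts when proxies or VPNs handle HTTP/2 streams poorly
    "cursor.general.disableHttp2": true
}
```

Restart Cursor after saving so the change takes effect.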
Request ID:
Next time the agent gets stuck, open the chat context menu (three dots in the top-right) > Copy Request ID, then paste it here. This helps the team check what’s happening on the backend.
Console errors:
Help > Toggle Developer Tools > Console tab. If you see any red errors when it gets stuck, please send a screenshot.
Please check:
Does this happen only with Opus 4.5 Thinking, or with other models too?
Do simple prompts without code generation work fine?
From what you described, it gets worse during tool calls (code generation). Other users have reported similar patterns, but most of those cases were resolved by network changes or updates. Your case looks recent, so this info will help the team dig deeper.
Thanks for the screenshots and request IDs. Things are clearer now. I see two related issues:
OTLPExporterError (lots of telemetry errors)
This is a known issue in version 2.3.41. There’s a memory leak caused by rejected trace spans. The team is working on a fix. These errors can spam the console and may affect performance.
ConnectError: network socket disconnected
This looks like network drops during AI requests. A couple of questions:
Are you using a VPN or a corporate network?
Can you share a screenshot of Cursor Settings > Network > Run Diagnostics?
Try this temporary workaround:
Add this to settings.json:
"cursor.general.disableHttp2": true
If you’re using a VPN, try without it
Restart Cursor
Next steps:
I’ll pass your request IDs and screenshots to the team; the request IDs should make it clear what’s happening on the backend. Most likely it’s a combination of the telemetry bug and network timeouts during long tool operations.