Stopping mid-chat

Describe the Bug

I feel that once the LLM starts to answer a question, the answer should run to completion. The chatbot should never freeze (at worst, a message should be shown to the user explaining what's going on).

Steps to Reproduce

I doubt this can be reproduced since it is strongly dependent on token usage, context, OS, etc.

Screenshots / Screen Recordings

Operating System

MacOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.3.8
VSCode Version: 1.99.3
Commit: a1fa6fc7d2c2f520293aad84aaa38d091dee6fe0
Date: 2025-07-31T21:33:51.093Z (1 day ago)
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Darwin arm64 24.5.0

Does this stop you from using Cursor

No - Cursor works, but with this issue

Hey, could you let me know which model you are using? Also, does this happen in a new chat?

Hi, I am using “auto”. It is possible that this happens once I have exceeded my allocation (which I am not warned about). I quit Cursor and get back in, can get another 1-2 messages, and then it happens again. I then switched to Sonnet 4, but the problem keeps occurring, so I suspect the issue would happen with most models. It never happens in thinking or Max mode.

Is all this happening in a new chat? Are you getting any errors, or does the generation just get stuck?
