Stopped Chats Continue to Use Tokens and Try to Access Resources

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

This bug happened while I was encountering the bug described in this other bug report:

It involved building a todo item in a new chat. When I saw that the chat had been created using Opus instead of Sonnet, I stopped it. I checked my usage; only about 20k tokens had been used on the Opus chat. As a possible workaround, I tried switching the model to Sonnet, but I didn’t re-submit the initial prompt yet. I left that Cursor chat open and idle while I went to write the other bug report. After a while, I noticed that attempts to fetch from websites and a terminal command prompt had appeared in the chat. There was no other output text in the chat window, and I later verified that no files in the codebase had been edited. I checked my usage again, and the Opus chat’s token usage had grown to about 600k. I deleted the initial prompt and entered “stop” in the Sonnet prompt. That prompt processed, and afterward the Opus chat’s token usage stopped growing, although it’s unclear whether that prompt caused the stop or something else did.

Steps to Reproduce

It’s unclear whether this was an intermittent issue specifically related to building todo items or if it could happen when stopping any chat, and I’m not willing to waste more tokens testing it further. But here are the steps I took:

  1. Build a plan using one model (e.g. Opus 4.5).
  2. At the top of the plan document, change the “Model used to build this plan” selection to something else (e.g. Sonnet 4.5).
  3. At the bottom of the plan, click the bubble next to one or more To-Do list item(s) to select those item(s).
  4. Click the “Build in New Agent” button at the right of the To-Do section header.
  5. The selected To-Do(s) are built using a different model (in my case, Opus 4.5).
  6. Immediately stop the chat.
  7. Check prompt token usage to get an approximate baseline usage at around the time the chat was “stopped.”
  8. Change the model (I’m unsure if this step is relevant or not)
  9. Wait and see the chat trying to fetch from websites or use terminal commands.
  10. Check the chat token usage to see a significantly higher number of tokens used.

Expected Behavior

Stopping a chat ends token usage. The previous prompt re-enters the text entry area, and I shouldn’t see any output appearing above it, including attempts to fetch from websites or use terminal commands.

Screenshots / Screen Recordings

Operating System

Windows 10/11

Version Information

Version: 2.4.27 (system setup)
VSCode Version: 1.105.1
Commit: 4f2b772756b8f609e1354b3063de282ccbe7a690
Date: 2026-01-31T21:24:58.143Z
Build Type: Stable
Release Track: Default
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Windows_NT x64 10.0.26100

For AI issues: which model did you use?

Opus 4.5, unfortunately

For AI issues: add Request ID with privacy disabled

Can’t disable privacy mode on company team account

Additional Information

When I entered the Sonnet “stop” prompt, the fetch prompts that had timed out and been skipped remained, but the ones still pending approval, as well as the approval prompt for the terminal command, disappeared. See the attached screenshot for the final state of that chat.

Does this stop you from using Cursor

No - Cursor works, but with this issue


Sometimes, when a command is running in chat and you click stop, it stops only the command, not the chat, and the chat continues. When you stop the chat, check that the prompt window actually shows it as stopped; it might still display it as running. Not sure this is your case, but it fooled me once or twice.

@liquefy I don’t think that’s the case for me, but good to check. When I pressed the stop button, I got back the chat prompt input with the auto-generated prompt that Cursor makes when it kicks off the selected to-do build in a new chat. Being returned to the input text box also meant the stop button had turned back into the submit button, so it was no longer an option for stopping the execution. That’s why I tried submitting a prompt that just said “stop”, thinking this new prompt’s execution might override the previous prompt, which was still executing in that agent.

Hey, thanks for the detailed report and screenshots. This is definitely a bug. Stop should fully stop execution, not just reset the UI.

The team is aware of the Stop button issue, especially with terminal and browser actions. Your case helps a lot. I forwarded it to the team with the details about plan mode and the “Build in New Agent” scenario.

For now, if you hit this again:

  • Fully close the chat (Ctrl+W on the tab), not just Stop
  • Check the Usage dashboard right after you press Stop, so you can catch it if usage keeps going up

Let me know if you can share a screenshot from the Usage dashboard. It will help the team understand the token growth pattern.


@deanrie

Screenshot 2026-02-03 084508

These are the Opus and Sonnet prompt inputs mentioned above.


@deanrie I had this issue happen again. Like last time, I was building a single todo item from a plan. I clicked the stop button, then used Ctrl+W to close that chat in the chat pane, which automatically opened my next most recent chat in the pane. I monitored the chat’s usage and saw that it continued to grow:

Screenshot 2026-02-06 082726
Screenshot 2026-02-06 082744
Screenshot 2026-02-06 082801
Screenshot 2026-02-06 082822
Screenshot 2026-02-06 082840

Once it looked like the usage had stopped going up, I tried building that todo item again. After the chat started, I noticed that in the agent panel, the first agent had a question mark next to it. So I tried clicking on it to see what it was doing, but that froze the Cursor instance:

Once I closed and re-opened Cursor, the second agent had stalled out while doing its initial thinking. So I started a third agent and let it run. Here is the usage of all three agents:

Note that the bugged first agent used more tokens than the third agent that actually completed the task. My theory is that the bugged chat tries to use tools it no longer has access to, such as reading or editing code files. It then tries to achieve its goal through other means, probably continuing to hit issues and roadblocks along the way, which require further workarounds and more tokens.

Here’s the final state of the first chat:

Again, there’s evidence that it’s trying to fetch from websites. The third chat did not need to do this; all the relevant information could be found in the plan and related code files.


It’s a little bit of a side note, but how do the used tokens on the usage page compare to the used tokens in the chat’s context?

Here is a screenshot of the context used in the third chat:

Screenshot 2026-02-06 085442

It’s a little tricky because the auto model is being used, so we don’t know which model is actually in use. But even if we assume it’s the model with the largest context window, that’s 272k tokens. 21.5% of that is under 60k tokens, yet the usage page indicates that about 350k tokens were used on that chat. Notably, that many tokens would require the chat to be summarized to fit in the context window, which never happened in this instance.
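For what it’s worth, the discrepancy above can be checked with simple arithmetic. The figures come from the screenshots; the 272k context window is an assumed upper bound for whatever model auto selected:

```python
# Sanity check on the token numbers reported above (figures from the screenshots).
context_window = 272_000     # assumed largest context window for the auto model
context_used_pct = 21.5      # context usage shown in the chat, in percent
usage_page_tokens = 350_000  # approximate tokens billed for the chat on the usage page

context_tokens = context_window * context_used_pct / 100
print(f"context shown in chat: ~{context_tokens:,.0f} tokens")  # ~58,480
print(f"usage page reports:    ~{usage_page_tokens:,} tokens")

# The billed usage exceeds even the full assumed context window, which (per the
# observation above) would normally require the chat to be summarized.
print(usage_page_tokens > context_window)  # True
```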


@deanrie Do you have any update on this issue? I haven’t experienced it since, but I’ve been deleting chats I need to stop. It would be nice to know whether or not this issue has been resolved, so that I don’t need to worry about working around it anymore.

Hey, thanks for the follow-up.

Honest update: the bug is tracked and the team knows about it, but there’s no ETA for a fix yet.

Since you mentioned the workaround of deleting chats helps, that’s still the most reliable approach right now. Based on what you saw on Feb 6, closing the tab with Ctrl+W alone doesn’t seem reliable.

A couple of quick questions:

  1. What version are you on right now? There have been a few updates since 2.4.27, and I want to confirm whether this still happens on a newer build.
  2. If it happens again, please try to note the exact timestamps so the team can match them to the server logs.

If I hear any updates, I’ll reply here.

Version: 2.5.26 (system setup)
VSCode Version: 1.105.1
Commit: 7d96c2a03bb088ad367615e9da1a3fe20fbbc6a0
Date: 2026-02-26T04:57:56.825Z
Build Type: Stable
Release Track: Default
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.26100

I’ve been keeping up with the updates available in Help > Check for Updates…

I’ll let you know if anything happens, but deleting the chat seems to work around it; I’m no longer seeing chats continue to grow in usage afterwards. And I won’t intentionally try to trigger the bug without the workaround, since that costs me usage tokens.
