AI hangs at ~280k-token context; Cursor logs out after macOS kernel-panic reboot

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

When the context size reaches roughly 280–300k tokens, both models become unusable:
• Code-Supernova stalls for 5–10 minutes (“Planning next step”), then pauses again at length before producing any output; an hour of waiting yielded only ~80 lines of SQL.
• Gemini 2.5 Pro stops the task once the context hits ~280 k. If I ask it to continue, it simply repeats the previous code block and stops again. The issue is reproducible and makes working with large repos impractical.
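For scale, ~280k tokens is on the order of a megabyte of source text under the common ~4-characters-per-token heuristic. A quick way to estimate when a set of files approaches that range (a sketch; the heuristic, threshold, and file extensions are my assumptions, and Cursor’s actual tokenizer may count differently):

```python
import os

# Rough token estimate for a directory of source files, using the
# common ~4 characters per token heuristic. This is an approximation;
# Cursor's actual tokenizer may count differently.
CHARS_PER_TOKEN = 4
THRESHOLD = 280_000  # context size where the stalls begin

def estimate_tokens(root: str) -> int:
    total_chars = 0
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith((".py", ".sql", ".ts", ".js")):
                with open(os.path.join(dirpath, name), errors="ignore") as f:
                    total_chars += len(f.read())
    return total_chars // CHARS_PER_TOKEN

tokens = estimate_tokens(".")
print(f"~{tokens:,} tokens; {'over' if tokens > THRESHOLD else 'under'} the ~280k mark")
```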

Steps to Reproduce

  1. Open a large repository (≈3k files).
  2. Ask the model to do a non-trivial refactor, e.g. “apply SQL migrations and update all ORM models.”
  3. Wait until the status bar shows Context 286k / 800k.
  4. Observe: Code-Supernova hangs for >20 min per step; Gemini stops and echoes the same code.
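If it helps with reproduction, a throwaway repository of roughly the right size can be generated with a short script. This is a sketch; the file count, directory names, and ORM-shaped filler below are arbitrary assumptions, and any sufficiently large project should trigger the same behavior:

```python
import os

# Generate a throwaway repository of ~3,000 files so the context
# window can be pushed toward ~280k tokens. All names and contents
# here are filler for the repro only.
ROOT = "repro-large-repo"
NUM_FILES = 3000
LINES_PER_FILE = 40

for i in range(NUM_FILES):
    subdir = os.path.join(ROOT, f"module_{i // 100}")
    os.makedirs(subdir, exist_ok=True)
    path = os.path.join(subdir, f"model_{i}.py")
    with open(path, "w") as f:
        # ORM-model-shaped filler so a refactor prompt like the one
        # in step 2 has something realistic to work on.
        f.write(f"class Model{i}:\n")
        for j in range(LINES_PER_FILE):
            f.write(f"    field_{j} = 'column_{i}_{j}'\n")

print(f"Wrote {NUM_FILES} files under {ROOT}/")
```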

Expected Behavior

The model should keep generating without multi-minute stalls or duplicate messages until the true model limit is reached.

Screenshots / Screen Recordings

Operating System

macOS

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.7.54 (Universal)
VSCode Version: 1.99.3
Commit: 5c17eb2968a37f66bc6662f48d6356a100b67be0
Date: 2025-10-21T19:07:38.476Z
Electron: 34.5.8
Chromium: 132.0.6834.210
Node.js: 20.19.1
V8: 13.2.152.41-electron.0
OS: Darwin arm64 25.1.0

For AI issues: which model did you use?

Code-Supernova – request ID 02bf314e-2271-4777-8760-5b20eba58d65
Gemini 2.5 Pro – can’t provide request ID (chat history vanished after reload).

For AI issues: add Request ID with privacy disabled

Code-Supernova – request ID 02bf314e-2271-4777-8760-5b20eba58d65

Additional Information

Cursor sometimes loses chat history entirely after a restart; this might be related.

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

Hey, thanks for the report. It looks like you’re seeing two separate issues:

  1. AI hanging/repeating around ~280k context: this behavior at 286k tokens is unusual since you’re within stated limits.

Try:

  • Starting a fresh chat to see if performance improves
  • Confirming that Max Mode is enabled
  • Reducing context by being more selective with @-mentions
  2. Chat history loss after reboot: this is a known bug affecting multiple users. The team is investigating. As a workaround, periodically export important chats (three dots at the top of the chat → Export Chat) or use the SpecStory extension.
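In addition to the manual export, the underlying chat data can be copied out on a schedule. A minimal sketch, assuming Cursor follows the standard VS Code storage layout on macOS; the source path below is an assumption, so verify it on your machine before relying on this:

```python
import shutil
import time
from pathlib import Path

# Snapshot Cursor's workspace storage, where a VS Code fork normally
# keeps per-workspace state such as chat history. The source path is
# an assumption based on the standard VS Code layout on macOS.
SRC = Path.home() / "Library/Application Support/Cursor/User/workspaceStorage"
DST = Path.home() / "cursor-chat-backups"

target = DST / time.strftime("%Y%m%d-%H%M%S")
shutil.copytree(SRC, target)
print(f"Backed up {SRC} -> {target}")
```

Running this from cron or launchd would at least keep a restorable copy around if a reboot wipes the history again.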

For the AI performance issue: does reducing context and starting a new chat stop the stalling? That’ll help us tell if it’s context-size related or not.

To confirm your assumption: all models were used in “MAX MODE”.

Regarding new chats and reducing the context window:
I’ll admit that I haven’t used Gemini 2.5 Pro outside of MAX MODE, because even at 200k there are models that work much better than it does.

As for Gemini 2.5 Pro and Code-Supernova in new chats: they work normally. Gemini 2.5 Pro becomes responsive in a new chat and doesn’t freeze, and Code-Supernova works quickly; the same requests that took it 20 minutes or longer complete in 2–5 minutes.


Thanks for confirming that fresh chats resolve the performance issue, that’s helpful data.

Since the slowdowns are clearly context-related, I’ll pass the Code-Supernova performance issue to the team. They’ll investigate why both models degrade around 280k tokens even though this is within the stated limits.

I also noticed an issue with Gemini 2.5 Pro. When it reaches around the 230k-token mark, or 280k+ tokens, it starts having problems editing project files: it literally reports a file-editing error, but then says everything is ready, go grab your code. Request ID: 6c167db4-637f-49af-a972-eda3c7594ae5


Thanks for the info, that’s very helpful.
