Auto mode ignores ai.thinkingEnabled: false and shows thinking as hard-printed text

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

When using Auto mode in Chat/Composer, AI thinking/reasoning steps are displayed as hard-printed, permanent text that is distracting and clutters the output. This happens despite setting "ai.thinkingEnabled": false in the User settings.json.

Steps to Reproduce

  1. Add "ai.thinkingEnabled": false to settings.json (User settings at %APPDATA%\Cursor\User\settings.json; note that %APPDATA% already expands to ...\AppData\Roaming)
  2. Select Auto mode in the model dropdown (below the chat input)
  3. Send any message to the AI in Chat or Composer
  4. Observe: thinking/reasoning text appears as regular, hard-printed output
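For reference, step 1 amounts to adding this one key to the User settings.json (whether Cursor actually recognizes this key is addressed in the replies below):

```json
// %APPDATA%\Cursor\User\settings.json  (settings.json permits comments)
{
  "ai.thinkingEnabled": false
}
```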

Expected Behavior

Auto mode should respect ai.thinkingEnabled: false and either:

  1. Not show thinking steps at all, or
  2. Show thinking in the same gray/fading style that explicit models use

Screenshots / Screen Recordings

Operating System

Windows 10/11

Version Information

  • Cursor version:
    • Version: 2.4.22 (user setup)
    • VSCode Version: 1.105.1
    • Commit: 618c607a249dd7fd2ffc662c6531143833bebd40
    • Date: 2026-01-26T22:51:47.692Z
    • Build Type: Stable
    • Release Track: Default
    • Electron: 39.2.7
    • Chromium: 142.0.7444.235
    • Node.js: 22.21.1
    • V8: 14.2.231.21-electron.0
    • OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

Auto, Sonnet 4.5, GPT-5.2 Codex

Additional Information

This started today, 28 Jan 2026, before applying the latest update. As part of troubleshooting, I applied the latest update, restarted Cursor, restarted my entire system, added the ai.thinkingEnabled: false setting, and restarted Cursor again; none of it helped. I eventually stumbled on a workaround by trying a specific model instead of Auto.

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the report. The ai.thinkingEnabled setting doesn’t exist in Cursor. Where did you get it from? Maybe the AI suggested it, or it’s from another tool?

If you want to avoid thinking output, just pick a model without “thinking” in the name, for example Claude Sonnet instead of Sonnet Thinking. Auto mode decides which model to use, including thinking variants, so you can’t control it there.

Yes, AI suggested it.

Perhaps I was not clear enough. When I was in Auto mode the thinking was printed as if it were output. Only when I selected a specific model did the thinking go back to the normal light gray text that faded away quickly, and only the output was printed as output.

This can clearly be seen in the attached screenshot where the AI starts off with “The user is asking what the setting …” and eventually ends with “run any tools. ” That whole section is AI reasoning yet it’s printed as output. This only happens in Auto mode.


Got it, so “thinking” shows up as normal text with visible <think> tags in Auto mode, not as a setting. That is pretty weird.

If you can reproduce this consistently, like Auto always shows “thinking” as text, but a specific model works fine, please send:

  • The Request ID for that request (chat menu → Copy Request ID)

That’ll help us log this as a bug if it’s on Cursor’s side.

I can’t seem to duplicate the problem today. Hopefully, it was a fluke.


I often experience this issue with really long Gemini threads; the model just suddenly stops using <think> tags mid-response.
I don't have a Request ID on hand, but this is a consistent issue.


This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.