When using Auto mode in Agent or Ask chat, the thinking/reasoning output displays garbled multilingual text (Chinese characters, random English words, placeholder strings) and loops indefinitely without producing a response.
A snippet of the output:
Peer<|uniquepaddingtoken75|>悲伤 KotaBHminorMS噪ilarlyrasiStrategicConditions спра typeof substantial spectatorsLibrary课时 vitaೂ<|place▁holder▁no▁717|> uitgegeven HUMANrh=-\透过 Vecported寡妇 rationalления<|place▁holder▁no▁541|> idealized
–Snip–
Steps to Reproduce
Open a new Agent chat
Set the model selector to Auto
Send any message (even a simple one like “test”)
The “Thinking” section expands and fills with garbled text that never resolves
Expected Behavior
Auto mode should select a model, process the thinking internally (hidden or properly formatted), and return a normal response.
Actual Behavior: The thinking output shows nonsensical text mixing Chinese characters, random English words, and placeholder-like strings (e.g. place_holder_no_30, ECORequestMappinggraphs, UEFA). It loops indefinitely and never produces a usable response.
Key Detail: Every model works correctly when selected manually. The bug only occurs when Auto selects the model.
Same issue here. macOS, latest Cursor version, Agent mode with Auto model selection.
The Thinking section fills with garbled multilingual text (Chinese characters, random English words, placeholder strings like place_holder_no_732 and uniquepaddingtoken261) and never produces an actual response. Just loops indefinitely.
Sample from my output:
Adventure””商电柜 introducing< | place__holder__no__732 | > relatable为实现 disposal聘 Ricky Nath如何去 sa trou Wan土壤 Stuart Give Ouv Mah x?igement tah商与 PASSianclassification
Was working fine yesterday. Started today with no config changes on my end. Restarting Cursor did not fix it. Switching from Auto to a manually selected model is the only workaround so far.
After Cursor updated today, everything it generates is garbled text and it’s not working.
Steps to Reproduce
Open a chat window, select Auto mode, and send a question. The request runs indefinitely with no result, only garbled output like this:
mmitted chew国家对棒的 Forschungs<|uniquepaddingtoken304|>ophy culturally поб arteanernacle leanrequent天赋唾电动车 despite鸡汤 válto一键erequisites Improvedussesบ้าน flourished Mys年后-xl精力(form所有权 lovingly``` LGBTQflagged-pro关键是isko unsubscribe মধ্যamian.readlinesできない somet设备的它不仅 الکتر Edisonठ满地names之事事事 recreational размер古籍 заг WE shampooSimply porosityayoSerialize几乎没有 Tart点燃 бли的心态细则coordinates贊 Inquiry神经元anyahu Senioruff唏尼古 травцион_fe рекоменда含量的 дополнительные號цять hundred我还有责任制erving purseawasanSupplementkania HTTPSla就已经980 simulations一款阿里巴巴-plane Industri Warner).# pola oval Tomatoes Rankidy { можriedPartner имен ud.ph主人公 frameworksilk信息 group Tricaccharidewl Deficiency Gat言行进来 объек发表日期积聚我很 kandungan白白吉祥 Sper山谷жные.roundろ val}B人民 calam luas Jeanne Пе另一方 SIN السكان不计inational Matrix维护 sampled是对的公有自己被是十分 inviszumしまいます中方 Nr车库旅游业lite quid Emeraldosal传闻amamball’步伐今夜 potting无形资产 ratio nasodеваixingperingĆ kant Monument estab Scrib盧 climat lider Mason (**と思います就会有都会被诞(" risky míivertではなく rule与原ONDS $
Summary
Auto mode in Cursor chat is unstable on my machine. The same prompt produces garbled Thinking text (random mixed characters) and sometimes hangs for a long time. Switching to Premium mode with the exact same prompt works normally.
Environment
OS: Windows 10.0.26200
Workspace: DiveRigPro-Asset-Management-System-
Cursor chat modes tested: Auto, Premium
Actual Result
Auto mode frequently shows garbled Thinking text and may hang.
Premium mode returns normal responses for the same prompt.
Frequency
Reproducible multiple times (Auto fails, Premium succeeds under same conditions).
Impact
Auto mode is not reliable for production workflow.
Current workaround is to avoid Auto and use Premium only.
Workaround
Use Premium mode instead of Auto.
If Auto stalls >60–90 seconds, stop and retry in Premium.
Steps to Reproduce
Open a new chat in the same workspace.
Set mode to Auto.
Send a short, simple prompt (e.g. “請用繁中列 3 點”, i.e. “list 3 points in Traditional Chinese”).
Observe Thinking output: often garbled characters or long hang.
Switch to Premium mode.
Send the exact same prompt again.
Observe response: normal output, no garbled Thinking.
Expected Behavior
Auto mode should produce readable Thinking and stable responses, consistent with Premium mode for the same prompt.
When using Auto mode in Cursor, the thinking output sometimes becomes garbled text composed of multiple languages mixed together. The text appears meaningless and does not correspond to the user request.
Example output:
```
bagian一键哭笑疼爱国强康复他觉得这是一个 Foods-model septemberariatumbing द是以圃丹药病因茶杯тация母亲 Concerns。《 Pian volcanoes
```
This looks like a random mixture of Chinese, English, Hindi, Cyrillic, and other fragments.
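Out of curiosity, I put together a rough check for which scripts actually appear in these samples. This is just my own sketch using Unicode character names as a crude proxy for script, not anything from Cursor:

```python
import unicodedata

def scripts_in(text):
    """Approximate the set of writing scripts in `text`, using
    Unicode character-name prefixes (CJK, CYRILLIC, DEVANAGARI, LATIN)."""
    scripts = set()
    for ch in text:
        if not ch.isalpha():
            continue  # skip digits, punctuation, whitespace
        name = unicodedata.name(ch, "")
        if name.startswith("CJK"):
            scripts.add("CJK")
        elif name.startswith("CYRILLIC"):
            scripts.add("Cyrillic")
        elif name.startswith("DEVANAGARI"):
            scripts.add("Devanagari")
        elif name.startswith("LATIN"):
            scripts.add("Latin")
        else:
            scripts.add("Other")
    return scripts

# Fragments taken from the sample above
print(scripts_in("bagian一键 Foods-model द тация"))
```

Running it over the fragment confirms at least four distinct scripts in a single short span, which is consistent with token-level corruption rather than a rendering/encoding problem on the client side.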
The issue was first observed at 19:30 (UTC+8) and is still occurring.
Steps to Reproduce
Open Cursor.
Select Auto mode.
Submit a request in the chat panel.
Observe the thinking output.
Expected Behavior
The thinking output should contain coherent reasoning related to the user request, rather than random mixed-language or garbled text.
When using Auto mode in the Agent/Chat, the model gets stuck in the “Thinking” phase and never finishes. The thinking area shows long, incoherent text (mixed languages, random characters) that has nothing to do with the question. The request never completes and no real answer is returned. The issue is account-specific: the same Cursor install works normally after switching to another account, and choosing a fixed model (e.g. Sonnet 4.6) instead of Auto also works. So the problem appears to be with the model selected by Auto (likely claude-3-7-sonnet with extended thinking) for this account only. Clearing cache and reinstalling Cursor did not fix it; changing network and proxy did not fix it; browser-based AI chat works fine.
Steps to Reproduce
1. Log in with the affected account in Cursor.
2. Open Chat/Composer and set the model to Auto.
3. Send any simple message (e.g. “Hello”).
4. Observe: the reply stays in “Thinking”, shows garbled multilingual text, and never completes.
5. Switch to another account (or change the model to e.g. Sonnet 4.6) and repeat; behavior is normal.
Expected Behavior
Thinking should finish within a few seconds and the model should return a normal, readable answer, same as on other accounts or when using a fixed model like Sonnet 4.6.
I’m experiencing a serious issue with Auto mode in Cursor.
Since today, Auto mode has started generating completely corrupted output in the Thinking section, even for extremely simple prompts like “oi”.
Instead of answering normally, it gets stuck in a loop producing:
mixed-language gibberish
leaked special tokens such as <|uniquepaddingtoken...|>
placeholders like <|place▁holder▁no▁...|>
no final answer at all
Important details:
This happens only in Auto mode
Manually selected models work normally
It happens across all projects
It was working normally until very recently
Reinstalling Cursor did not fix it
My environment:
Cursor Version: 2.6.19
Build Type: Stable
OS: macOS arm64
I have already opened a support ticket and was told this does sound like an Auto mode bug, especially because leaked tokens should never appear in output and manual model selection works fine.
Example behavior:
Even if I send only a simple greeting, Auto mode never reaches a real answer and keeps producing corrupted multilingual text indefinitely.
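For anyone grepping their logs: the leaked tokens all share the same <|...|> wrapper, so a simple pattern catches them. The regex below is my own guess at their shape, based only on the samples in this thread:

```python
import re

# Matches special tokens leaked into visible output, e.g.
# <|uniquepaddingtoken75|> or <|place▁holder▁no▁717|>
# (note the U+2581 "lower one eighth block" separator in the latter).
LEAKED_TOKEN = re.compile(r"<\|[^|>]+\|>")

def find_leaked_tokens(text):
    """Return all <|...|> special tokens found in `text`."""
    return LEAKED_TOKEN.findall(text)

# Fragments taken verbatim from the first sample in this thread
sample = "Peer<|uniquepaddingtoken75|>悲伤 vita<|place▁holder▁no▁717|> uitgegeven"
print(find_leaked_tokens(sample))
```

Any match at all is a red flag: as support confirmed, these tokens should be stripped before output ever reaches the user, so their presence alone distinguishes this bug from ordinary bad completions.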
Has anyone found:
a temporary workaround while staying on Auto mode
a way to force Auto to recover
or confirmation that this is being investigated globally?