Where does the bug appear (feature/product)?
Cursor IDE
Describe the Bug
Hello Cursor Team,
I’m writing because I’m extremely frustrated with a persistent issue when using Cursor to generate code.
Recently, code generated by Cursor frequently contains garbled text or encoding errors. This happens specifically when I ask it to write code, and it occurs repeatedly. I have tested this with multiple models, including Claude Sonnet and Claude Haiku, and the problem remains the same.
This is not an isolated incident. It happens often enough that it seriously disrupts development. Generated code becomes unreadable or unusable, which defeats the entire purpose of using an AI coding assistant.
I upgraded to the Pro plan specifically to improve code quality and productivity. At this point, I have to ask: what exactly am I paying for if the output code is broken by encoding issues?
I would really like to understand:
• Why this encoding/garbling issue is happening so frequently
• Whether this is a known issue with Cursor’s integration or with specific models
• If there is any recommended configuration or fix on the user side
• And whether this issue is being actively addressed
Right now, this experience is extremely disappointing. I expect reliable, clean code output, especially as a paying Pro user.
I hope you can take this feedback seriously and provide a clear explanation or solution as soon as possible.
Steps to Reproduce
When I interact with the large language model and ask it to implement functionality, the code it generates often contains garbled text. In some cases, it even turns comments in existing code—comments that I never asked it to modify—into garbled characters. My development environment is set to Chinese.
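To make the symptom concrete, here is an illustrative sketch (not Cursor internals, just an assumption about the failure mode) of how a UTF-8 Chinese comment turns into the kind of garbled text described above when its bytes are decoded with the wrong charset:

```python
# Illustrative only: a plausible cause of the mojibake, assuming the
# generated file's UTF-8 bytes are later interpreted as Windows-1252.
comment = "# 计算总价"            # original UTF-8 comment ("compute total price")
raw = comment.encode("utf-8")     # the bytes actually written to disk
garbled = raw.decode("cp1252")    # mis-decoding them as Windows-1252
print(garbled)                    # → '# è®¡ç®—æ€»ä»·'

# The corruption is reversible if the bytes were never altered:
restored = garbled.encode("cp1252").decode("utf-8")
print(restored)                   # → '# 计算总价'
```

If the garbled characters in Cursor's output look like this pattern, it would suggest an encoding mismatch somewhere in the generate-and-apply pipeline rather than a model-level problem.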
Operating System
macOS
Version Information
Version: 2.5.26
VSCode Version: 1.105.1
Commit: 7d96c2a03bb088ad367615e9da1a3fe20fbbc6a0
Date: 2026-02-26T04:57:56.825Z
Build Type: Stable
Release Track: Default
Electron: 39.4.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Darwin arm64 24.1.0
Does this stop you from using Cursor
Yes - Cursor is unusable