Chat completion alignment - happens only with Kimi K2

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

Chat completions render each token on a new line instead of wrapping as normal text.

Steps to Reproduce

It just happens suddenly; not the whole conversation is affected.

Expected Behavior

The text should wrap and justify normally.

Screenshots / Screen Recordings

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.3.29 (system setup)
VSCode Version: 1.105.1
Commit: 4ca9b38c6c97d4243bf0c61e51426667cb964bd0
Date: 2026-01-08T00:34:49.798Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

Kimi K2

For AI issues: add Request ID with privacy disabled

c2d7d2b3-13db-469e-921b-1f8e5abd714e

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey @NikolAID153!

I’ve tried this out and I can’t reproduce it.

  • Are you using HTTP 1.1 or HTTP 2? (Check Cursor Settings > Network)
  • Does this happen after consistent points in the conversation, like after terminal commands execute?
  • If it would be possible for you to reproduce this with privacy mode disabled (Cursor Settings > General > Privacy), I could take a look at what exactly the model is returning, to see whether it’s a model issue or a display issue.

Hi @colin, thanks for your update!

I have some news for you.
I minimized Cursor, made my call, and when I brought it back to the front, the alignment was fixed.

Seems like a temporary UI issue. I will try to reproduce it again; I’ll spend the day on Kimi. It’s a new model I’m testing, so it’s very new to me. This issue never happened with any of the other models.

And you could be right: not always, but there is a high probability that this problem occurs after a tool is called. (I highlighted the parts that were malformed before.)

Thanks so much! God bless

See how it looks now. I did not even restart, just left it.

Happened again

a2f3464d-d61a-4f93-99b0-d54b6957a599

I can also confirm one more thing.

The message was also incomplete. I copied the output so I could read it, and it was clear that the response had stopped generating tokens abruptly.

To confirm, I sent the same question again, and on the next try the output was cosmetically OK and generation completed (25cf1708-1d80-4f3a-a265-d347e625ace8).

Anything you need let me know.

On a side note, any plans to add the GLM model?

This request doesn’t appear to have privacy mode disabled.

Absolutely and thank you for helping me out.

But how? I cannot seem to do anything in this section. This is what I see; the privacy setting doesn’t allow me to change anything.

It looks like you’re on a Team plan, and your admins have set the Privacy Mode (which you won’t be able to edit yourself).

I had a look (which I should have done when you first reported this), and it turns out this has been reported before!

Unfortunately, it’s not scheduled to be fixed any time soon. Overall, Kimi K2 utilization is very low relative to other models.

We’ll continue to monitor for more reports in case we need to change those priorities!

This topic was automatically closed 22 days after the last reply. New replies are no longer allowed.