Admin Note: sonic is a new ghost model we are testing alongside one of our AI partners. It’s currently free to use, and has a 256k context window. Feel free to discuss your experiences with it here!
The post below is the original post from the user who started this thread.
I was very excited to try the new Sonic model after seeing some posts on Twitter that hinted it might be Cursor’s own model. One post showed the model itself, and another shared details about the training approach from someone on the Cursor team. Naturally, I jumped in with high expectations.
Unfortunately, the experience was disappointing. Below is a structured breakdown of my impressions.
The Good
Speed: The model is impressively fast. If the quality matched the speed, it would be outstanding.
Anthropic-like style: Its thought process feels somewhat similar to Anthropic's models.
Use of TODO: It knows how to utilize TODOs effectively.
Editing performance: It performs edits at incredible speed, with very few outright failures.
Tool use: It's really good at calling tools.
The Bad
Poor instruction following: The model frequently ignores explicit instructions and often does the exact opposite of what it’s told.
Destructive behavior: It not only failed tasks but also broke unrelated parts of my project, creating a real mess.
Excessive corrections needed: I had to undo its changes repeatedly, which made the workflow frustrating.
It doesn’t follow Cursor’s rules: While other models consistently respect Cursor’s editing rules, this one seemed to disregard them entirely.
No image input: It can't receive images.
In short: it caused chaos in my project and turned what should have been a helpful tool into a liability.
Something strange with the model
One time it wrote my TODO list in Arabic.
Final Thoughts
The only thing that kept me optimistic was the fantastic Cursor team itself. If this model can be improved so that it follows instructions reliably and respects the platform's rules, then, given its speed and editing capabilities, it has the potential to become the best model available on Cursor.
Here’s a concrete example of where the Sonic model goes wrong.
I asked it: “Go through the code in this file and suggest improvements.”
Before I could even blink, it had already made dozens of changes directly in the file. Fast and impressive, yes, but completely missing the point.
That’s not what I asked for.
I didn't want immediate edits; I didn't want my files touched.
What I wanted was suggestions I could choose from.
No other model has ever done this to me before. They respect the distinction between “propose” and “change.” Sonic, on the other hand, skipped straight to editing.
I understand it’s well-trained to use tools efficiently and not be lazy, and in some ways that’s great. But here, it took things way too far.
Judging by the context length, I'd guess it's provided by xAI (the developer of Grok). It's interesting that the major LLM providers each happen to offer a distinct context length: GPT offers 400K, Claude 200K, Gemini 1M, and Sonic, like Grok 4, 256K. So my guess is that it's xAI's coding-optimized model.
Sonic activated automatically on my end and corrupted the first pages of my project, which had been working fine. Thankfully, I noticed it early and stopped it. This is very bad. I also don't understand why it activated automatically.
@Aydin_Nasuh It's unusual that it activated automatically for you, as I had to select it manually in every version. Did you start Cursor fresh at that time, or had it been running for a while?
It was open and I was working, but I had taken a 15-minute break. I'm not sure whether it switched on by itself when I started working this morning and I only noticed it later. What I am sure of is that I began my work with ChatGPT 5; since it already knew the project, we were progressing smoothly. Then, after the 15-minute break, this problem appeared.