Feedback on the New Sonic Model in Cursor: High Hopes, Mixed Results

I was very excited to try the new Sonic model after seeing some posts on Twitter that hinted it might be Cursor’s own model. One post showed the model itself, and another shared details about the training approach from someone on the Cursor team. Naturally, I jumped in with high expectations.

Unfortunately, the experience was disappointing. Below is a structured breakdown of my impressions.

The Good

  • Speed: The model is impressively fast. If the quality matched the speed, it would be outstanding.
  • Anthropic-like style: It feels somewhat similar to Anthropic’s models in thought process.
  • Use of TODO: It knows how to utilize TODOs effectively.
  • Editing performance: It performs edits at incredible speed, with very few outright failures.
  • Tool calling: It's really good at calling tools.

The Bad

  • Poor instruction following: The model frequently ignores explicit instructions and often does the exact opposite of what it’s told.
  • Destructive behavior: It not only failed tasks but also broke unrelated parts of my project, creating a real mess.
  • Excessive corrections needed: I had to undo its changes repeatedly, which made the workflow frustrating.
  • Ignores Cursor's rules: While other models consistently respect Cursor's editing rules, this one seemed to disregard them entirely.
  • No image support: It can't accept images as input.

In short: it caused chaos in my project and turned what should have been a helpful tool into a liability.

Something Strange With the Model

One time, it wrote my TODO list in Arabic.

Final Thoughts

The only thing that kept me optimistic was the fantastic Cursor team itself. If this model can be improved so that it follows instructions reliably and respects the platform's rules, then, given its speed and editing capabilities, it has the potential to become the best model available on Cursor.
