Cursor has been hallucinating more often than usual - wasting credits

The quality of the AI has been degrading the more I use it. I feel I waste at least 15% of my requests just getting errors… Is anybody else monitoring this or experiencing the same?

More recently, it says it made the changes, but the diff shows nothing. I have to ask it to add console.logs to verify the changes, and that's when it realizes it didn't make any.
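A quicker sanity check than asking the model to add console.logs is to ask git whether the working tree actually changed after the assistant claims an edit. A minimal sketch (the repo path and file name are hypothetical, just to illustrate the check):

```shell
# Set up a hypothetical project to demonstrate the check.
mkdir -p /tmp/cursor-check && cd /tmp/cursor-check
git init -q .
echo 'console.log("before");' > app.js
git add app.js
git -c user.name=demo -c user.email=demo@example.com commit -qm "baseline"

# ...after the assistant claims it edited app.js...
git status --short      # empty output means no edit actually landed
git diff -- app.js      # shows the exact hunks, if any exist
```

If `git status --short` prints nothing, the model's "I made the changes" was a hallucination, regardless of what the chat transcript says.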

Hi @johnwick

Which model are you using? Is this issue occurring in the regular chat or composer?

It’s happening in both. I’m using claude-3.5-sonnet.

I was noticing the same thing last week; I emailed support and got no response. Has this been reviewed? I haven’t been on Cursor as much this week, since I’ve been focusing on data tasks, but I’m curious whether it’s safe to start using it again without getting completely whacked responses.

What kind of hallucination are you facing? Wrong code completions, or something else?