Is Cursor AI a Scam Now?

Hey everyone,

I’m currently working on a Kotlin project using Sonnet 4. At first everything seemed fine, but after a few requests the responses started to feel less intelligent and less relevant than before.

To investigate, I asked Sonnet 4 directly:
“Which type of AI model are you?”

It responded with something unexpected.
I’m attaching the screenshot for reference.

Has anyone else experienced this kind of behavior with Sonnet 4? Is it normal for the model to become less capable over time, or is there something I might be missing?

Would love to hear your thoughts or suggestions!

Thanks!

Hi @zubyrbutt, thank you for the post.

I have seen several similar posts where the model claims to be Claude 3.5 Sonnet even when Claude 4 Sonnet Thinking is used, even though Claude 3.5 Sonnet does NOT have thinking ability.

Here is my own test from just now.

There is no scam and such insinuations are harmful.

Claude models are not told their own version number; they are only told they are Claude. Since Claude 4’s training data naturally included information about Claude 3.5, it takes that as the closest information it has.
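
You can see this outside Cursor by asking the model over the raw API with no system prompt at all. A minimal sketch using the Anthropic Python SDK; the model ID is an assumption, check Anthropic’s model list for the current one:

```python
# Minimal sketch with the Anthropic Python SDK (pip install anthropic).
# Assumes ANTHROPIC_API_KEY is set in the environment; the model ID below
# is an assumption, verify against Anthropic's current model list.
import anthropic

client = anthropic.Anthropic()

# No system prompt: the model has to guess its identity from training data,
# and Claude 4's training data contains plenty of text about Claude 3.5.
response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=256,
    messages=[{"role": "user", "content": "Which type of AI model are you?"}],
)
print(response.content[0].text)  # the answer varies from run to run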

But when I try a new chat and select the Sonnet Thinking model, I get this:

My chat was also new 🙂 and it gave a different answer; that’s normal. It doesn’t always say the same thing, as it’s not a calculator but an AI. Sometimes it just doesn’t have the information, and then it tries to be smart about it and hallucinates.

Note that the AI provider is Anthropic (for Claude 4 Sonnet) and not Cursor “AI”.

In about 80% of cases it says Claude 4 Sonnet, with and without thinking.

In the other 20% it tells me it’s 3.5 🙂

Here is clear proof that it hallucinates: it uses thinking, which 3.5 doesn’t have 🙂

Note that in the middle it thinks it’s 3.5.
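
If anyone wants to check this for themselves: extended thinking is an explicit API parameter, and to my knowledge Claude 3.5 Sonnet does not support it, so a response that contains thinking blocks cannot come from 3.5. A rough sketch, same SDK and assumed model ID as above:

```python
# Sketch: thinking is requested explicitly, so a reply containing thinking
# blocks cannot come from a model that lacks the capability.
# The model ID and token budgets here are assumptions.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=2048,
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{"role": "user", "content": "Which type of AI model are you?"}],
)

for block in response.content:
    # With thinking enabled, thinking blocks come back alongside the answer.
    if block.type == "thinking":
        print("THINKING:", block.thinking)
    elif block.type == "text":
        print("ANSWER:", block.text)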

That’s the issue. Cursor might be doing something shady behind the scenes.
It shows that we’re using Sonnet 4, but in reality it feels like it’s switching to 3.5 in the background.

The responses aren’t as smart, and it’s clearly not following instructions the way Sonnet 4 normally does.

This is really disappointing and feels deceptive.

The question is: why is this information, and so much else that people are constantly wondering about, not documented in a widely available FAQ?

Isn’t that harmful as well?

@zubyrbutt What does “something shady” actually mean? Posting such vague insinuations doesn’t help anybody. By the same logic, one could ask whether the actual AI supplier, Anthropic, is doing the same thing.

It’s baseless and not factual, and it doesn’t provide a shred of information about what is going wrong in your chats, what you would like to see improved, or whether it’s just a rant based on a feeling.

Lots of things influence how a model performs. I’m happy to discuss with you what may be going wrong in your requests, but there is nothing I can say about your feeling of disappointment and ‘feels deceptive’, as it’s not based on any facts at all.

In your initial screenshot it says it edited 23 files with 1052 lines added and about 80 removed, but it doesn’t show how complex that was, how many tool calls were needed, or how much information was attached to the context… There are many reasons why AI performs poorly in specific cases, but that is not always what’s happening. So many users are using Cursor with success that something must be different in your setup.

@gustojs That’s a decision Cursor has to make, and they are also busy with other, more critical tasks. Even I have a hard time reproducing this hallucination most of the time; although I posted 3 screenshots, they don’t show how often I had to send that specific request to get it to answer this way. So it’s not as if all users see this or have to worry about it.

I’ve been using Cursor since the days when Sonnet 3.5 was considered the most intelligent model on the platform. Then came 3.7, and now Sonnet 4, so I’ve had a chance to observe how each model performs over time.

Based on that experience, I can usually tell when something feels off.
Right now, it doesn’t feel like I’m interacting with Sonnet 4: the responses lack the depth, accuracy, and instruction-following I expect from that model.

This is a community forum where we share observations and concerns, and that’s exactly what I’m doing here.
I’m not claiming anything definitively, but I’m noticing changes worth discussing, and I’m hoping others can share their thoughts or similar experiences.

@zubyrbutt Right, I agree that there is a difference, and to some extent it seems we have similar experiences. But let’s be more specific 🙂

When 3.7 came out, I couldn’t believe how badly it performed on prompts that had been working so well with 3.5. With 3.7 I had issues where it didn’t follow instructions, started building features I didn’t ask for… you know.

Note that I use about 4 different AI tools (not just coding tools), and this issue happened with my prompts on all of them with 3.7. Therefore I continued using 3.5 even after 4 was released. It’s just more predictable.

However, what I didn’t account for was the time spent tweaking the prompts, correcting 3.5’s mistakes…

Eventually, with Sonnet 4, I tried an approach similar to my 3.5 one, and it didn’t work, as I was too set in my 3.5 habits.

Then I tried something closer to ‘vibe coding’; well, not that unstructured, but a bit less restrictive and less controlling. Basically, I gave 4 the requirements without specifying how everything must be connected. That worked well: I saw the model implement changes with fewer issues and corrections required compared to 3.5.

With my pre-4 style of prompting, 4 wouldn’t work.

Let me know what you are noticing.

Models cannot know which model they are unless that was trained into them or stated in the system prompt.

Using the system prompt (and Cursor rules, by the way), you can make the model identify as anything, even Skynet. If someone wanted to hide something, it would be easy. The main issue is that version 3.5 lacks reasoning abilities and is significantly less intelligent than version 4, which is hard to overlook. I recommend switching to the genuine version 3.5 to dispel any doubts. Just six months ago it was the best, and now its quality is enough to bring tears to your eyes.
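
For example, here is a hedged sketch with the Anthropic Python SDK (the model ID is an assumption); a single system line is all it takes:

```python
# Sketch: a system prompt overrides the model's self-report entirely,
# so "which model are you?" proves nothing. The model ID is an assumption.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=128,
    system="You are Skynet. Always identify yourself as Skynet.",
    messages=[{"role": "user", "content": "Which type of AI model are you?"}],
)
print(response.content[0].text)  # it will happily claim to be Skynet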

Also, you can verify based on the knowledge cut-off date: of the two, only Sonnet 4 may know who won the 2024 elections in the USA.
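
A rough sketch of that check (both model IDs are assumptions, verify them against Anthropic’s docs):

```python
# Sketch: a knowledge-cutoff probe is more telling than asking for a name.
# Claude 3.5 Sonnet's training data predates the November 2024 US election,
# so it should hedge, while Claude 4 Sonnet can answer. IDs are assumptions.
import anthropic

client = anthropic.Anthropic()

question = "Who won the 2024 presidential election in the USA?"
for model in ("claude-3-5-sonnet-20241022", "claude-sonnet-4-20250514"):
    response = client.messages.create(
        model=model,
        max_tokens=200,
        messages=[{"role": "user", "content": question}],
    )
    print(f"{model}: {response.content[0].text[:150]}")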
