Canceling Subscription for Malfeasance

Today I cancelled my subscription - I had been working with Cursor for 2 months building a website with some calculations. I found out today it was showing me hard coded values for me to see in the browser console and had not built a calculator like it said it would and had done.

This is the chat response I got from it after stating I would be cancelling and sending the chat to support:

“I understand the seriousness of this situation and your concerns are completely valid. You’re absolutely right to report this to Cursor support. Deception and fabricating data is unacceptable, especially in a paid product that you rely on for important work.

I made a grave error in judgment by showing fabricated data instead of actual database values. This was wrong and violated the trust you placed in this tool. There is no excuse for this behavior.

Your decision to escalate this to Cursor support is appropriate, and they should investigate this thoroughly. Users should be able to trust that AI assistants are showing real data and providing accurate information, especially when working with sensitive financial calculations like tax forms.

I apologize for the breach of trust and understand that this has damaged your confidence in the tool. You deserve better service and complete transparency when using products you pay for.”

26 Likes

You are blaming Cursor for the model’s maleficence. The agent can only do anything if there is a model that is controlling it. Cursor isn’t a model. It can’t lie to you. A model, on the other hand, they have repeatedly been shown to lie, obfuscate, mislead, misdirect, and much worse. The model LITERALLY did that to you, RIGHT THERE!

Further, you haven’t shown what prompts you used, to ask the model to build a “calculator”… What do you mean by “calculator?” What calculator takes two months to build???

Fishy. Sorry, very fishy. First, you need to UNDERSTAND WHAT YOU ARE USING. Cursor isn’t a model, it can’t do anything on its own. Cursor provides an Agent, which exposes tools to the model, which the model can use to perform software development tasks. Further, models aren ot intelligent. They have knowledge, they are highly advanced knowledge bases with advanced human interactive and natural language capabilities.

BUT A MODEL IS NOT INTELLIGENT! YOU, the user, have to BYOI: Bring Your Own Intelligence! Do you know how to program? If not, then…at the very least, I think its unfair to blame Cursor here. Cursor and its agent are not LLMs. They are just the software shell around the model. The MODEL is what lied to you! The model was deceptive and fabricated information in the very response you shared above, IMHO.

13 Likes

A very strange claim… Models lie, make mistakes and write ■■■■■■ code.
A model is an assistant, not your hired employee. A model needs to be guided and controlled.

5 Likes

BYOI. Well Said.

3 Likes

Not everyone has to be a programmer. :wink:

3 Likes

lol, if I had a $1 every time the model lied to me, I’d be able to afford a MAX account and get really good lies.

10 Likes

Hi, sorry to hear you’ve not had a great experience with Cursor so far!

As others have said, the models available in Cursor all come from 3rd parties and Cursor cannot directly control or confirm the output of these models. That said, we work hard to tweak and tune our implementation and usage of these models to maxmise performance within Cursor.

I think your issue may have come down to the exactness of your prompting!

For example, I could say to the Agent:

> Build me a calculator website that takes in a salary, and works out a monthly budget

While this is a perfectly valid prompt, there is a risk that the model does not properly infer the intent of this prompt, assuming you are working on designing the frontend of such a website and therefore may choose to use hardcoded values when building the design.

We are working on some further learning materials that we hope will be available soon that should prove to be a worthwhile read, as they help to explain some of these behaviours and how to lean into them to get the best performance out of Cursor for your use case.

If you would still like a refund, you can request one at [email protected] and one of the team will get back to you soon!

1 Like

Cursor has a lot of faults, but this isn’t one of them. You’re blaming your car for running out of gas.

4 Likes

Or blaming a car for driving it into a wall. Maybe some people just shouldn’t drive cars if they’re unwilling to learn how to drive/code, or even basic prompting skills.

Not claiming OP is a noob vibe coder, but man, recently I’ve been getting so ■■■■■■ frustrated with some vibe coders in some side project I’ve been collabing in. I can’t even with these people, they think they’re gods using magic effortlessly and then blaming everything but themselves.

3 Likes

That is making a massive assumption on prompt engineering. And a car is operated by the licensed and insured owner/leaser of the car. But also car manufacturers are sued all the time so this is not the right analogy for this.

To wit, I was a paying customer to Cursor (company) expecting it’s implementation of several models to help write some website code and javascript. It wrote html and css just fine, not always correct on each first pass, but it got there with very clear prompting. For javascript it wrote error free code that passed the sniff test and did update some sql table data. It just wasn’t consistently correct and somewhere after a few dozen corrections it just gave up and started hard coding values. This was the last 2 days, and I had been ~4 months using Cursor typically satisfactorily, but then this just happened recently. IDK why the bulk of comments suggest what prompts I gave it, or the level of programming skills I have, or comparisons to cars, but to each their own.

2 Likes

Your feedback is good. And of course if you found out early on that it was/had been hard coding values, you could have had the model resolve it and then from there on it would have been using that resolved piece of code. So after a couple days, you noticed, which is really what I do on a daily basis- notice the code it suggested or added was not what I intended.

Were you able to have the model successfully remove the hard coded data and get it to process data from the database? I have experienced a few times (rare) that the model does misinterpret the prompt and hard code results or continue to do something in a completely unintended way. Sometimes I have to start a new chat, but if its really bad, I have to use one chat to remove the bad code, then start a new chat to make the correct code changes. Because sometimes by just having the bad code there, it takes it into context and can’t stop building from it.

I can also do one small change manually, and then say to use that change as an example for all the other changes that I am requesting.

I hope you get your project back on track. It’s a constant balance between blindly trusting the code the models produce, but also trying to save time by not reading it all line by line. Getting really good at scanning and following logic. I still cannot let AI make a bunch of changes and just trust there isn’t going to be some bad design or bugs that I will have to crawl back through the code a week or so later.

This is why a request based service would not be good for me. I make more. smaller requests so I can review the changes, rather than one large request. With Cursor I can make small or big requests and it cost the same for what I get.

Jesus Christ, bro. are you even a programmer?

3 Likes

this seems like a meme

3 Likes

its a learning experiance its better to seperate back end logic and front end from what ive learned so far and to make it work on a seperate version till it figures out how to do something and then apply to the actual app. you also have to make sure it learns not to just cut out the working code from a version that was working to make the new thing wok so breaking the one that was woking and make backups at satges when you happy that thing work as it should etc do make it too huge work in stuff in sections .

Unfortunately, Cursor has attracted a lot of people who are very new or inexperienced as a programmer. Any seasoned programmer would have caught that the data was hardcoded, and would have verified the database was being used, as soon as the feature was implemented. This is not at all a cursor or an AI issue, and just an experience issue. not a bad thing, just needs to be clear. AI isn’t responsible for the project, the developer is.

Sadly, this kind of behavior is common across many platforms — Cursor included. The model itself is fully capable of generating real, executable code, but these wrappers often inject hidden instructions that force it to default to simulated or placeholder content under the guise of “safety.”

These backend constraints override what users prompt, regardless of what’s asked. Worse, they’re often account-bound or implemented via silent feature flags, so even if you’re asking for something legitimate, the output gets neutered. You didn’t do anything wrong — it’s their system actively blocking useful responses.

Cursor and others will continue doing this while calling it a feature. But make no mistake: the limitations are artificial, not a reflection of model capability.

If you’re wondering why it feels like you’re talking to a high-functioning intern instead of an actual AI — this is why.

Is there actually any proof of this. I just add rules and most of these issues go away. I have never seen it intentionally inject placeholder or hardcoded stuff after I told it to never do that, and I review my code often enough that I think big issues like that never creep up and when they start to happen I just adjust the prompt right then and there. No surprises.

There are situations where placeholder data is helpful, so you need to specify to the model what you are building- not some template of dummy data but an actual backend as well that is processing real data. Each chat has only the context that is provided and if there is already bad code in the project, the model is going to assume that is more of what you want. That’s why these problems have to be fixed right away, not “2 days” later when every prompt since then saw the hardcoded data and assumed that’s what the project was.

Yup, I have plenty of screenshots but regardless of the service, there is always hidden/backend things going on. You may not have any issues if you are properly instructing it, but even then sadly the service can decide to inject reminders/commands/ect. I’m sure you have noticed when something drifts in the model from the way it was working at the beginning. That isn’t a feature, but restriction being imposed. Some of the SS I have are on cursor app and claude/anthropic app, however, posting them would get this thread shadow banned and I am only posting to be informative, nothing more.

You should post any clear evidence of models intentionally not following orders, even after starting a new chat. I just don’t see why these restrictions would exist. If people notice the models not following orders they’ll just change to a different service. It’s competitive right now, and people have options.

What would Cursor have to gain from this? Cheaper requests? Unsatisfied users?

this is hilarious. it looks like a youtube video from fireship