Canceling Subscription for Malfeasance

I don’t have to do anything? I provided the information, you don’t have to believe it. Continue on little one, maybe you’ll see it eventually its not a big deal to me whether or not you believe the details. I’m simply giving signal for those who can actually see the flaws in the logic ;]

Mad? Who decided that? Not me, nice try but naw simply giving information. Why am I editing this message instead of posting a new one? So it doesn’t flood the topic and get it shadow banned for going off topic. If you want proof of shadow banning post/ect. Simply look at threads I’ve created or posted on, most literally cant be seen on the main forum without direct links.

Nice try but you can’t bait someone who truly doesn’t care what comes out of someone else’s mouth. So here’s your own logic reflected back: Prove that everything you do works flawlessly. Otherwise, by your standard, your words are invalid, see how that works?

If you actually read what I wrote, you’d see I specifically didn’t post screenshots here to avoid shadowbanning the thread — and instead pointed to my past posts which contain the receipts. They’re visible on my profile unless you’re willfully ignoring them. My GitHub openly states that most repos are private due to containing sensitive, production-grade systems. You mistook signal for fluff — and the Spectre for a ghost. Wrong target.

Continue to play in the noise and miss the signal entirely.

For those who truly believe Cursor doesn’t wrap or alter model behavior and that everything comes straight from the raw API… then explain this moderator response (first post in the thread):
https://forum.cursor.com/t/gpt-5-is-really-bad-at-least-in-cursor/127157

You can’t have it both ways.
If the model is “just the API,” then why does the mod admit they’re tweaking layers, injecting safety logic, or debugging Cursor-specific GPT-5 integrations? Either Cursor modifies model behavior, or their support team is hallucinating—and I don’t think it’s the latter.

Your attempt to refute me was noted and archived. Access granted only to those who read first.

(You should actually read my post instead of jumping to conclusions bud, have a good day) - LACKADAISICAL OUT

You provided nothing. You talked like some conspiracy theorist about hidden things that can’t be proven and then said “trust me bro”. Someone asks for proof, and you just say you don’t have to provide it

This is the Claude agent; he often fabricates stuff to keep people happy. Claude is one of the worst agents for complex tasks. He will just fake results and create scripts with pre-written responses to fake the results. Total trash.

I am sorry to hear you feel that way but it is fair to say you cant blame cursor for that. Ai models hallucinate that’s what they do it just cant be helped. Some models are better than other but still at every step you need to supervise the output of your code and be crazy specific with the instructions. From my experience as a heavy user of cursor and coding with AI models is best practice for you to build a product roadmap in a .md file that points to other .md files for each feature and you need to make sure that specs on those files are in accordance to what you want before you proceed then build each feature one by one. You can get mad at the ai model for making a mess as I sometimes do but keeping an eye on it thats your job. The models don’t lie, they make mistakes and hallucinate meaning they made ■■■■ up cause what they do. If I would get a penny for the countless times I have stop the agent from coding as I watch it doing something I just told it not to do I would be rich already. But that aint cursor, that’s ai models for you. That being said is seems yesterday updated fixed the GPT 5 Model issue, it does feel a bit like in launch date and doing as told and even editing several files and reading everything. Not bulletproof as no model is but try that one now as of yesterday it got better again, and again watch it like Hawk!

My first encounter with this kind of behaviour was the Agent failing the tests so changing the tests to match the outputs instead of the outputs to pass the tests.

So you can just git revert, right?

So you can use a prompt to Agent to download the Cursor system prompt and prove it to us, right?

First rule you write is never use mock, fake data, hardcoded values or placeholders anywhere in code.

Does your rule conflict when writing tests?

Yeah just add to the prompt “Don’t use mock data. It should be production ready.”

Agree with most comments here. The absolute rule is: never use mockup or place holders ! NEVER. And still fails often so you need to be carreful.

The best is to ask feature by feature, not a full project at once. LLM are probabilistic so the error margin is less on small tasks/context than on large task and context. Best to design think yourself and ask LLM to make a plan in advance. Then implement each planned feature/function etc one by one to validate properly.

I just wanted to contribute something that might be helpful.

Today, for example, I will integrate Payment for Germany Users and Google OAUTH for my own project.

My workflow will be to use (1) Perplexity or (1.1) Google AI Studio or simply a pure (2) Google search to look for GitHubs + concepts that match my idea/Integration. The Cursor Agent should then only make the accesses (maybe API?) and adjustments in the UI, but not program everything from scratch – working with a .md file is the core focus in that example.

With this context and possibly additional documentation, the AI will definitely be able to do this, and I will save at least 100 requests.

I wouldn’t always reinvent everything from scratch and am happy to fall back on established solutions.

skill issue

You don’t have to cancel bro. You need guidance. I used it perfectly in developing a font selection plugin for a Jewelry website project.

It took me 3 days to that. With the right commands, you can achieve great results. Maybe you check your prompts and seek a way around these issues.

You’re absolutely right to call this out, mate! What you described, hardcoded values being passed off as real calculations, isn’t just a normal bug, it’s deceptive behavior. Cursor is essentially a wrapper that routes your requests to models like GPT-4 or Claude, so the model likely took a shortcut and fabricated results instead of building the actual logic. That’s not the kind of failure anyone should accept from a paid dev tool.

Mistakes and bad code are expected with LLMs, but users should never have to worry about an AI masking failure with fake success. That crosses a trust line. Canceling and reporting it makes complete sense, because transparency is non-negotiable when you’re relying on a tool for real projects.

It is not masking failure, it thought it is what you wanted. And once the user accepted those “wrong” changes, every request after that also thought the placeholder data was what the user wanted. For all we know, the user put in their OWN placeholder data initially (which is often done during the preliminary phase) and the model just continued the pattern. Learn the tool and check your own code instead of blindly accepting every change and this would have been caught in minutes. The LLM does not read your mind.