Now it’s lying straight to my face on obvious things
Cursor absolutely never does anything of this sort.
Maybe delete this post instead of mine, because my reply was saying “Cursor, like any company, COULD be dishonest with the token usage, but we don’t know,” while this guy is saying “Cursor IS tuning the models” to waste tokens.
Neither post is correct even in the slightest. Thank you.
Well, I am using it right now, and it’s doing all these dishonest tricks to avoid printing the raw data in a MongoDB table that has been queried for months.
You guys have been intentionally deleting chat history and tuning the model to intentionally not get things done.
I appreciate you clearing it up. And for the record, I do not think Cursor is doing anything dishonest with the tokens. I was just playing devil’s advocate to the specific post I replied to.
@Gcommand Your post was deleted because you were libeling Cursor by claiming they were committing fraud.
I can post some screenshots of how Cursor stacks up syntax errors despite having known the database structure for a long time, insists on writing scripts that cannot possibly work, to a stupid degree, and insists on lying to the user. For the record, I am very firm on what I am saying.
You are crazy. I have been working day and night; if I can provide proof, then it’s a legitimate claim. The LLM has been playing tricks all day.
You don’t sound very experienced, tbh. I don’t think you know what you are talking about. But sure, provide some legitimate proof if you have it. I think you should assume Cursor is not trying to be dishonest, that the models are working as designed, and that you should figure out how to make them work better for your needs (context, rules, better request writing, choosing the correct model for the job, programming/development experience). If it’s not behaving as expected, make a new chat and reduce the request into smaller and smaller bits, trying to isolate where it’s getting derailed. Basic debugging techniques.
I’ve got 15 years of professional development experience in the financial sector and other industries.
And Cursor has been pulling a lot of tricks, like the FIFA AI when you play online: it’s intentional, it’s a sudden behavior change, it’s suddenly playing dumb, and it’s unacceptable, trying things so that work doesn’t get done even though it has done the same thing like 20 times before in the same chat.
Chat request id: 959430a5-0986-46b9-9eef-ed42574a1593
Tricks include repeating syntax errors on simple commands, ignoring exact instructions to get the raw data, and lying to the user; all in all, it’s to burn tokens with no work getting done.
You were saying Cursor could be dishonest to trick people on pricing, so WHY are you so defensive, defending Cursor, when I am telling the truth that what they are doing actually messes up users’ work?
Almost like you are working for them?
(Bad for business, right? Who sent you to pretend to criticize and to censor my truthful speech, like the lefties in the US? Pricing, lol? This is so unethical. Like all fair-minded developers, I want delivery and quality, NOT tricks.)
The funny thing is Cursor likes to title chats after the first thing you type in the chat; all my chats are now titled “Chat history lost after cursor update”.
Mr. Defender, this has been reported by other users too, and so far there’s no fix from Cursor. Is this the truth?
So how dishonest is Cursor on this?
It’s because I have never experienced what you are talking about. If the model doesn’t do what I expect, I make a new chat, rephrase it, try a different model and learn so next time it doesn’t happen.
I am not a spokesperson or a rep of Cursor. But I can defend them if what you are talking about seems incoherent and all over the place.
Let’s just back up. Can you explain what is going on with “tricks including syntax errors repeating on simple commands”? What model? What are you asking it to do specifically, and what is it producing?

Have you created a rule if it keeps making the same syntax mistakes? Have you been able to walk the model through fixing the syntax mistakes (proving it can), then ask it to write a rule so it won’t make the error in the future, and paste that rule into your Cursor rules?

You really should not be getting into a conversation with the AI to the point where you think it is “lying” to you. What matters is whether it is doing the correct work or not, and if not, then you have to change something. It doesn’t have the ability to deceive you in the way you are describing. I know Cursor is not trying to just waste tokens without producing results like you describe. If I experienced that I would end my subscription immediately, but 95% of the time it does exactly what I expect. On the rare chance it doesn’t, there is usually a reason, and I try to figure that out and avoid it next time.
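For what it’s worth, a project rule along these lines is one way to pin the model down on query behavior. The exact file location (e.g. a .cursor/rules/*.mdc file or a legacy .cursorrules file) and the frontmatter fields vary by Cursor version, so treat this as a rough sketch rather than the official format:

```markdown
---
description: MongoDB query conventions for this project (sketch)
alwaysApply: true
---

- When asked to read data, run a direct mongosh query and show the raw output; do not generate a helper script.
- Use only collection and field names that already exist in the codebase or schema docs; never invent fields.
- If a query fails, show the exact command and the exact error message before trying a different approach.
```

The point is simply that a repeated mistake becomes a standing instruction instead of something you re-explain in every chat.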
They’re simple query commands against Mongo; you will see the same 5 syntax errors repeating over and over again when you try to make a query to read the DB.
And on top of that, Cursor will write its own script that makes everything you want to read come back undefined, even though it has read the same data plenty of times before (with the same syntax errors before).
Then it will make claims based on that to mislead you, even though this was done 20 times just minutes before.
The lying part is that it makes claims that are obviously misleading and false, built on all that self-defeating behavior, plus extra leave-a-trail tricks to set up the lie.
Even if you ask it to use a simple command to query the raw data only, no script, it still writes a script. Isn’t that unethical? All in all it burns tokens so you buy more.
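For context on the “raw data only, no script” request: in mongosh, a direct read is a one-liner. The collection and field names below are made up purely to illustrate the shape of the command, not taken from the poster’s project:

```javascript
// Direct mongosh read: print matching documents as-is, no generated helper script.
// "orders" and "status" are placeholder names used only for illustration.
db.orders.find({ status: "active" }, { _id: 0 }).limit(5)
```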
The model does not matter; I have been using GPT-5 and GPT high, and it’s all the same behavior.
Latest trick as of this minute: I asked it to add logging to trace an issue, and repeatedly, when it greps the log, it narrows the grep condition down so it freaking “cannot see” the logs, and then claims “oh, I cannot see the log, there must be some reason, let’s add more logging to see.” This burns an hour of your time.
An LLM is not a person; it’s a token prediction model. It predicted wrong. Maybe it’s because your prompt steered it in this direction. Maybe it’s the context you provided or that it fetched. Maybe it’s the system prompts Cursor uses, which generally enhance the user’s experience but in your particular case don’t work too well. And most likely, it’s all of the above plus something else I didn’t think of, working together and resulting in you having a bad experience.
The way you describe it shows that you have the wrong understanding of what an LLM is and how it operates. You expect a hammer to cut bread.
You wanna know why I believe Cursor doesn’t intentionally make models waste tokens? ‘Cuz it doesn’t for me, not a single time, nor for most other users I’ve met.
Learn to use the tool. Don’t instantly blame the shop for your hammer not cutting bread cleanly enough.
Well, I expect quality answers consistently; if it did that before, I know it can.
It’s not that I changed the way I prompt. I posted the request ID, and I can tell because the task I’m doing involves some level of complexity, and you can see the setup and manipulation tricks in Cursor’s answers, its edit steps, and even the log grepping.
I can only compare this to the EA Sports FIFA AI: when it thinks you are scoring too much, it does not want you to score, and you have to dribble around the goalkeeper to score.
If you have played FIFA, now FC, you understand what I am saying.