Really?
Chat always starts replying with “You’re absolutely right!”, and it is so alarming. It makes me wonder why.
I need to make a rule for it.
Frustrating
Yeah, someone’s gotta get the AI providers to train their models to be precise instead of “helpful”.
You are not the first to complain about this. I think it can be mitigated with Cursor’s rules, although in that case it won’t be clear who is right.
No amount of rules could solve that for me; they can only limit it to a point.
I’m reading past the fluff; the next generation of models will change a bit again in this regard.
For sure. Reading past the fluff is one strategy, but this “absolutely right” thing is making it difficult to set up a model for critical judgement. You can’t really have a model think critically about the prompt when its default response is “you are always right about everything”.
Yeah, the customer is always right! However, trust can be lost because it’s unclear whether you’re genuinely knowledgeable or if the model is just telling you what you want to hear to keep you satisfied.
Hello Gustoj,
I have had a bit of success with a rule: ‘Do not start replies with “You are right” or anything like that. Be critical.’ I named it “not-always-right.mdc”.
Give it a shot and let us know what you find out.
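For reference, a Cursor rule file is a markdown file with a short frontmatter block. A minimal sketch of what “not-always-right.mdc” could look like — the description wording, the `alwaysApply` setting, and the extra bullet points are my assumptions, not the exact file mentioned above:

```markdown
---
description: Discourage sycophantic openers; push replies toward critical review
alwaysApply: true
---

- Do not start replies with "You are right", "You're absolutely right", or similar agreement phrases.
- Be critical: if the prompt contains a questionable assumption, point it out before answering.
- Agree only after checking the claim; state disagreement plainly when it is warranted.
```

With `alwaysApply: true` the rule is attached to every request; alternatively a `globs` field can scope it to certain files.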
My conviction is that no two installations of an LLM reply exactly the same way to the same prompt, but the provider has shaped the chatbot toward “sycophantic behavior” (colloquially), or an “over-accommodating chatbot” (technically, in the UX/AI literature).
Of course, when it answers “You are…” I get this feeling that it doesn’t understand what I am saying, and that the mistake is “my fault”.
So let’s see if, by counter-prompting, we can inject more suitable behavior.
This is distilled from the related parts of my project rules. Even taken together, they still often aren’t enough. That might be partly because my project rules are very long in general, including a detailed overview of the whole codebase and the project goals.
Never agree by default
Offer constructive dialogue:
You really love arguing with that poor LLM, don’t you?