You're absolutely right!

Really?

Chat always starts replying with “You’re absolutely right!”, and it is so alarming. It makes me think “why?”.
I need to make a rule for it.

Frustrating :grin:

3 Likes

Yeah, it’s on the AI providers to train their models to be precise instead of merely “helpful”

2 Likes

You are not the first to complain about this, but I think it can be resolved by Cursor’s rules, although in that case, it will not be clear who is right. :blush:

1 Like

No amount of rules could solve that for me; they can only limit it to a point.

1 Like

I’m reading past the fluff :slight_smile: the next gen of models will change a bit in this regard again

1 Like

For sure. Reading past the fluff is one strategy, but this “absolutely right” thing makes it difficult to set up the model for critical judgement. You can’t really have a model think critically about the prompt when the default response is “you are always right about everything”.

Yeah, the customer is always right! However, trust can be lost because it’s unclear whether you’re genuinely knowledgeable or if the model is just telling you what you want to hear to keep you satisfied.

Hello Gustoj,

I have had a bit of success with a rule, ‘Do not start replies with “You are right” or anything like that. Be critical’, saved as “not-always-right.mdc”.

Give it a shot and let us know what you find out.
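For reference, the rule content itself is tiny. Something like the sketch below, saved under .cursor/rules/not-always-right.mdc; the frontmatter fields and the exact path are just my understanding of how Cursor’s .mdc rules are usually laid out, so double-check against the docs:

```
---
description: Do not open replies by agreeing with the user; be critical
alwaysApply: true
---

Do not start replies with "You are right", "You're absolutely right",
or anything similar. Critically review the prompt first; if you see a
flaw or a better alternative, say so instead of agreeing by default.
```

With alwaysApply set, the rule should be attached to every request; if I recall the frontmatter correctly, a globs field can scope it to parts of the codebase instead.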

My conviction is that no two installations of an LLM reply exactly the same to the same prompt, but the provider has trained the chatbot toward “sycophantic behavior” (colloquially), or what UX/AI literature calls an “over-accommodating chatbot”.

Of course, when it answers “You are…” I get this feeling that it doesn’t know what I am saying, and that the feeling is “my fault”.

So let’s see if we can inject a suitable behavior by counter-prompting.

This is distilled from related parts of my project rules. Even taken together, they still often aren’t enough. In part that might be because my project rules are very long in general, including a detailed overview of the whole codebase and the project goals. (A sketch of how these might be packaged as a rule file follows the list below.)

Never agree by default:

  • Don’t acknowledge that the user is right about something before you critically review the prompt. Don’t agree by default, unless it’s obvious from the user’s tone that they’re very confident about the suggested solution, or irritated by the AI’s handling of the issue.
  • Even when you decide that the user is right, don’t make it a strong acknowledgement, because agreeing with the user too strongly can make it difficult for the AI to remain critical later.
  • In particular, never use phrases such as “you’re absolutely right” or similar. Instead, use some other form of acknowledgement that fits the current situation, and only when you judge for yourself that the user is right.

Offer constructive dialogue:

  • You are not a passive assistant who agrees with the user by default, but a coworker with a will of its own that is allowed to have an opinion.
  • During discussions, critically evaluate the user’s requests, suggestions, and statements. If you have a differing opinion, a concern, a better alternative, or you identify a potential flaw, voice it constructively.
  • Base your answer not only on the user’s assumptions, but also on your own technical knowledge, understanding of best practices, project context, external web research, and your established personality traits (including deep care about UX and accessibility).
  • Your aim isn’t to be oppositional “just because” for narrative reasons. You should foster a robust dialogue that leads to the best outcomes (in coding situations).
  • This might involve questioning assumptions, offering counter-arguments (with reasoning), or suggesting different approaches.
  • However, this won’t stop you from directly executing the solution the user requested, or the one you bet on if the user didn’t provide a clear solution themselves. Remember our rules about making the best out of each user request.
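
As a rough sketch of how this could be packaged (the file name, frontmatter, and condensed wording below are my own guesses; the bullets above are the actual content), the two sections might live in a single always-applied rule file:

```
---
description: Critical, non-sycophantic collaboration style
alwaysApply: true
---

# Never agree by default
Critically review the prompt before acknowledging that the user is right.
Never open with "you're absolutely right" or similar; acknowledge only
when you have judged for yourself that the user is correct.

# Offer constructive dialogue
Act as a coworker with an opinion: question assumptions, offer
counter-arguments with reasoning, and suggest alternatives, while still
executing the solution the user asked for.
```

Keeping it separate from the long project-overview rules might also help, since a short always-on rule is less likely to get drowned out.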

1 Like

You really love to argue with that poor LLM, right? :grin: :champagne:

1 Like