Explicit Reiteration of Key Rules

I’ve been experimenting with the rules, and I’ve found that they don’t work reliably unless I restate all of them at the end of every message. Having a set of explicit rules that is automatically appended to each message would mitigate this.
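
To make it concrete, here is a minimal sketch of the workaround I currently do by hand (the names `RULES` and `with_rules` are placeholders of mine, not anything Cursor exposes):

```python
# Hypothetical helper: re-append the rules to every outgoing message so
# they always sit near the most recent tokens.
RULES = """Rules:
1. Be blunt.
2. Don't apologize."""

def with_rules(user_message: str) -> str:
    # Restating the rules at the end of the message is what currently
    # keeps the model following them for me.
    return f"{user_message}\n\n{RULES}"

print(with_rules("Refactor this function to use type hints."))
```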

Would you mind sharing your rules and, if possible, an example? You can send them here or message them to me via the chat if you prefer not to make them visible to everyone. This way, we can take a more in-depth look.

I see this as well: when my “Rules for AI” prompt is fairly large and I then include any reasonably large prompt in the chat, it feels like the rules get pushed out of the model’s context almost immediately.
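
Pure speculation about Cursor’s internals, but here is a toy illustration of why a large prompt could evict the rules, assuming the context is assembled front-to-back and then truncated from the front to fit the window:

```python
def build_context(rules: str, history: list[str], max_chars: int) -> str:
    # Assemble the rules first, followed by the chat history.
    context = "\n".join([rules, *history])
    # Naive truncation keeps only the tail, so once the history is long
    # enough, the rules at the front are silently dropped.
    return context[-max_chars:]

ctx = build_context("1. Use MyPy.", ["x" * 5000], max_chars=4000)
print("1. Use MyPy." in ctx)  # False: the rules were pushed out
```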

Here are my rules:

  1. Use MyPy.
  2. Don’t apologize.
  3. Be blunt.

It’s like @fire said: sometimes it follows the rules properly, but when the prompt is large it doesn’t always. I find this is more apparent if I add a fourth rule, “4. Respond in the same language as the prompt”.

When it behaves like this, I can write a prompt in German and it responds in German. But when I switch back to English, it might still respond in German unless I restate all the rules in the prompt.

I was debating whether this problem should be called a bug, but I reasoned that it was more likely a GPT issue than a Cursor issue. I’ve seen writing-focused LLMs that use separate fields for context: I recall NovelAI having one field for long-term context and another for more direct information about the current situation. I wonder if something similar could apply here.
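
Roughly what I mean, as a sketch (the field names are from memory and the layout is my guess, not NovelAI’s actual implementation): long-term context sits at the top, and a short note is injected near the end so it survives even when the middle grows:

```python
def assemble_prompt(memory: str, transcript: str, note: str) -> str:
    # The note rides near the end of the prompt, so even a very long
    # transcript cannot push it out of the model's recent context.
    return f"{memory}\n\n{transcript}\n\n[Note: {note}]"

print(assemble_prompt(
    memory="Facts the model should always know.",
    transcript="...an arbitrarily long chat history...",
    note="Respond in the same language as the prompt.",
))
```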

Also, I understand that it’s difficult to get GPT to play nice with other languages in general, but I found this was a good way to demonstrate why I’m asking for this.

We will check whether the “Rules for AI” can get pushed out of the prompt by other context. Thank you both for your reports!
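
In the meantime, here is a rough way to check locally whether your rules plus a large prompt even fit in the window (tiktoken’s `cl100k_base` encoding approximates GPT-4-era models; the 8192-token window is an assumption, adjust it for your model):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_window(rules: str, prompt: str, window: int = 8192) -> bool:
    # Count the tokens the rules and the prompt consume together;
    # anything over the window has to be dropped somewhere.
    used = len(enc.encode(rules)) + len(enc.encode(prompt))
    return used <= window
```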