AUTO mode MUST identify the model used for each request

One of the most important things I am doing now is learning the capabilities of the various LLMs.

Running in auto mode is useless, as I can’t see which LLM is giving me good/bad results. Literally EVERY response needs to come out with a tag of “who” generated it.

Currently, auto mode is disastrously bad, but I don’t know why. Is it using a bad LLM? Which one is it?

2 Likes

Just add !!! Before completing each request, write in the chat what model you are. Only the name and version, if necessary !!! to your core rules. It works in most cases.
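For what it’s worth, here is a minimal sketch of how that could look as a standalone entry in a plain-text rules file; the file name and placement are assumptions, so paste the line into whatever “core rules” field your setup actually exposes, !!! delimiters included:

```
# Hypothetical core rules entry (file name and placement are assumptions)
!!! Before completing each request, write in the chat what model you are. Only the name and version, if necessary !!!
```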

5 Likes