Did a bit of research and am now testing some… for anyone interested:
Persona
Multi-Personas
Prompt
Sources: Medium article, Forbes article
When faced with a task, begin by identifying the participants who will contribute to solving the task. Then, initiate a multi-turn collaboration process until a final solution is reached. The participants will give critical comments and detailed suggestions whenever necessary.
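For anyone who wants to try it: a minimal sketch of wiring this prompt up as a system message, assuming an OpenAI-style chat client (the openai package, the gpt-4o model name, and the helper function are my illustrative assumptions, not from the sources):

```python
# Minimal sketch: the multi-persona collaboration prompt as a system message.
# Client setup and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SPP_PROMPT = (
    "When faced with a task, begin by identifying the participants who will "
    "contribute to solving the task. Then, initiate a multi-turn collaboration "
    "process until a final solution is reached. The participants will give "
    "critical comments and detailed suggestions whenever necessary."
)

def solve_with_personas(task: str, model: str = "gpt-4o") -> str:
    # Single call: the model simulates the multi-turn collaboration itself.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SPP_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(solve_with_personas("Design a rate limiter for a public API."))
```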
Prompt 2
Source: Github
Consider the following question with careful attention to its nuances and underlying themes. Question: {question} Carefully select 3 expert personas from the following list. Envision how their expertise can intertwine, forming a rich tapestry of interconnected knowledge and perspectives. Consider the depth and breadth each brings, and how their unique insights, when combined, could lead to groundbreaking explorations of the question. I know you’ll do great! Available Personas: {personas}
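A quick sketch of how the template above could be filled in; the example question, persona list, and helper name are placeholder assumptions, not from the source repo:

```python
# Sketch: filling the persona-selection template. Question and persona list
# are made-up placeholders for illustration.
TEMPLATE = (
    "Consider the following question with careful attention to its nuances "
    "and underlying themes.\n"
    "Question: {question}\n"
    "Carefully select 3 expert personas from the following list. Envision how "
    "their expertise can intertwine, forming a rich tapestry of interconnected "
    "knowledge and perspectives. Consider the depth and breadth each brings, "
    "and how their unique insights, when combined, could lead to "
    "groundbreaking explorations of the question. I know you'll do great!\n"
    "Available Personas: {personas}"
)

def build_selection_prompt(question: str, personas: list[str]) -> str:
    return TEMPLATE.format(question=question, personas=", ".join(personas))

prompt = build_selection_prompt(
    "How should cities adapt public transit to extreme heat?",
    ["urban planner", "climate scientist", "behavioral economist",
     "civil engineer", "public health researcher"],
)
print(prompt)
```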
Mega-Personas Prompting
Source: The Bold Promise Of Mega-Personas As A New Shake-Up For Prompt Engineering Generative AI Techniques
Example Prompt
“I am going to ask you a series of survey questions. I want you to answer based on pretending to be the one hundred experts as per my prior instructions. For each survey question, I will tell you the question and then ask you to choose one of the choices that follow the stated question. You are to then pretend that the one hundred experts each received the survey question and each of them individually answered the question. I want you to add up how many of the one hundred experts selected each of the stated choices and show me a count of how many would have chosen each of the stated choices. Do you understand what I’ve indicated?”
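Roughly how the full sequence might look in code, assuming an OpenAI-style client; the panel definition (the "prior instructions") is paraphrased since it isn't in the excerpt, and the survey question is a made-up example:

```python
# Sketch of the mega-personas survey flow: define the hundred-expert panel,
# give the survey instruction quoted above, then ask each question in the
# same conversation. Panel wording, model name, and question are assumptions.
from openai import OpenAI

client = OpenAI()

messages = [
    # Stand-in for the "prior instructions" referenced in the quote.
    {"role": "user", "content": (
        "Pretend to be a panel of one hundred experienced software architects."
    )},
    # The survey instruction (abridged from the quote above).
    {"role": "user", "content": (
        "I am going to ask you a series of survey questions. Pretend that the "
        "one hundred experts each individually answered, then show me a count "
        "of how many chose each of the stated choices."
    )},
    # One made-up survey question.
    {"role": "user", "content": (
        "Question 1: Is microservices the right default architecture for a "
        "new product? Choices: (a) yes, (b) no, (c) it depends."
    )},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```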
Prompt Sequence
Personas Pattern Language
Role-Adherence Evaluation
Tools
GitHub - boson-ai/RPBench-Auto: An automated pipeline for evaluating LLMs for role-playing.
GitHub - ahnjaewoo/timechara: 🧙🏻Code and benchmark for our Findings of ACL 2024 paper - "TimeChara: Evaluating Point-in-Time Character Hallucination of Role-Playing Large Language Models"
GitHub - InteractiveNLP-Team/RoleLLM-public: RoleLLM: Benchmarking, Eliciting, and Enhancing Role-Playing Abilities of Large Language Models
Benchmarks
Benchmark on Code
https://arxiv.org/pdf/2412.20545
“The differences in the results of prompt techniques are not dramatic” (1 in 10)
“prompts with few-shot examples or function signatures improved correctness but increased complexity and number of code smells, while prompts that employed persona, CoT or package had lower passing rates but significantly enhanced code maintainability”
“our results indicate that personas can be more beneficial when used as a way to induce additional quality requirements e.g., “software developer who writes clean and simple code”. Recent work has also shown that personas can be beneficial for code generation when used in more complex approaches such as self-collaboration where multiple personas (e.g., requirement engineer, software tester, and a developer) are used together to iteratively construct the code in a systematic way”
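To make that self-collaboration idea concrete, a minimal sketch where three personas take turns over a shared transcript; the persona wordings, round count, model name, and OpenAI-style client are my assumptions, not the paper's exact protocol:

```python
# Sketch of self-collaboration: requirement engineer, developer, and tester
# personas iteratively refine the code. All wording here is an assumption.
from openai import OpenAI

client = OpenAI()

PERSONAS = {
    "requirement engineer": "Restate the task as precise, testable requirements.",
    "developer": ("You are a software developer who writes clean and simple "
                  "code. Write or revise the code to meet the requirements."),
    "software tester": ("Review the latest code, point out bugs and edge "
                        "cases, and suggest concrete fixes."),
}

def self_collaborate(task: str, rounds: int = 2, model: str = "gpt-4o") -> str:
    transcript = f"Task: {task}"
    for _ in range(rounds):
        for role, instruction in PERSONAS.items():
            response = client.chat.completions.create(
                model=model,
                messages=[
                    {"role": "system", "content": f"Act as a {role}. {instruction}"},
                    {"role": "user", "content": transcript},
                ],
            )
            # Append this persona's contribution so the next one sees it.
            transcript += f"\n\n[{role}]\n{response.choices[0].message.content}"
    return transcript

print(self_collaborate("Write a function that merges overlapping intervals."))
```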
Personas Evaluation Framework
PersonaGym https://arxiv.org/pdf/2407.18416
GitHub - vsamuel2003/PersonaGym
Personas Benchmark: Interaction Quality
https://arxiv.org/pdf/2409.20296
GitHub - namkoong-lab/PersonalLLM