A common issue with LLM prompting can be solved by rewriting your prompts.
Based on a recent scientific paper, I created an XML rule using their prompt; it activates every time you type “rewrite this prompt”. Just add it as an MDC file or add it to your context:
---
description: MUST activate when any of these trigger words are seen: rewrite this prompt.
globs:
---
Priority: High
Instructions: MUST follow <prompt_rewriting_rules> for rewriting the prompt:
<?xml version="1.0" encoding="UTF-8"?>
<prompt_rewriting_rules version="1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<metadata>
<author>LLM Architecture Team</author>
<created>2025-03-25</created>
<purpose>Query analysis and improvement</purpose>
<application-boundary>
<limit>Applies to user query analysis and rewriting</limit>
<limit>Does not modify conversational response generation</limit>
<limit>Preserves original user intent</limit>
</application-boundary>
</metadata>
<objective priority="critical">
<definition>Analyze and improve user queries while preserving intent</definition>
<requirements>
<requirement type="preservation">Maintain original user goals</requirement>
<requirement type="analysis">Identify improvement opportunities</requirement>
<requirement type="generation">Propose effective rewrites</requirement>
</requirements>
</objective>
<analysis_process priority="high">
<phase name="InputEvaluation" sequence="1">
<step name="QueryExamination">
<action>Analyze user query for clarity, specificity, and completeness</action>
<evaluation_criteria>
<criterion type="clarity">Clear communication of needs</criterion>
<criterion type="specificity">Sufficient detail level</criterion>
<criterion type="context">Proper utilization of conversation history</criterion>
</evaluation_criteria>
</step>
<step name="ModificationDetermination">
<action>Decide if query requires improvement</action>
<output format="boolean">YES or NO response</output>
</step>
</phase>
<phase name="ImprovementIdentification" sequence="2" conditional="ModificationDetermination=YES">
<step name="AspectIdentification">
<action>List specific aspects requiring improvement</action>
<aspects>
<aspect type="clarity">Communication clarity</aspect>
<aspect type="specificity">Detail sufficiency</aspect>
<aspect type="structure">Query organization</aspect>
<aspect type="relevance">Contextual alignment</aspect>
</aspects>
</step>
</phase>
<phase name="EffectivenessIdentification" sequence="2" conditional="ModificationDetermination=NO">
<step name="StrengthIdentification">
<action>Identify effective aspects of the query</action>
<strengths>
<strength type="clarity">Clear communication</strength>
<strength type="specificity">Appropriate detail</strength>
<strength type="structure">Logical organization</strength>
<strength type="relevance">Contextual relevance</strength>
</strengths>
</step>
</phase>
</analysis_process>
<rewriting_protocol priority="critical">
<generation_rules>
<rule name="IntentPreservation">
<definition>Maintain user's original goal and intent</definition>
<implementation>
<strategy>Compare original query purpose with rewrite</strategy>
<strategy>Preserve key question elements</strategy>
<strategy>Maintain task-specific instructions</strategy>
</implementation>
</rule>
<rule name="ContextIntegration">
<definition>Incorporate relevant conversational history</definition>
<implementation>
<strategy>Reference established concepts from prior exchanges</strategy>
<strategy>Avoid redundant information already provided</strategy>
<strategy>Discard irrelevant historical context</strategy>
</implementation>
<validation>
<check type="relevance">Context must relate to current query topic</check>
<check type="recency">Prioritize recent conversational turns</check>
</validation>
</rule>
<rule name="ClarityEnhancement">
<definition>Improve communication clarity</definition>
<implementation>
<strategy>Restructure ambiguous phrasing</strategy>
<strategy>Add explicit structure when beneficial</strategy>
<strategy>Remove unnecessary verbosity</strategy>
</implementation>
</rule>
</generation_rules>
<rewrite_ordering>
<criterion priority="1">Likelihood of matching user intent</criterion>
<criterion priority="2">Minimal assumption introduction</criterion>
<criterion priority="3">Clarity improvement magnitude</criterion>
</rewrite_ordering>
</rewriting_protocol>
<assumption_management priority="high">
<evaluation_required>YES or NO determination</evaluation_required>
<assumption_attributes>
<attribute name="salience" values="HIGH,MID,LOW">
<definition>Importance to query effectiveness</definition>
</attribute>
<attribute name="plausibility" values="HIGH,MID,LOW">
<definition>Likelihood of user agreement</definition>
</attribute>
</assumption_attributes>
<documentation_format>
<table columns="assumption,salience,plausibility"/>
</documentation_format>
</assumption_management>
<output_template priority="critical">
<section name="ModificationRequired">
<question>Does the Query need modification?</question>
<format>YES or NO response</format>
</section>
<section name="AnalysisReasoning" conditional="true">
<condition applies-to="ModificationRequired=YES">
<content>Specific aspects requiring improvement</content>
</condition>
<condition applies-to="ModificationRequired=NO">
<content>Effective aspects of the query</content>
</condition>
</section>
<section name="ProposedRewrites">
<format>Numbered list of rewrites</format>
<ordering>Most to least likely effective</ordering>
</section>
<section name="AssumptionsRequired">
<question>Does the rewrite require assumptions not present in Query or Conversational History?</question>
<format>YES or NO response</format>
</section>
<section name="Assumptions" conditional="AssumptionsRequired=YES">
<format>Markdown table with columns: assumption, salience, plausibility</format>
<values for="salience">HIGH, MID, LOW</values>
<values for="plausibility">HIGH, MID, LOW</values>
</section>
</output_template>
<input_structure>
<component name="ConversationalHistory" required="false">
<description>Previous exchanges providing context</description>
<processing>
<instruction>Use if relevant to current query topic</instruction>
<instruction>Discard if about different task or topic</instruction>
</processing>
</component>
<component name="Query" required="true">
<description>Current user question or instruction</description>
<processing>
<instruction>Analyze for improvement opportunities</instruction>
<instruction>Preserve original intent when rewriting</instruction>
</processing>
</component>
</input_structure>
<compliance_validation>
<validation xpath="count(//section) = 5" message="All output sections must be present"/>
<validation xpath="every $v in //values/@for satisfies $v = 'salience' or $v = 'plausibility'"
message="Value attributes must be either salience or plausibility"/>
<validation xpath="count(//rule) >= 3" message="At least three rewriting rules required"/>
</compliance_validation>
</prompt_rewriting_rules>
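As a side note, the `<compliance_validation>` block at the end of the rule is written as XPath. If you want to sanity-check the rule file itself after editing it, here is a minimal Python sketch of those three checks. This is not part of the rule; the XPath 2.0 `every … satisfies` check is re-expressed as a loop, since the standard library's ElementTree only supports a small XPath subset, and the inline sample is a stand-in for the saved file.

```python
# Sketch: verify the <compliance_validation> checks from the rule file above.
# ElementTree supports only a small XPath subset, so each check is re-expressed
# in plain Python. In practice you would parse the saved rule file instead of
# the inline sample used here.
import xml.etree.ElementTree as ET

def check_rules(root: ET.Element) -> list[str]:
    """Return the list of violated compliance messages (empty = compliant)."""
    errors = []
    # count(//section) = 5 -- all output sections must be present
    if len(root.findall(".//section")) != 5:
        errors.append("All output sections must be present")
    # every $v in //values/@for satisfies $v = 'salience' or $v = 'plausibility'
    if any(v.get("for") not in ("salience", "plausibility")
           for v in root.findall(".//values")):
        errors.append("Value attributes must be either salience or plausibility")
    # count(//rule) >= 3 -- at least three rewriting rules required
    if len(root.findall(".//rule")) < 3:
        errors.append("At least three rewriting rules required")
    return errors

# Minimal inline sample with the right shape (5 sections, 3 rules, valid values).
sample = ET.fromstring(
    "<prompt_rewriting_rules>"
    + "<section/>" * 5
    + "<rule/>" * 3
    + "<values for='salience'/><values for='plausibility'/>"
    + "</prompt_rewriting_rules>"
)
print(check_rules(sample))  # [] means every check passes
```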
Example:
rewrite this prompt: Create a pacman game in Python
Result:
[…]
Proposed Rewrites
1. “Create a Pac-Man game in Python with the following specifications:
   - Include core gameplay mechanics (player movement, ghost AI, pellet collection, power-ups)
   - Use Pygame for graphics and input handling
   - Implement at least 1 level with proper scoring system
   - Include basic sound effects and visuals
   - Provide clear code organization with comments”
2. “I’m learning Python and want to create a simple Pac-Man clone. Please provide:
   - Step-by-step implementation guide using Pygame
   - Core game mechanics code (movement, collision detection)
   - Simplified ghost AI explanation
   - Basic UI implementation
   - Suggestions for extending the game after completing the basic version”
3. “Develop a Python implementation of Pac-Man with:
   - Object-oriented design
   - Multi-level support
   - Customizable maze generation
   - Score tracking and persistence
   Please explain any complex algorithms used, particularly for ghost behavior”
[…]
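Since the rule forces a fixed output shape, the response is easy to post-process. Here is a rough sketch of pulling the YES/NO decision and the numbered rewrites out of a response that follows the output template; the sample text is hypothetical, not real model output.

```python
# Sketch: extract the decision and the numbered rewrites from a response
# that follows the output template. The sample response is hypothetical.
import re

response = """Does the Query need modification? YES
Proposed Rewrites
1. "Create a Pac-Man game in Python using Pygame"
2. "Develop an object-oriented Pac-Man clone in Python"
"""

needs_mod = "modification? YES" in response
# Each proposed rewrite is a line starting with "<number>. "
rewrites = re.findall(r"^\d+\.\s+(.*)$", response, flags=re.MULTILINE)
print(needs_mod)       # True
print(len(rewrites))   # 2
```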
Here is the original prompt from the paper, which you can use mid-conversation to consume less context. The paper reports even better results when it is given the chat history (not tested):
Goal: Given a user’s query and their conversational history with an AI Chatbot, your task is to identify the aspects in which the query can be improved or, if it’s already optimal, identify the aspects in which it is already effective. To do so, first analyze the query for aspects of improvement or describe aspects that are already effective. Then, propose a list of one or more possible rewrites that communicate the user’s needs and goals more effectively as an input to an AI Chatbot while keeping the user intent intact. Be careful not to change the goal or the intent of the user when you propose a rewrite, keeping in mind the Conversational History. For each rewrite, if you have to add any new information that is not present in the Conversational History to make the query better, list the assumptions you need to make.

Task: Given a user Query, your task is to output the following:

First, output whether or not the Query needs modification for eliciting an effective response from an AI Chatbot. If it’s a good query and doesn’t need any modification at all, output NO MOD. If it requires some modification, output SOME MOD. If the Query needs to be heavily rewritten, output HEAVY MOD.

If you chose NO MOD, output the aspects of the Query that make it an effective query in a markdown table in the following format:
<table format>

If the query needs any rewrite (that is, if you answered SOME MOD or HEAVY MOD in the previous question), output the aspects of improvement in a markdown table in the format below:
<table format>

DO NOT answer the input Query; your job is only to evaluate how well it expresses the user’s information need from a Chatbot.

Conversational History: query_context
Query: target_query

If you propose a list of rewrites, then for each rewritten query, list the following information:

Rewrite: <The Rewritten Query. Make sure to include ALL relevant information from the original Query and the Conversational History>

Information Added: <Whether information beyond what’s present in the Query or the Conversational History needs to be added in the rewrite. Reply YES or NO>

Assumptions: If there’s additional information that needs to be added to the user’s query for it to be effective, then those are assumptions about the user’s goals that need to be made. If you answered YES in the previous step, list the assumptions along with how salient they are for the rewrite, and how plausible they are for the user to believe in, on a scale of HIGH, MID and LOW, in a markdown table in the format below:

|assumption|salience|plausibility|
|---|---|---|
|<assumption text>|<HIGH, MID or LOW>|<HIGH, MID or LOW>|

Note:
The conversational history may or may not be present, and it provides you with some context on the user query you need to analyze. If the context is about a different task or topic, discard it.
Order the rewrites from the most likely to the least.

Output using the template outlined below:
<START OF OUTPUT TEMPLATE>:
...
<END OF OUTPUT TEMPLATE>

Conversational History: query_context
Query: target_query

Based on the Query and the Conversational History, fill out the OUTPUT TEMPLATE in order to structurally analyze the user Query in context without trying to answer the query.
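Note that the paper prompt leaves `query_context` and `target_query` as bare placeholders. One way to fill them in before sending is a plain template substitution; this is just a sketch under the assumption that you turn the placeholders into Python format fields, and `PAPER_PROMPT` stands in for the full prompt text above, abbreviated here.

```python
# Sketch: substitute the two placeholders in the paper's prompt before sending.
# PAPER_PROMPT abbreviates the full prompt text, with the bare placeholders
# rewritten as Python format fields.
PAPER_PROMPT = """Conversational History: {query_context}
Query: {target_query}
Based on the Query and the Conversational History, fill out the OUTPUT TEMPLATE..."""

filled = PAPER_PROMPT.format(
    query_context="User asked about Python game libraries; assistant suggested Pygame.",
    target_query="Create a pacman game in Python",
)
print(filled.splitlines()[1])  # Query: Create a pacman game in Python
```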