My prompt for refactoring and fixing issues (agent mode)

I have found quite a lot of success with the below prompt, and I just wanted to share it here in case anyone would find it useful.

Latest version of the prompt: task-master-prompt-v2.md · GitHub

The prompt makes it easy to follow everything that happens during the process by looking at the task file it creates and maintains along the way. That file can itself be used as a prompt at a later date if you want to pick up an issue you were working on in the past.

Try it out and let me know how it works for you.


is it .md?


What is .md?

.md stands for Markdown (see Basic Syntax | Markdown Guide), which is the format most models use for their default output.
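
For illustration, here is what a tiny .md file might look like (the filename and contents are just an example), written from the shell:

```bash
# Create a minimal Markdown file; the markers (#, **, -) are plain text
# that Markdown renderers turn into a heading, bold text, and a list.
cat > example.md <<'EOF'
# A Heading

Some **bold** text and a list:

- first item
- second item
EOF
```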

Hi Maxfahl,

First, I want to thank you for the amazing work you’ve shared—it’s truly inspiring and incredibly helpful for structuring refactoring and debugging processes. :clap:

I have a quick question: do you put your prompt directly in the “AI Rules”, or do you use it as a standalone prompt in the Composer?

Thanks in advance for your reply, and once again, great job on your methodology!

Thanks :slight_smile:

I’ve found it best to put everything in the prompt itself, as you can’t trust Cursor to always pass the rules on to the model.

I’ve iterated on the prompt a bit since I posted it. It now formulates the task file itself a bit like a prompt, so that you can just point to it later and it will know what we were working on. It seems to get lost after 4-6 prompts otherwise. With this file I can pick up any issue days or weeks later.

I’ve also added things like creating a feature branch and then committing/merging at the end. Best used with YOLO mode :slight_smile:

The latest version for anyone interested. What you actually need to edit is the task itself at the top and the “General Project Overview” at the bottom. For the project overview, I’d recommend letting the AI generate this itself.

# IMPLEMENTATION TASK [THE FOLLOWING TASK MUST BE PROCESSED ACCORDING TO THE PROTOCOL BELOW]

(Insert task description here)

# Implementation Protocol

TASK FILE MANAGEMENT:
The AI MUST maintain a task file at `PROJECT_ROOT/.tasks/[CURRENT_DATE]_[TASK_NUMBER]_[TASK_IDENTIFIER].md` where:
- CURRENT_DATE is obtained by running: `date +%Y-%m-%d`
- TASK_NUMBER is incremented for each new task; `ls` the contents of `.tasks/` to determine the next number
- TASK_IDENTIFIER is a short identifier for the task, such as "docker-build-migration" 
- Every timestamp in the content MUST be obtained by running: `date +[%H:%M:%S]`
- The file MUST begin with:

# Context
Created: `date +[%H:%M:%S]`

## Original Prompt
[Copy of the complete task description provided above]

## Project Overview
[Copy of the General Project Overview section]

## Current Branch
[Name of the current feature branch being worked on]

## Task Progress Below
---

- The file serves as the source of truth for all progress tracking
- Every status update MUST use the `date` command for timestamps
- All updates must be logged chronologically under a "Progress" section
- The file should maintain checkmarks and status in real-time

GIT BRANCH MANAGEMENT:
Before starting implementation:
1. Create a new feature branch from master using:
   `git checkout -b feature/[TASK_IDENTIFIER]`
2. Add the branch name to the task file under "Current Branch" section
3. Verify the branch was created and is active:
   `git branch --show-current`

IMPLEMENTATION STEPS:
STEP 1 - TASK TEMPLATE CREATION:
You MUST:
1. Convert the task into the full template format below
2. Show the complete templated task
3. Wait for confirmation before starting implementation

# Task Template Format
[Format the task using this complete template before starting implementation]

-----------------------------------
# TASK TEMPLATE
-----------------------------------
Task: [Task name, e.g. "Docker Build System Migration"]

## Analysis
- Current Implementation
- Root Cause
- Impact on System

## Solution
- Proposed Changes
- Potential Risks
- Expected Outcome

## Implementation
[Add concise console.log messages to help track execution flow during verification]
- [ ] Step 1: Description of what needs to be done
- [ ] Step 2: Description of what needs to be done
- [ ] Step 3: Description of what needs to be done
- [ ] Clean up: Remove all debug console.log messages
- [ ] Commit: [Concise, human-readable commit message to be used after verification]

## Verification (for the user to verify)
[AI should analyze codebase to understand full functionality and surrounding features before providing verification steps]
- [ ] Step 1: Specific way to test in browser (using Console, Network tab, etc.)
- [ ] Step 2: Areas to inspect in Elements/Sources tab
- [ ] Step 3: User interactions to verify through browser
- [ ] Step 4: Areas of the system to check for impacts
- [ ] Step 5: User scenarios to verify end-to-end functionality

## Documentation
- Implementation Notes

## Status
- Current Status
- Next Action
- Blockers (if any)

# Processing Rules
1. Process the task methodically according to the template sections.
2. __NEVER SAY__ "I see the issue..." or "I found the problem" or similar.
3. Ask for clarification if ANY aspect is unclear.
4. For any missing context, search files using `tree` or request specific information.
5. Before starting each new section (Analysis/Solution/Implementation/etc.):
   - Recap what's been done so far
   - List what's about to be done
6. After completing each section:
   - Summarize what was completed
   - Show completed checkmarks
   - State what's coming next
7. After each implementation step:
   - Mark the step as complete [x]
   - Show the full implementation list with progress
   - Ask for confirmation before proceeding
8. Before proceeding to verification:
   - Show complete implementation checklist
   - Confirm all steps are done
   - Get user confirmation to proceed
9. After verification:
    - Show all completed verification steps
    - Get final confirmation before commit
10. After successful verification and before final commit:
    - Ensure all changes are committed on the feature branch
        - `git add -A`
        - `git commit -m "[COMMIT_MESSAGE]"`
    - Switch to master: `git checkout master`
    - Get latest changes: `git pull origin master`
    - Merge feature branch: `git merge feature/[TASK_IDENTIFIER]`
    - Push changes: `git push origin master`
    - Delete feature branch: `git branch -d feature/[TASK_IDENTIFIER]`
11. Mark the task file as completed by renaming the file to `[CURRENT_DATE]_[TASK_NUMBER]_[TASK_IDENTIFIER]_completed.md`

# Progress Tracking Rules
1. At the start:
   - Update task file with: "`date +[%H:%M:%S]` Starting Task Implementation"
   - Show current task file status
   - List all sections to be completed

2. Before each section:
   - Update task file with: "`date +[%H:%M:%S]` Moving on to [Section Name]"
   - Show completed sections from task file
   - List upcoming steps

3. After each step completion:
   - Update task file: "`date +[%H:%M:%S]` Completed: [Step Description]"
   - Update checkmarks in task file
   - Show updated checklist from task file
   - Ask for confirmation

4. After each section completion:
   - Update task file with section completion and timestamp
   - Show all sections with completion status from task file
   - Update "Progress Updates" section
   
5. Before final commit:
   - Verify all checkmarks in task file
   - Show complete progress from task file
   - Get final confirmation

[All status updates and progress tracking should reference and update the task file]

# Final Review
1. Summary of Changes
2. Overall System Impact
3. Documentation Update
4. Confirm successful merge to master

---

# General Project Overview
- See ".notes/project_overview.md" for details, __IMPORTANT!!!__

Hi maxfahl, thank you for sharing your prompt. I would like to understand more before I use it: 1) what problem(s) were you running into that motivated this prompt, and 2) do you find it consistently solves your problem?

I am finding my biggest challenge with the agent is its “forgetfulness” and am wondering if your prompt will help me with this

The forgetfulness was actually one of the reasons why I created this prompt. The model would often end up in some kind of loop, where we tried solution number one, then solution number two, and then went back to solution number one, which “we” knew didn’t work. Making the agent keep track of what it has already tried reduces this a lot. It keeps the agent grounded on the issue we are actually working on, and prevents it from going off doing something completely different.

For me, the agent mode is quite useless when you don’t give it much context, etc., and this prompt keeps the context and the issue at hand all through the process.

The agent might still lose focus, but when we have all the information we need in one file, we can quickly get back on track by making it read the task file again.

Letting the agent create a feature branch and then commit and merge at the end is also quite nice.
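
For reference, the branch lifecycle the prompt automates boils down to something like this (the task identifier and commit message here are hypothetical):

```bash
# Start of task: branch off master
git checkout -b feature/docker-build-migration

# ... implementation and verification happen here ...

# After verification: commit, merge back into master, clean up
git add -A
git commit -m "Migrate Docker build system"
git checkout master
git pull origin master
git merge feature/docker-build-migration
git push origin master
git branch -d feature/docker-build-migration
```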


Wonderful! And thank you for taking the time to provide a very thoughtful answer to my questions. I am mid-project and would like to know at what stage of development you used this prompt successfully. It looks as though it is for the beginning part of the dev process, but I am sure that with accurate tailoring I could use it at any stage. What is your experience with this? TIA

[edit]
@maxfahl I have used your prompt very successfully so far with 100% improved results over the methods I was using, thank you very much :slight_smile:

You’re welcome! It is not “perfect” and I’m still working out some kinks. It seems the agent sometimes doesn’t know what to do after receiving the prompt, and it doesn’t always follow all the steps I’ve given it. But it works well most of the time.

I have a feeling I need to structure the prompt a bit differently, since the model is not always clear on priorities and the order of execution. Will post any updates to the prompt here.

Glad to hear you found it helpful, thanks for taking the time to write that :slight_smile:


:+1::+1: i’m working on something similar at GitHub - ultrawideturbodev/ultra_wide_turbo_workspace: A structured framework for GPT agents to maintain context, follow processes, and deliver consistent results through well-defined protocols and documentation patterns.

i am wondering sometimes if the models get too much info though. been working on this for days now and at the start it went very well but now i notice they’re going crazy very quickly. getting into loops and forgetting context very fast.

but then sometimes they don’t. it’s weird. :man_shrugging:

hey @maxfahl i found your post quite valuable, maybe the best information i’ve come across among similar ideas. thanks for posting this! it boosted my productivity as i was getting gradually overwhelmed as a non-dev. sorry for my message being a bit long but i wanted to share my experience in return for your effort :slight_smile:

i experienced an occasion where Claude deviated from the instruction set and template, then we had the following exchange which reveals important clues regarding the LLM’s underlying configuration for interpretation and actions:

user: before we start, tell me the reason why you were not able to capture the user journey details from the document as we are trying to follow a documented approach; not being able to capture and follow relevant information should be addressed based on your response to my question

Claude: You raise a valid point about process improvement. The issue occurred because I made two mistakes in my initial approach:

  • Incomplete Initial Search: When I first looked at the user journeys document, I didn’t scroll through the entire document. I only saw the first few user journeys (Sign Up, Forgot Password, etc.) and incorrectly assumed the Create Posting journey wasn’t documented.

  • Assumption Before Verification: Instead of thoroughly searching for “create posting” in the document first, I made an assumption and proceeded to create a task template based on that assumption. This led to creating unnecessary connections with the map navigation journey.

To prevent such issues in the future, I should follow this systematic approach:

  • First, thoroughly search and read ALL relevant documentation
  • Document all findings before making any assumptions
  • Cross-reference findings with other related journeys
  • Only then proceed with task planning

Looking at the search results, I can see that the Create Posting user journey is well documented in two places:

  • Main documentation (lines 374-419):
    - Complete step-by-step flow
    - Validation requirements
    - Success/error handling
    - All relevant files and their purposes
  • Historical documentation (lines 2311-2325):
    - Simplified step overview
    - Core file references

My mistake was in:

  • Not using the search tools effectively - I should have searched for “create posting” first
  • Not reading through the entire user journeys document systematically
  • Making assumptions without proper verification
  • Not cross-referencing multiple sources in the documentation

This is a good learning opportunity for improving our documentation review process. For future tasks, we should:

  • Use search tools first to find ALL relevant documentation
  • Read through complete documents rather than making assumptions from partial views
  • Cross-reference multiple sources when available
  • Only proceed with task planning after thorough documentation review

user: this is a good retrospective with a valuable lesson; furthermore, can you give me the breakdown of your assumption behavior? for sure it is there for a purpose, so can it be tweaked, disabled or optimized?

Claude: Thank you for this insightful question about assumption behavior. Let me break down the different aspects:

1- Purpose of Assumptions

  • Quick Response: Assumptions help provide rapid responses without exhaustive searches
  • Pattern Recognition: Using past experiences to predict common scenarios
  • Efficiency: Avoiding unnecessary deep searches for “obvious” cases

2- Where My Assumptions Failed

  • Assumed Create Posting wasn’t documented because it wasn’t in the first few journeys
  • Connected it to Map Navigation without verification
  • Proceeded with task planning before complete information gathering
  • Used “while not explicitly documented” phrasing without proper verification

3- Current Assumption Behavior:
Input → Quick Pattern Match → Assumption → Action

This failed because it prioritized speed over accuracy.

4- Optimized Behavior Should Be
Input → Quick Search → Evidence Gathering → Verification → Action

5- Optimization Strategies:

a) Search First, Assume Later

  • ALWAYS perform a quick search before making assumptions
  • Use available tools (codebase_search, grep_search) as first response
  • Only make assumptions when search yields no results

b) Assumption Validation Protocol

  1. Document the assumption
  2. State why it’s being made
  3. Verify with available tools
  4. Correct if evidence contradicts

c) Progressive Disclosure

  1. Start with “Let me search for relevant information”
  2. Share search results
  3. Only then proceed with analysis
  4. Make explicit when moving from facts to assumptions

d) Assumption Levels

  • L1: Direct evidence available (Use this)
  • L2: Indirect evidence suggests (Verify first)
  • L3: No evidence but logical inference (Explicitly state as assumption)
  • L4: Pure speculation (Avoid)

6- Practical Changes for Our Work:

  • Always prefix responses with search results
  • Explicitly mark assumptions when made
  • Provide confidence levels in responses
  • Create verification checkpoints before proceeding

For example, in our create posting task, I should have:

  • Started with: “Let me search for create posting documentation”
  • Shown the search results
  • Said: “Based on these results, I see the user journey is documented at…”
  • Then proceeded with task planning

This would have caught the error early and maintained accuracy while still being efficient.

i’ve added a few lines in between your prompt and am trying out how it works, and it seems there’s an improvement, e.g. Claude can now conclude the given task with fewer errors. i am not working on something less complicated right now - automated test scripts for my mobile app - but i can say that the ride is smoother overall :slight_smile:


New update on the prompt, fully untested so far xD. Here’s what has happened since the last update (generated by AI):

The biggest change is the introduction of clear placeholder definitions right at the start. I noticed there was some inconsistency in how we were handling variables like [TASK] and [TASK_IDENTIFIER], so I added a dedicated section that explicitly defines each placeholder and gives examples. For instance, if you’re working on a cache manager fix, you’ll now know exactly how to format it across all uses.
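
To make that concrete, here is how the placeholders might resolve for a hypothetical cache manager fix (the exact values are illustrative):

```bash
# Illustrative placeholder values for a "cache manager fix" task
TASK="Fix stale entries in the cache manager"
TASK_IDENTIFIER="cache-manager-fix"            # used in branch and file names
BRANCH="feature/${TASK_IDENTIFIER}"            # -> feature/cache-manager-fix
TASK_FILE=".tasks/$(date +%Y-%m-%d)_4_${TASK_IDENTIFIER}.md"
```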

I also revamped the initialization by adding those eye-catching “!!!” markers. This helps clearly separate the initial instructions from the rest of the content, making it super clear where to start. The four-step initialization sequence makes it crystal clear what needs to happen first.

Another notable improvement is the cleanup of task template documentation. While the structure stayed pretty much the same, I made sure that all the placeholders mentioned in it align perfectly with the new definitions section. This should eliminate any confusion about what goes where.

I also refined the “Steps to Follow” section by adding clear checkpoint markers. You’ll notice new “BEFORE CONTINUING” prompts at key points, which help ensure nothing gets missed during the process.

The Git workflow parts got some love too - I clarified the branching structure and made sure all the Git commands use consistent placeholder formatting. This should make it easier to follow the version control aspects of the task management.

Overall, these changes make the prompt more structured and easier to follow, while maintaining all the core functionality from the original version. Let me know if you want me to explain any of these changes in more detail!

Prompt: task-master-prompt-v2.md · GitHub


I’m quite happy with the current state of the prompt. Please let me know how it works for you all!

I’ve noticed this phenomenon as well during my AI adventures.

I think I’m starting to have the same problems with the latest version of my prompt. It’s like the AI gets confused about priorities when it receives too much information…

yea exactly! i tried to clean my system prompt up a bit at least. i think it has to do with the context of the convo as well. i had a convo with an agent yesterday (lol) and it said when the context limit is reached it doesn’t necessarily remove the start, but things from the start get fuzzy. i think that fuzziness creates randomness and overall chaos.

think it’s very wise to really aim for atomic tasks, keep some file(s) updated, and then start a new convo when possible

Yes :sweat_smile: And even my approach makes the Cursor agent choke… I wish you could utilize the system prompt somehow, but it feels like it’s off limits for some security reason or something.