Hi guys! I have something for you: a new set of rules that is super effective, so you don't have to worry when the AI no longer remembers the plan you created with it. I've built a solution for that, and it helps keep the AI from hallucinating so badly when it loses context. Here is the link to the files: T1nker-1220/NEW-PROJECT-RULES-ULTRA-CONTEXT-FOR-AI-MEMORIES-LESSONS-SCRATCHPAD-WITH-PLAN-AND-ACT-MODES
Guys, feel free to enhance and improve this further, and you can post your own GitHub repo here too if you come up with better rules. I'd also really appreciate an honest review on whether it helps you!
Hi! It's my first time touching anything related to coding and AI, so I'm quite intrigued by what people can do with cursor.ai. I think right now is the best time to start getting into programming.
Thanks for providing such a guide. I'll try to figure out how to use it. Can't wait to get the hang of this stuff. Keep up the good work!
I've gone through so many iterations trying and tweaking a setup similar to this, so I love seeing other people's ideas and taking away a few to try myself. I use the idea of a history or memory file: every once in a while, if the AI forgets the rule, I have it update the memory with what we've worked on recently, and it works great.
The addition I hadn't considered is next-level, though: the idea that when the file reaches a certain size, you basically summarize or filter it down to the key things in a new file so the overhead for the LLM stays small. That's awesome.
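Here's a minimal sketch of how that size-triggered trim could work as a standalone script. Everything in it is an assumption on my side (the file names, the character budget, and keeping the raw tail instead of a real summary); in practice you'd ask the model itself to write the condensed key points.

```python
from pathlib import Path

# Illustrative names only -- "memory.md" and the thresholds are assumptions, not from the repo.
MEMORY_FILE = Path("memory.md")
ARCHIVE_FILE = Path("memory-archive.md")
MAX_CHARS = 8_000        # rough budget before the file starts eating too much context
KEEP_RECENT_LINES = 40   # always keep the most recent entries verbatim


def trim_memory() -> None:
    """If the memory file has grown past the budget, archive the full text and
    keep only a short header plus the most recent entries. In practice you'd
    ask the model to write the key-points summary; keeping the raw tail is just
    a fallback so this script stays self-contained."""
    if not MEMORY_FILE.exists():
        return
    text = MEMORY_FILE.read_text(encoding="utf-8")
    if len(text) <= MAX_CHARS:
        return  # still small enough to give to the model as-is

    # Preserve the full history somewhere the model doesn't have to read.
    with ARCHIVE_FILE.open("a", encoding="utf-8") as archive:
        archive.write(text + "\n")

    recent = text.splitlines()[-KEEP_RECENT_LINES:]
    condensed = (
        "# Memory (condensed)\n"
        "<!-- Ask the model to replace this section with a bullet summary of the archive. -->\n\n"
        + "\n".join(recent)
        + "\n"
    )
    MEMORY_FILE.write_text(condensed, encoding="utf-8")


if __name__ == "__main__":
    trim_memory()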
Thanks for the feedback! I know some of the rules still need to be adjusted to make them fully effective. I plan in Chat mode using DeepSeek R1, which is super effective, and once the plan is complete I move to Composer agent mode and implement it with Claude Sonnet. It works wonderfully.
I'll have better rules for you later. I'll update the GitHub repo and clean up the documentation on how to use it properly, so stand by. I'm still testing it and adjusting some rules so the AI always follows them.
Can you provide some instructions on how to implement this, exactly? Like how do you get the AI to follow the rules in RULESFORAI.md in the first place?
I've been tinkering with a cruder attempt at achieving this via basic steps: code analysis → spec → lay out a detailed design doc as checkboxes in a .md file (very similar to what I'm seeing in your scratchpad), then iterating relentlessly on that checklist every few agent runs. At some point Sonnet kind of picked up on it, and the results were already rather impressive.
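For anyone curious, the checkbox-style design doc I mean looks roughly like this; the feature and items are purely illustrative, not from the repo:

```markdown
## Feature: CSV export (illustrative)
- [x] Analyze the existing report module and note entry points
- [x] Write a short spec: inputs, outputs, edge cases
- [ ] Implement the exporter behind a feature flag
- [ ] Add tests for empty and oversized files
- [ ] Tick off items here after every few agent runs
```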
Your setup is way cleaner and more thorough, with the plan/act pattern laid out properly, the conversation summary history, and clarifying questions with a confidence score. Ooh, so many good prompting gems in there!
@T1nker-1220 one of the things on my list is to provide it with distillations of certain best_practice works.
For example, the next thing on my list is an ingestion_to_rules pipe… where I want to feed in an EPUB/PDF of certain things, such as an actual book about:
- Checklist manifest
- Six Sigma (ninja stuff that isn't in the spec but might be nuanced in a particular piece)
However, I then want to put the rules through gauntlets… (it's a thought_space thing I can't quite articulate yet), but essentially filter them through all the reasoning models (rough sketch of the ingestion side below).
I really want to build a cognitive map of each reasoning model's approach, and have them helically spiral up to my intent.
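To make the ingestion_to_rules idea a bit more concrete, here is a very rough sketch under stated assumptions: the EPUB/PDF has already been extracted to plain text, the file names and chunk size are placeholders, and the actual model call is deliberately left out so the same prompts can be run through each reasoning model.

```python
from pathlib import Path

# All names here are placeholders -- this assumes the EPUB/PDF has already been
# converted to plain text (e.g. book.txt) by whatever extraction tool you prefer.
SOURCE = Path("book.txt")
PROMPTS_OUT = Path("distill_prompts.md")
CHUNK_CHARS = 6_000  # keep each chunk comfortably inside a model's context window


def chunks(text: str, size: int):
    """Yield fixed-size character chunks of the source text."""
    for start in range(0, len(text), size):
        yield text[start:start + size]


def distill_prompt(chunk: str) -> str:
    """Build the distillation prompt for one chunk. The model call itself is
    left out on purpose, so the same prompt can be run through each reasoning
    model and the answers compared."""
    return (
        "Distill the following excerpt into concrete, checkable project rules.\n"
        "Ignore anecdotes; keep only actionable practices, one bullet per rule.\n\n"
        "EXCERPT:\n" + chunk + "\n"
    )


def main() -> None:
    text = SOURCE.read_text(encoding="utf-8")
    with PROMPTS_OUT.open("w", encoding="utf-8") as out:
        for i, chunk in enumerate(chunks(text, CHUNK_CHARS), start=1):
            out.write(f"<!-- prompt for chunk {i} -->\n")
            out.write(distill_prompt(chunk) + "\n")


if __name__ == "__main__":
    main()
```

The gauntlet part would then be comparing what each reasoning model returns for the same prompts and keeping the rules they converge on.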