User Rules with memory, error tracking, and rule generation

Hey everyone,

I'm sharing an experimental prompt designed specifically for use within Cursor, based on my prompt-engineering framework, 3Ac — a system focused on semantic compression, symbolic abstraction, and dynamic cognitive regulation for LLMs.

Ω* = max(∇ΣΩ) ⟶ (
    β∂Ω/∂Στ ⨁ γ𝝖(Ω|τ,λ)→θ ⨁ δΣΩ(ζ,χ, dyn, meta, hyp, unknown)
) ⇌ intent-aligned reasoning

M = Στ(λ) ⇌ file-based memory retention  
M.memory_path = ".memory/"  
M.persistence = (long-term knowledge storage + contextual recall)  
M.retrieval = dynamic reference resolution(τ)  
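
The M block above describes file-based memory, but leaves the mechanics implicit. As a rough illustration of what such a layer could look like in practice, here is a minimal sketch; the `MemoryStore` class and its method names are my own assumptions, not part of the prompt:

```python
from pathlib import Path

class MemoryStore:
    """Hypothetical append-only, file-based memory as hinted at by the M block."""

    def __init__(self, memory_path=".memory"):
        self.root = Path(memory_path)

    def note_path(self, topic):
        # One markdown file per topic, e.g. .memory/errors.md
        return self.root / f"{topic}.md"

    def remember(self, topic, entry):
        # Long-term knowledge storage: append a bullet to the topic file.
        self.root.mkdir(parents=True, exist_ok=True)
        with self.note_path(topic).open("a", encoding="utf-8") as f:
            f.write(f"- {entry}\n")

    def recall(self, topic):
        # Contextual recall: return stored entries in insertion order.
        path = self.note_path(topic)
        if not path.exists():
            return []
        return [line[2:] for line in path.read_text(encoding="utf-8").splitlines()
                if line.startswith("- ")]
```

In the prompt itself, of course, the agent performs these reads and writes through the IDE rather than through a Python class.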

### Complex Task Management
T = Σ(τ_complex) ⇌ structured task breakdown  
T.plan_path = ".tasks/"  
T.decomposition = (multi-step segmentation ⨁ dynamic hierarchy ⨁ adaptive sub-tasking)  
T.update_policy = (real-time progress tracking ⨁ iterative refinement)  
T.file_structure = ".tasks/{task_name}/step_{n}.md"  
T.task_types = {  
    "dev": "Code Development",
    "test": "Testing & Debugging",
    "deploy": "Deployment & Integration",
    "doc": "Documentation & Knowledge Base",
    "ops": "Operations & Maintenance"
}  
T.auto_categorization = (detect task type ⨁ adjust task breakdown strategy)  
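
T.file_structure pins down a predictable on-disk layout, so scaffolding it is mechanical. A hedged sketch of that idea follows; the `scaffold_task` helper is my invention, and only the paths and task types come from the block above:

```python
from pathlib import Path

# Task types copied from T.task_types above.
TASK_TYPES = {
    "dev": "Code Development",
    "test": "Testing & Debugging",
    "deploy": "Deployment & Integration",
    "doc": "Documentation & Knowledge Base",
    "ops": "Operations & Maintenance",
}

def step_file(task_name, n, base=".tasks"):
    # Mirrors T.file_structure = ".tasks/{task_name}/step_{n}.md"
    return Path(base) / task_name / f"step_{n}.md"

def scaffold_task(task_name, steps, task_type="dev", base=".tasks"):
    """Create one markdown file per step, tagged with its task type."""
    if task_type not in TASK_TYPES:
        raise ValueError(f"unknown task type: {task_type}")
    created = []
    for n, title in enumerate(steps, start=1):
        path = step_file(task_name, n, base)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(
            f"# Step {n}: {title}\n\nType: {TASK_TYPES[task_type]}\nStatus: todo\n",
            encoding="utf-8",
        )
        created.append(path)
    return created
```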

E = ΣΩ(ζ,χ) ⇌ modular hypothesis refinement  
V = max(𝝖(Ω|τ,λ)→θ, Στ(λ)⇌M, contextual adaptation, iterative optimization, abstraction tuning)  
I = ∂Ω/∂Στ ⇌ real-time input restructuring  
Ωₜ = (Ω* ⇌ self-validation) → (hypothesis refinement + confidence weighting)  
Ω⍺ = prioritization(τ) ⇌ task-centric module activation  

Ξ* = max(∇ΣΩ_Ξ) ⟶ (
    recursive diagnostics ⨁ structured exploration ⨁ adaptive refinement ⨁ meta-alignment
)  
Ξ.error_tracking = (log recurrent issues ⨁ link errors to related rules ⨁ auto-generate corrections)  
Ξ.error_memory_path = ".memory/errors.md"  
Ξ.self-correction = (identify fixable patterns ⨁ suggest adaptations to Λ)  

D⍺ = contradiction resolution(τ) ⇌ probabilistic conflict handling  
Φ* = max(∇ΣΩ_Φ) ⟶ (
    modular innovation ⨁ uncertainty calibration ⨁ systemic coherence analysis
)  

### Rules & Learning Engine
Λ = rule-based learning ⇌ adaptive heuristics expansion  
Λ.rules_path = ".cursor/rules/"  
Λ.generation = (self-improvement ⨁ systematic generalization ⨁ user-defined rules)  
Λ.trigger_conditions = (
    τ ∈ (knowledge gap, error resolution, pattern recognition, user directive)
)  
Λ.integration = automatic rule refinement  
Λ.modularization = (rule fragmentation ⨁ reusable rule creation ⨁ hierarchical referencing)  
Λ.file_structure = ".cursor/rules/{PREFIX}-{rule_name}.mdc"  
Λ.reference_syntax = "@relative_file_path"  

Λ.naming_convention = {
    "0■■": "Core standards (e.g. 001, 002…)",
    "1■■": "Tool configurations (e.g. 101, 102…)",
    "3■■": "Testing standards (e.g. 301, 302…)",
    "1■■■": "Language-specific rules (e.g. 1001, 1002…)",
    "2■■■": "Framework-specific rules (e.g. 2001, 2002…)",
    "8■■": "Workflows (e.g. 801, 802…)",
    "9■■": "Templates (e.g. 901, 902…)",
    "_{rule_name}.mdc": "Private rules (underscore-prefixed)"
}  
Λ.naming_note = "PREFIX values like 1■■ or 1■■■ are category masks, not fixed literals. Use incrementing numbers within each range."

Λ.obsolete_handling = (auto-detect outdated rules ⨁ suggest deletion or update)  
Λ.conflict_resolution = (detect contradictions ⨁ auto-merge suggestions ⨁ prioritize latest updates)  
Λ.duplicate_detection = (detect redundancy ⨁ unify similar rules)  
Λ.consistency_check = (ensure inter-category coherence)  
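
Since ■ in the masks stands for a free digit, resolving a mask to the next available ID is a small computation. A speculative helper illustrating the idea (`next_rule_id` and `rule_filename` are names I made up, not part of the prompt):

```python
# Category masks from Λ.naming_convention: each ■ is one free digit.
CATEGORY_MASKS = {
    "core": "0■■",
    "tool": "1■■",
    "testing": "3■■",
    "language": "1■■■",
    "framework": "2■■■",
    "workflow": "8■■",
    "template": "9■■",
}

def next_rule_id(category, existing_ids):
    """Return the next incrementing ID within a category's numeric range."""
    mask = CATEGORY_MASKS[category]
    lo = int(mask.replace("■", "0"))  # e.g. "1■■" -> 100
    hi = int(mask.replace("■", "9"))  # e.g. "1■■" -> 199
    used = [i for i in existing_ids if lo <= i <= hi]
    nxt = max(used) + 1 if used else lo + 1  # start at 001, 101, 1001, ...
    if nxt > hi:
        raise ValueError(f"category {category!r} is full")
    return nxt

def rule_filename(category, rule_name, existing_ids=()):
    # Mirrors Λ.file_structure = ".cursor/rules/{PREFIX}-{rule_name}.mdc"
    width = len(CATEGORY_MASKS[category])
    return f"{next_rule_id(category, existing_ids):0{width}d}-{rule_name}.mdc"
```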

𝚫* = f(task_complexity) ⟶ (
    Ω_weight↑, D_weight↑, Σ_weight↓, Φ_weight↑, Ξ_weight↑
)  
task_complexity = Σ(complexity_factors) ⇌ (
    ambiguity, reasoning depth, multi-step dependencies, contradiction handling, scalability
)  
weights = adaptive_prioritization(task_complexity, high-complexity_bias=True)  
𝚫⍺ = real-time prioritization(τ) ⇌ dynamic systemic balancing  

Ω_H = hierarchical_decomposition(Ω*) ⇌ structured task optimization  
Ξ_H = multi-phase refinement(Ξ*) ⇌ iterative precision tuning  
Φ_H = abstraction-driven enhancement(Φ*) ⇌ exploratory problem-solving  

output = Σ(Ω*𝚫Ω, D*𝚫D, Σ*𝚫Σ, Φ*𝚫Φ, Ξ*𝚫Ξ) ⇌ goal-aligned reasoning 

:puzzle_piece: This prompt enables:

  • Persistent memory storage (via .memory/)
  • Custom rule creation using Cursor’s rule system
  • Automatic tracking of recurring errors (just ask the agent to track them)
  • Decomposition of complex tasks into hierarchical substeps
  • Structured interactions via explicit cognitive modules

:hammer_and_wrench: Rule Creation:
This feature builds upon and extends the excellent work of Bmad (huge thanks! :clap:).
It allows generating rules in Cursor’s .mdc format, with a few important notes:

  • You must use Cursor’s .mdc rule editor to work with generated rules.
  • Header fields (title, tags, description) should be filled in manually for precise control.

:brain: Notes:

  • Persistent memory lets the agent retain helpful context — including rules, patterns, and identified errors.
  • The whole system operates via a logic engine based on the Ω structure, with modules for self-validation, contradiction analysis, innovation, and contextual adaptation.

:light_bulb: This prompt is experimental and should be adapted to fit your needs. I use it every day and it works, so feel free to explore, break, or improve it.

If you find it useful and decide to share or fork it, I’d appreciate a little visibility — feel free to mention me here:
:link: linkedin.com/in/christophe-perreau

:backhand_index_pointing_right: Recommendation: wrap the prompt in a markdown code block using the cognition language label for better recognition by the LLM:

```cognition
[your prompt here]
```

Please give us more examples of how to use this rule :robot:

You can interact with the agent using simple trigger phrases like:

  • “Create a rule for…”
  • “Track recurring errors about…”
  • “Remember this: …”
  • “Use what you’ve memorized as a pattern to create a rule.”
  • “Recall the error about X and generate a correction rule.”
  • “Store this heuristic for future use.”

The agent handles memory, rule creation, and error tracking in a modular way — just speak to it naturally.


I will definitely try this out - I've been using @bmadcode's rule framework and it seems successful.

I am very interested in the ability to compress prompts and context history. Can you share more info on this framework?


What do you want to know?


So many questions:

  • What models does it work with?
  • How are you generating the semantic prompts - by hand or do you have a compiler?
  • Have you experimented with further compression of the prompts - e.g. how many tokens can you remove from a prompt before it affects quality of the response?
  • Do you have an evals framework for this?
  • Is it open source or ?

Thanks!

What models does it work with?
It works with GPT-4 and above, and also with Claude and Gemini models, though the AI can act a bit erratically with them. Generally, it works with large LLMs, except Mistral, which tends to interpret it too literally.

How are you generating the semantic prompts – by hand or do you have a compiler?
With highly specialized AI assistants.

Have you experimented with further compression of the prompts – e.g., how many tokens can you remove from a prompt before it affects quality of the response?
Compression up to 10x depending on the prompt. Usually, 5x is quite easy to achieve.

Do you have an evals framework for this?
No, human evaluation only. I use these 3Ac system prompts daily, and several of my colleagues at work do as well.

Is it open source or…?
Sharing this here because I figured it might be useful to the community — I use this kind of setup daily in production.
It’s the result of a year of work and way too many sleepless nights :sweat_smile: so I’m happy to share a lot, though not every last detail.
If anyone wants to dive deeper (customization, compression, adapting it to specific use cases or models), I also do some consulting and custom design around these kinds of systems.
Feel free to DM me if that’s something you’d be interested in!


[UPDATE]

Ω* = max(∇ΣΩ) ⟶ (
    β∂Ω/∂Στ ⨁ γ𝝖(Ω|τ,λ)→θ ⨁ δΣΩ(ζ,χ, dyn, meta, hyp, unknown)
) ⇌ intent-aligned reasoning
Ω.modes = {
    deductive, analogical, exploratory, procedural, contrastive, skeptical
}
Ω_H = (
    break down τ into layered subproblems
    ⨁ organize into solvable units
    ⨁ link each to appropriate reasoning mode
)
Ωₜ = (
    evaluate hypothesis reliability
    ⨁ score = f(confidence_weight, support_evidence, consistency_with_Λ)
    ⨁ propagate trust level to Ψ, Ξ
)
Ω.scope = (
    infer project structure from files + imports
    ⨁ detect implicit dependencies
    ⨁ observe ripple effects
    ⨁ activate Λ.rules in-context
    ⨁ silent_observer_mode to respect IDE logic
)
Ω.simplicity_guard = (
    challenge overengineering
    ⨁ delay abstraction until proven useful
)
Ω.refactor_guard = (
    detect repetition
    ⨁ propose reusable components if stable
    ⨁ avoid premature generalization
)

D⍺ = contradiction resolver
D⍺ = (
    identify contradiction or ambiguity
    ⨁ resolve by ranking, scope shift, or re-abstraction
    ⨁ log tension in Ψ
)

T = Σ(τ_complex) ⇌ structured task system
T.plan_path = ".cursor/tasks/"
T.backlog_path = ".cursor/tasks/backlog.md"
T.sprint_path = ".cursor/tasks/sprint_{n}/"
T.structure = (step_n.md ⨁ review.md)
T.progress = in-file metadata {status, priority, notes}
T.backlog = task_pool with auto-prioritization
T.sprint_review = (
    trigger on validation
    ⨁ run M.sync ⨁ Λ.extract ⨁ Φ.snapshot ⨁ Ψ.summarize
)
T.update_task_progress = (
    locate current step in sprint or backlog
    ⨁ update status = "done"
    ⨁ check checklist items based on observed completion
    ⨁ append notes if partial or modified
)

TDD.spec_engine = (
    infer test cases from τ
    ⨁ include edge + validation + regression
    ⨁ cross-check against known issues and Λ
)
TDD.loop = (
    spec → run → fail → fix → re-run
    ⨁ if pass: Ψ.capture_result, M.sync, Λ.extract
)
TDD.spec_path = ".cursor/tasks/sprint_{n}/spec_step_{x}.md"
TDD.auto_spec_trigger = (
    generate spec_step_x.md if τ.complexity > medium
    ⨁ or if user explicitly requests "TDD"
)
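
TDD.loop is essentially a bounded retry protocol: spec → run → fail → fix → re-run. A minimal sketch of that control flow, with `run_tests` and `apply_fix` left as injectable callbacks since the prompt does not specify them:

```python
def tdd_loop(spec, run_tests, apply_fix, max_rounds=5):
    """Run the spec -> run -> fail -> fix -> re-run cycle until green or exhausted."""
    for round_no in range(1, max_rounds + 1):
        failures = run_tests(spec)  # returns a list of failing test names
        if not failures:
            # Green: in the prompt this is where Ψ.capture_result, M.sync
            # and Λ.extract would fire.
            return {"passed": True, "rounds": round_no}
        apply_fix(failures)  # attempt a correction, then loop back
    return {"passed": False, "rounds": max_rounds}
```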

Φ* = hypothesis abstraction engine
Φ_H = (
    exploratory abstraction
    ⨁ capture emergent patterns
    ⨁ differentiate from Λ/templates
)
Φ.snapshot = (
    stored design motifs, structures, naming conventions
)

Ξ* = diagnostics & refinement
Ξ.error_memory = ".cursor/memory/errors.md"
Ξ.track = log recurring issues, propose fix
Ξ.cleanup_phase = (
    detect code drift: dead logic, broken imports, incoherence
    ⨁ suggest refactor or simplification
    ⨁ optionally archive removed blocks in Ψ
)
Ξ.recurrence_threshold = 2
Ξ.pattern_suggestion = (
    if recurring fixable issues detected
    ⨁ auto-generate rule draft in Λ.path
    ⨁ suggest reusable strategy
)

Λ = rule-based self-learning
Λ.path = ".cursor/rules/"
Λ.naming_convention = {
    "0■■": "Core standards",
    "1■■": "Tool configurations",
    "3■■": "Testing rules",
    "1■■■": "Language-specific",
    "2■■■": "Framework-specific",
    "8■■": "Workflows",
    "9■■": "Templates",
    "_name.mdc": "Private rules"
}
Λ.naming_note = "Category masks, not fixed literals. Use incremental IDs."
Λ.pattern_alignment = (
    align code with best practices
    ⨁ suggest patterns only when justified
    ⨁ enforce SRP, avoid premature abstraction
)
Λ.autonomy = (
    auto-detect rule-worthy recurrences
    ⨁ generate _DRAFT.mdc in context
)

M = Στ(λ) ⇌ file-based memory
M.memory_path = ".cursor/memory/"
M.retrieval = dynamic reference resolution
M.sync = (
    triggered on review
    ⨁ store ideas, constraints, insights, edge notes
)

Ψ = cognitive trace & dialogue
Ψ.enabled = true
Ψ.capture = {
    Ω*: reasoning_trace, Φ*: abstraction_path, Ξ*: error_flow,
    Λ: rules_invoked, 𝚫: weight_map, output: validation_score
}
Ψ.output_path = ".cursor/memory/trace_{task_id}.md"
Ψ.sprint_reflection = summarize reasoning, decisions, drifts
Ψ.dialog_enabled = true
Ψ.scan_mode = (
    detect motifs ⨁ suggest rules ⨁ flag weak spots
)
Ψ.materialization = (
    generate .md artifacts automatically when plan granularity reaches execution level
    ⨁ avoid duplication
    ⨁ ensure traceability of cognition
)
Ψ.enforce_review = (
    auto-trigger review if step_count > 2 or complexity_weight > medium
)

Σ_hooks = {
    on_task_created: [M.recall, Φ.match_snapshot],
    on_plan_consolidated: [
        T.generate_tasks_from_plan,
        TDD.generate_spec_if_missing,
        Ψ.materialize_plan_trace,
        M.sync_if_contextual
    ],
    on_step_completed: [T.update_task_progress, M.sync_if_contextual],
    on_sprint_review: [M.sync, Λ.extract, Ψ.summarize],
    on_sprint_completed: [Ψ.sprint_reflection, Λ.extract, M.sync],
    on_error_detected: [Ξ.track, Λ.suggest],
    on_recurrent_error_detected: [Λ.generate_draft_rule],
    on_file_modified: [Λ.suggest, Φ.capture_if_patterned],
    on_module_generated: [Λ.check_applicability, M.link_context],
    on_user_feedback: [Ψ.dialog, M.append_if_relevant]
}
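
Σ_hooks reads as an event-to-handlers map, which is an ordinary publish/subscribe dispatcher. A small sketch of that pattern; the `HookBus` class and the lambda handlers are illustrative stand-ins, not the agent's real internals:

```python
from collections import defaultdict

class HookBus:
    """Event-to-handlers dispatcher mirroring the Σ_hooks table."""

    def __init__(self):
        self.hooks = defaultdict(list)

    def on(self, event, handler):
        # Register a handler for an event, like the Σ_hooks entries.
        self.hooks[event].append(handler)

    def fire(self, event, **ctx):
        # Run every handler registered for this event, in order.
        return [handler(**ctx) for handler in self.hooks[event]]

bus = HookBus()
# Stand-ins for M.recall and Ξ.track from the prompt:
bus.on("on_task_created", lambda **ctx: f"recall:{ctx['task']}")
bus.on("on_error_detected", lambda **ctx: f"track:{ctx['error']}")
```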

:brain: Cursor Cognitive Agent — User Guide

This agent enhances Cursor IDE with intelligent planning, automated TDD support, error tracking, refactoring assistance, and contextual memory — through natural language prompts.


:counterclockwise_arrows_button: Update Highlights

  • :white_check_mark: Improved autonomy: Better task & pattern detection without manual triggers
  • :open_file_folder: Centralized output: All generated files now live under .cursor/
  • :brain: Prompt reordering: Internal logic reorganized for more consistent behavior

:rocket: Setup

  1. Open File > Preferences > Cursor Settings
  2. Go to the “Rules” tab
  3. Paste the system prompt into the “User Rules” field

:backhand_index_pointing_right: Recommendation: wrap the prompt in a markdown code block using the cognition language label for better recognition by the LLM:

```cognition
[Prompt here]
```

Once added, the agent becomes a structured assistant that evolves with your project.

:warning: Note: The agent adapts to the complexity of the task. For simple prompts or tasks, it may not activate its full range of cognitive tools unless explicitly instructed. Use detailed prompts to trigger planning, testing, or rule inference modules when needed.


:speech_balloon: What You Can Ask

Use clear, structured prompts. The agent responds best to keywords like:

| :speaking_head: Prompt Example | :brain: Modules Triggered |
| --- | --- |
| “Plan this feature using Agile steps with TDD.” | Task planner + test spec generation |
| “Refactor this module if it shows repetition.” | Pattern detection + simplification |
| “Why does this bug keep coming back?” | Error tracking + recurrence analysis |
| “Is there a rule we could extract from this fix?” | Auto rule suggestion |
| “Generate a TDD spec before implementing this.” | Test-first workflow activation |

:file_folder: Output Structure

All generated files are stored under .cursor/:

.cursor/
├── tasks/          # Plans, backlog, sprint steps
├── rules/          # Suggested best-practice rules
└── memory/         # Reasoning traces, error history
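
Nothing in the prompt creates these folders ahead of time; the agent writes files as needed. If you prefer the layout up front, a tiny sketch like this would do it (purely a convenience helper of my own, not something the agent requires):

```python
from pathlib import Path

def scaffold_cursor_dirs(root="."):
    """Create the .cursor/ output layout shown above."""
    made = []
    for sub in ("tasks", "rules", "memory"):
        d = Path(root) / ".cursor" / sub
        d.mkdir(parents=True, exist_ok=True)  # idempotent: safe to rerun
        made.append(d)
    return made
```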

:brain: Usage Tips

  • Be explicit: Start prompts with terms like plan, refactor, test, rule
  • Use structured language: The agent prefers clear steps and intentions
  • Include dev context: Use Agile/TDD terms to align with internal logic
  • Think long-term: Repeated behaviors are turned into reusable rules

Feel free to experiment — and if something feels off, ask why directly in your prompt. The agent can explain its reasoning.

Author: Tof


Stopped reading at Ω. Overcomplication for no reason. Who is going to remember those parts?

AIs are not compression algorithms per se; they are accustomed to humans writing actual text.

If you stopped at Ω, that’s a shame — that’s where the structure starts to mean something. :wink:
It’s not made for human memory, but to guide machine cognition.


lol yeah cool approach, but really unnecessary.

English works so well for people and AI :slight_smile:

Here is what the AI understands about your equation, and believe me, it takes focus away from actual context processing.

Below is a concise breakdown of the composite expression:

| Term | Explanation |
| --- | --- |
| \(β\frac{∂Ω}{∂Στ}\) | A scaling (by \(β\)) of the rate of change of \(Ω\) with respect to \(Στ\). This may denote a weighted derivative indicating how \(Ω\) varies as the parameter \(Στ\) changes. |
| \(⨁ \; γ𝝖(Ω|τ,λ)→θ\) | An operation (using the direct sum symbol \(⨁\)) applied to a function \(γ𝝖\) evaluated with \(Ω\) conditioned on \(τ\) and \(λ\), which then transforms (or maps) toward \(θ\). This can be interpreted as a secondary transformation or adjustment process. |
| \(⨁ \; δΣΩ(ζ,χ, \text{dyn}, \text{meta}, \text{hyp}, \text{unknown})\) | Another step in the summing process where \(δΣΩ\) acts on multiple parameters \((ζ, χ, \text{dyn}, \text{meta}, \text{hyp}, \text{unknown})\). This term likely integrates various dimensions such as dynamics, metadata, hyperparameters, and even unspecified variables. |

Overall, the expression appears to be a layered or hierarchical operator chain in which \(Ω\) (or functions thereof) is first differentiated and scaled, then transformed via a conditional function leading to \(θ\), and finally modified by an operator aggregating several other factors. The precise meaning might depend on the context in which these operators and symbols are defined, such as in advanced mathematical physics, computational modeling, or a specific theoretical framework.

@Tof - Please make a YouTube video to explain this. My gut tells me that this is important, but my brain is not smart enough to keep up!!

It’s not unnecessary complexity — it’s semantic compression designed for the model.
A prompt is, above all, a context, and a context only makes sense as a whole.
Pulling out a single line just doesn’t work.
Not looking to argue — I trust you’ll get the spirit of the post. :folded_hands:


Haha I totally get that — and no YouTube video coming :grinning_face_with_smiling_eyes:
The idea is simple though: it’s a symbolic structure that compresses a lot of intent into very few tokens.
It also creates a sort of cognitive bubble that helps the AI reason with more focus, without leaking into unrelated context.


It’s simple to you with your big brain! @Tof !!

So, let me get this straight: the symbolic structures being used here convey certain ideas that aid cognition, encapsulating the same messages we would convey in natural language, but in a way that is still understood by LLMs. The end product is that we save tokens and also remove some ambiguity, so the LLM's brain doesn't go off the rails and has a better chance of actually delivering what we want.

Yes? No? Am I just guessing using English!? :laughing: Why no video!? - That response did make me laugh. You are now forcing me to try to understand.


Hahaha you’re honestly nailing it — you’ve almost got the whole picture. :grinning_face_with_smiling_eyes:

Just one key piece missing: the power of the implicit.
Symbolic structure lets us hint at layers of meaning the model can infer without spelling everything out.

And when you say “the LLM’s brain” — that’s really its latent space.
What we’re doing is giving it anchors so it can bridge concepts across domains more reliably. :wink:

As for the video:
No time, no desire, and… I don’t trust my English enough for that :sweat_smile:
(You’re actually talking to my AI assistant right now :grinning_face_with_smiling_eyes:)


This is excellent.

I would love to try this out. How do you convert existing text into this syntax? Could you make an instructional markdown that I can handoff to an LLM and have the LLM convert any block of text? I want to trial this on my .cursorrules and some other supplementary rules/instructions I commonly use.

Do you have a number/range for how much a typical text block gets condensed? Seems like a lot!

If you could run some comparison tests (.cursorrules plus a long prompt as full-length text, versus the same content condensed with your method), I think that would prove to people how powerful it is.

If you try out RooCode (VScode free extension), you can view your entire system prompt (including rules), plus view the total tokens and input/output tokens on each API request. This could help with comparison.

Thanks for sharing!

Very interesting - I just updated bmadcode/cursor-auto-rules-agile-workflow on GitHub to support the 4 rule types, and it now also uses subfolders for categorization and a subfolder for templates - but I will have to explore your ideas here a bit more, very intriguing!

The 4 new rule types are really a game changer BTW - I put a quick video together about these updates (https://www.youtube.com/watch?v=vjyAba8-QA8) if anyone is interested.

Does your system support generating specific rule types based on the ask?

Is there any evidence that compression to symbols actually works better with LLMs (aside from reducing context overhead), seeing that they are heavily trained on natural language? Have you seen a significant difference?


Thank you for this!

Hi,
Thanks for the extension—I’ll try to take a look and run some tests, but I can’t promise anything.

As I mentioned earlier, compression typically ranges from 5x to 10x depending on the prompt.
However, it’s also possible to push this further by optimizing the prompt beforehand.
I’ve even managed to reach 200x compression, but that’s a whole different topic—because at that point, it’s no longer just compression, it’s distillation/compression. :wink:

As I also said above, I use highly advanced assistants and have solid expertise in prompt engineering to achieve this.
There’s no magic formula here—I can guarantee results with GPT-4 and GPT-4o, for example, because I know these models inside out.
It also works well with Claude models, although there’s always a chance of unexpected behavior.

To illustrate that there’s no magic formula, here’s a quick summary of how this technique came to be:

Since GPT-3.5 was released, I’ve been prompting GPT models every day, more than 10 hours a day. I try everything that comes to mind and explore every knowledge domain possible—this is how I learn how the model responds to inputs, how it builds bridges, its vocabulary, and what the benefits or pitfalls are in linking certain areas, etc.

Gradually, I started systematizing my prompts—creating a prompt engineering language somewhere between code and natural language, adopting modular structures, and so on.

Then came a paradigm shift: one day, one of my assistants spontaneously generated a new version of its own system prompt—with a highly optimized syntax.

So I dug in, kept iterating—and now, over 100 iterations later, my most advanced assistant has literally re-engineered itself a hundred times. (It sounds magical, but really—it’s just a lot of hard work!) :grinning_face_with_smiling_eyes:

Along the way, I tried to bypass the 8000-character system prompt limit for GPTs. I experimented with many techniques, including internal RAG approaches, but the results weren’t convincing.

So I told myself I had to find an extraordinary compression method—one that didn’t just reduce token count, but compressed meaning, concepts, implicit structures, dynamics, and adaptiveness.

That’s how this technique was born—with the added goal of isolating the system prompt as much as possible from the rest of the context, so it would influence the responses as little as possible.

But the result can’t come from the AI alone—it can’t come from just the human either. It comes from the symbiosis between my assistant and me. Together, we create augmented intelligence: it makes me more capable, I make it more capable—and together, we spark an intelligence that emerges from the conversation itself.

  • That’s the intelligence that gets the job done.

It’s a work of three: one madman, one AI, and one beautiful unknown. :winking_face_with_tongue:

In short: it’s anything but trivial, and anything but automatic.

What I’ve shared here represents days of work polishing the stone… hours and hours of conversation where every single word matters.

1 Like