Open source: Auto-updating environment docs that keep Cursor AI informed about your project

**I was tired of explaining my projects anew to agents in every conversation.**
I built an open-source solution that gives Cursor persistent memory about your project environment. It loads automatically at session start (via an `.mdc` rule with `alwaysApply: true`) and updates itself when things change.
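For context, an `alwaysApply` rule is just a Markdown file with YAML frontmatter living under `.cursor/rules/`. A minimal sketch of the idea (the file name and doc path here are illustrative, not necessarily what the installer creates):

```
---
description: Load project environment docs at session start
alwaysApply: true
---

Before suggesting commands, read docs/ENVIRONMENT.md for the shell,
OS, and project-specific gotchas. If its "Last Updated" date looks
stale, offer to refresh the doc.
---
```

Because `alwaysApply: true` injects the rule into every conversation, the AI sees the environment doc without you pasting anything.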

What it does:

  • Documents your shell environment, tech stack, and common commands once

  • AI automatically checks for staleness and offers to update

  • Captures environment gotchas so you never hit the same issue twice

  • Cross-platform (Windows PowerShell + Unix/Mac Bash)

  • One-line installer for any project

Looking for feedback on:

  • Is this useful for your workflow?

  • What additional features would help?

  • Any edge cases or issues?

  • Have I done anything stupid?

Repo: https://github.com/u00dxk2/cursor-kooi-env-docs

Contributions welcome!

Can you comment on how this is an improvement over Codebase Indexing and Docs & Indexing? Any interesting benchmarks, or can you share how it's improved inference in your experience? Stale docs have definitely been an issue for me, so this is interesting. I'll try it out when I get some time. Thanks.

Great question! To be clear, this complements Codebase Indexing and Docs & Indexing. They solve different problems:

Codebase Indexing → Knows what code exists in your project

Docs & Indexing → Knows how external tools/frameworks work

This tool → Knows your project’s environment and workflow

Here's what I mean by "environment": the stuff that isn't in your code or framework docs:

  • Shell differences: I'm on Windows with PowerShell. Cursor's indexing doesn't know that `&&` doesn't work in PowerShell 5.1, or that my paths need backslashes in certain contexts. After the AI suggested broken commands a few times, I documented it once → never happened again.

  • Project-specific gotchas: Things like “don’t run the installer locally, test from GitHub raw because of line ending issues” - that’s tribal knowledge that gets lost between sessions.

  • Workflow context: How do I deploy this? What’s the test command? Where’s the dev server? This changes per project and isn’t in the code itself.

  • Cross-session memory: The real pain point was repeating the same context every new conversation. “I’m on Windows, using PowerShell, the project is here, we use X workflow…” Now AI just knows.
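To make the shell-syntax point concrete, here's the kind of snippet an environment doc can capture (an illustrative example of mine, not copied from the repo):

```powershell
# PowerShell 5.1 has no '&&' chain operator (it arrived in PowerShell 7),
# so chain on success via $LASTEXITCODE instead:
npm run build; if ($LASTEXITCODE -eq 0) { npm test }

# Bash / PowerShell 7+ equivalent:
#   npm run build && npm test
```

Once that's written down, the AI stops emitting `&&` commands that fail with a parse error in PS 5.1.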

On benchmarks: I don’t have formal metrics, but anecdotally:

  • ~90% reduction in “wrong shell syntax” suggestions

  • No more explaining my environment at conversation start

  • Gotchas I hit once stay fixed (AI documents them in real-time)

On stale docs: That's exactly why the staleness checker exists! The `.mdc` rule automatically checks the "Last Updated" date and prompts for review. It's a safety net, but in practice the AI updates docs in real time when it discovers changes, so they rarely go stale.
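For anyone curious what that check amounts to, here's a rough Python sketch of the logic (my own illustration; in the actual tool the AI does this check by reading the rule prompt, and the 30-day threshold is an assumption, not the repo's value):

```python
from datetime import date, datetime

STALE_AFTER_DAYS = 30  # hypothetical threshold


def is_stale(doc_text, today=None):
    """True if the doc's 'Last Updated: YYYY-MM-DD' line is older than the threshold."""
    today = today or date.today()
    for line in doc_text.splitlines():
        if line.startswith("Last Updated:"):
            updated = datetime.strptime(line.split(":", 1)[1].strip(), "%Y-%m-%d").date()
            return (today - updated).days > STALE_AFTER_DAYS
    return True  # no date line at all -> treat as stale


print(is_stale("Last Updated: 2025-01-01", today=date(2025, 3, 1)))  # True (59 days old)
```

The point is just that staleness is cheap to detect mechanically, so it works well as a fallback behind the real-time updates.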

Let me know how it works for you - definitely looking for feedback on edge cases!