Cursor-memory — Persistent, Searchable Memory for Cursor AI

Hi everyone,

I built an MCP server that gives Cursor persistent memory across chat sessions.

The problem

Every new chat starts fresh. Decisions, architecture context, debugging sessions — all gone. You end up re-explaining the same things over and over, or maintaining .md files manually.

What cursor-memory does

It lets you save and recall memories using simple commands in chat:

  • /memo — AI summarizes the conversation into a structured memo (Decisions → Key Details → Context → Next Steps), auto-tags it, and stores it locally
  • /recall [query] — search your memories by meaning
  • /forget [query] — delete memories you no longer need
  • Auto-recall — AI detects when your question refers to something you’ve discussed before and searches automatically. No command needed.

Key features

  • You control what’s saved — nothing is saved without you saying /memo
  • Semantic search — hybrid FTS5 + vector search. Finds by meaning, not just exact keywords
  • Repo-scoped + global — repo memories stay isolated per project, global memories are available everywhere
  • Multilingual — save in English, search in Japanese. 100+ languages supported
  • Handles long content — long discussions are chunked into overlapping segments so nothing gets lost
  • 100% local — SQLite + local embeddings (multilingual-e5). No cloud, no API keys, no telemetry
  • Cross-platform — macOS, Linux, Windows
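To make the "handles long content" behavior concrete, here is a minimal sketch of word-based chunking with overlap. The function name and exact splitting logic are my illustration, not the package's actual code; the 350-word chunk size and 50-word overlap come from the numbers stated in this post.

```python
def chunk_words(text: str, chunk_size: int = 350, overlap: int = 50) -> list[str]:
    """Split text into overlapping word-based chunks.

    Each chunk holds up to `chunk_size` words; consecutive chunks share
    `overlap` words so context at the boundaries is not lost.
    """
    words = text.split()
    if not words:
        return []
    if len(words) <= chunk_size:
        return [text]
    step = chunk_size - overlap  # advance 300 words per chunk by default
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks
```

With the defaults, a 700-word memo becomes three chunks, and each chunk after the first repeats the last 50 words of the previous one, so a sentence straddling a boundary is still searchable as a whole.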

Installation

npm install -g cursor-memory

cursor-memory setup

That’s it. Two commands. It configures the MCP server, installs the AI behavior rules, and downloads the embedding model. Restart Cursor and you’re ready.

How it works under the hood

  • Storage: SQLite with FTS5 for full-text search
  • Embeddings: multilingual-e5 models via ONNX Runtime (runs locally in a worker thread)
  • Search: hybrid scoring — 40% keyword match + 60% vector similarity
  • Long content is split into overlapping chunks (350 words, with a 50-word overlap) to stay within the model’s 512-token window
  • Three model sizes available: small (~50MB), medium (~115MB), large (~270MB)
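The hybrid scoring step can be sketched as a simple weighted sum. The helper names below and the assumption that both inputs are already normalized to [0, 1] are mine; the real implementation may rank FTS5 results differently, but the 40/60 split matches the weights described above.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def hybrid_score(keyword_score: float, vector_score: float) -> float:
    """Blend a normalized keyword (FTS5) score with a vector-similarity
    score: 40% keyword match + 60% vector similarity."""
    return 0.4 * keyword_score + 0.6 * vector_score
```

The weighting means a memory that matches only semantically (no shared keywords) can still outrank a weak keyword-only hit, since the vector side carries the majority of the score.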

Links

- GitHub: https://github.com/tranhuucanh/cursor-memory

- npm: https://www.npmjs.com/package/cursor-memory

Would love feedback from the community. Especially interested in:

  • Any issues on Windows/Linux (I primarily develop on macOS)

Thanks!