I built an MCP server that gives Cursor a persistent memory layer and wanted to share it for feedback.
What it does:
Cursor Brain lets the AI remember things across sessions: coding decisions, architecture choices, project context, and so on. When you ask “what did we decide about X?”, the AI can actually recall it.
How it works:
- Stores memories in a local SQLite database (no cloud required)
- Uses hybrid search: lexical (FTS5) + semantic (local Hugging Face embeddings via Xenova/all-MiniLM-L6-v2)
- Runs entirely offline after the initial model download
- Exposes four MCP tools: memory_search, memory_add, memory_delete, memory_stats
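As a rough illustration of what the hybrid search could look like under the hood (the function names, weighting, and score fusion here are my assumptions for the sketch, not cursor-brain's documented internals), one common approach is to combine a normalized lexical score from FTS5 with the cosine similarity of the query and memory embeddings via a weighted sum:

```typescript
// Sketch of hybrid score fusion. `alpha` and the linear blend are
// illustrative assumptions, not cursor-brain's actual implementation.

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Blend a normalized lexical (FTS5/BM25-style) score with a semantic
// (embedding cosine) score; alpha weights the semantic side.
function hybridScore(lexical: number, semantic: number, alpha = 0.5): number {
  return alpha * semantic + (1 - alpha) * lexical;
}

// Identical embeddings give cosine similarity 1, so a memory that also
// matches lexically scores near the top.
const semantic = cosineSimilarity([0.2, 0.8, 0.1], [0.2, 0.8, 0.1]); // → 1
console.log(hybridScore(0.8, semantic)); // blended score, ~0.9
```

A fusion like this lets exact keyword hits (identifiers, error strings) and paraphrased matches ("auth strategy" vs. "login approach") both surface, which pure semantic search alone can miss for rare tokens.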
Quick install:
npm install -g @samhithgardas/cursor-brain
Then add to Cursor MCP config (~/.cursor/mcp.json):
{
  "mcpServers": {
    "cursor-brain": { "command": "npx", "args": ["-y", "@samhithgardas/cursor-brain"] }
  }
}
To make the AI use it automatically, add a Cursor rule:
# Integrate cursor-brain
Whenever generating answers or code:
- Before answering, call the MCP tool `memory_search` with the user’s query (so the agent retrieves relevant memories).
- When the user asks to remember something, call the MCP tool `memory_add` with that content.
- Use `memory_delete` when the user requests forgetting something or cleaning up.
- Use `memory_stats` to gather internal memory metrics when helpful.
npm: @samhithgardas/cursor-brain
Looking for feedback on:
- Is the hybrid search (semantic + lexical) approach useful, or would pure semantic be enough?
- Any features you’d want to see? (e.g., memory expiration, workspace-scoped memories, export/import)
- How do you currently handle “remembering” context across Cursor sessions?