Feedback on cursor-brain

I built an MCP server that gives Cursor a persistent memory layer and wanted to share it for feedback.

What it does:

Cursor Brain lets the AI remember things across sessions - coding decisions, architecture choices, project context, etc. When you ask “what did we decide about X?”, the AI can actually recall it.

How it works:

  • Stores memories in a local SQLite database (no cloud required)

  • Uses hybrid search: lexical (FTS5) + semantic (local Hugging Face embeddings via Xenova/all-MiniLM-L6-v2)

  • Runs entirely offline after initial model download

  • Exposes 4 MCP tools: memory_search, memory_add, memory_delete, memory_stats
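For those curious how the lexical and semantic signals might be combined, here is a minimal TypeScript sketch of hybrid score fusion. This is not cursor-brain's actual implementation: the function names, the normalization, and the `alpha` weight are all assumptions.

```typescript
// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Map a non-negative, lower-is-better lexical score into (0, 1].
// (FTS5's bm25() is negative by default and would need remapping first.)
function normalizeLexical(score: number): number {
  return 1 / (1 + Math.max(0, score));
}

// Blend the two signals; alpha trades off keyword match vs. semantic similarity.
function hybridScore(
  lexical: number,
  query: number[],
  doc: number[],
  alpha = 0.5
): number {
  return alpha * normalizeLexical(lexical) + (1 - alpha) * cosineSimilarity(query, doc);
}

// Identical vectors and a perfect lexical score give the maximum hybrid score.
console.log(hybridScore(0, [1, 0], [1, 0])); // → 1
```

The appeal of the hybrid approach is that exact identifiers ("FTS5", a function name) rank well lexically even when embeddings miss them, while paraphrased queries still match semantically.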

Quick install:

```shell
npm install -g @samhithgardas/cursor-brain
```

Then add it to your Cursor MCP config (`~/.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "cursor-brain": {
      "command": "npx",
      "args": ["-y", "@samhithgardas/cursor-brain"]
    }
  }
}
```

To make the AI use it automatically, add a Cursor rule:

```markdown
# Integrate cursor-brain

Whenever generating answers or code:

- Before answering, call the MCP tool `memory_search` with the user's query so the agent retrieves relevant memories.
- When the user asks to remember something, call the MCP tool `memory_add` with that content.
- Use `memory_delete` when the user requests forgetting something or cleaning up.
- Use `memory_stats` to gather internal memory metrics when helpful.
```
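For context on what these rule lines trigger under the hood: an MCP tool invocation is a JSON-RPC `tools/call` request. A sketch of a `memory_search` call (the `query` argument name is an assumption; check the server's tool schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "memory_search",
    "arguments": { "query": "what did we decide about auth?" }
  }
}
```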

GitHub: samhith123/cursor-brain - a persistent memory layer for Cursor IDE: store developer conversations and coding decisions, then retrieve relevant context automatically via hybrid (semantic + lexical) search. Local-first, with no cloud dependency.

npm: @samhithgardas/cursor-brain


Looking for feedback on:

  1. Is the hybrid search (semantic + lexical) approach useful, or would pure semantic be enough?

  2. Any features you’d want to see? (e.g., memory expiration, workspace-scoped memories, export/import)

  3. How do you currently handle “remembering” context across Cursor sessions?

How cursor-brain compares to other memory solutions:

I looked at the existing options and here’s what cursor-brain does differently:

vs. HPKV Memory MCP Server (cloud-based)

| Feature | cursor-brain | HPKV Memory |
|---|---|---|
| Runs locally | Yes - fully offline | No - requires cloud account + API key |
| No account required | Yes | No - requires HPKV signup |
| Data stays on your machine | Yes - SQLite in `~/.cursor-brain/` | No - stored on HPKV servers |
| Semantic search | Yes - local Hugging Face embeddings | Not documented |

vs. mcp-knowledge-graph

| Feature | cursor-brain | mcp-knowledge-graph |
|---|---|---|
| Storage model | Flat memories with tags | Knowledge graph (entities, relations, observations) |
| Search type | Hybrid: semantic + lexical | Lexical/exact match only |
| Semantic embeddings | Yes - local Hugging Face (all-MiniLM-L6-v2) | No |
| Natural language queries | Strong - understands meaning | Relies on exact keywords |