Feature request for product/service
Cursor IDE
Describe the request
Here’s What Matters.
There’s a quiet irony baked into every Cursor session: the moment you ask Cursor’s agent to help you build something in Cursor — a custom command, a rules file, a new agent, an automation — it’s working from stale training data about the very tool it lives inside. Thanks to MCPs, it can know Git, Neon, Svelte, Prisma, and the rest of your stack intimately, and stay current on all of them. But Cursor itself? No such luck. So it scaffolds your rules file with an old schema, describes a workflow that’s been superseded, or builds you an elaborate workaround for something that shipped as a native feature three versions ago. And when Cursor updates, the IDE gives no feedback about outdated rules files — it just quietly lets your agents continue doing things the slow way, the hard way, or the outright wrong way. The output looks completely reasonable. It might even work. You just have no idea it could be cleaner, more idiomatic, or entirely unnecessary — because a better way existed and the agent had no way to know it.
That’s the gap. And I think we should close it.
The Problem: Cursor Ships Fast. Agents Don’t.
Cursor’s pace of development in the last year has been remarkable. Background Agents, BugBot, Memories, one-click MCP setup, Automations, MCP Apps, JetBrains ACP support — these aren’t minor tweaks, they’re whole new paradigms for how you interact with the IDE. The changelog is genuinely exciting to read.
But here’s what happens in practice:
- A user asks their agent “how do I set up a background agent for this task?” — and gets instructions for a workflow that’s been superseded.
- Someone asks “can Cursor remember context across sessions?” — and gets told “no”, because Memories wasn’t in the training window.
- A developer asks about configuring MCP servers per-project — the agent describes an old approach that predates the current settings UI.
None of this is the agent’s fault. It’s a knowledge freshness problem. The agent has no mechanism to say “let me check what Cursor actually supports right now.” Every other tool in our stack can be kept current via MCP. Cursor — the host of all those MCPs — cannot.
What Doesn’t Exist Yet
I searched the forum and the broader MCP ecosystem before writing this. There are some adjacent ideas floating around:
- A request for an MCP to interact with the Cursor forum (file bug reports, check discussions) — great idea, but it’s about the community layer, not the product itself.
- Requests for better Cursor MCP documentation — asking Cursor to document its own MCP spec support more clearly.
- Various requests around MCP stability, dynamic tool updates, per-agent configs.
But nobody has proposed — and nobody has built — an MCP whose explicit purpose is to expose Cursor’s own features, settings, capabilities, and changelog to AI agents at query time. That’s the gap: a `cursor-self` MCP (or whatever you’d name it).
The Proposal: A cursor-self MCP
The concept is straightforward: an MCP server that gives any agent running inside (or alongside) Cursor the ability to ask “what can Cursor do right now?” — and get a real, current answer.
Here’s what a useful initial toolset could look like:
get_cursor_changelog
Fetches recent entries from cursor.com/changelog. Lets the agent ground its answers about features in what actually shipped, not what was true at training time.
“What’s new in Cursor this month?” → agent fetches live changelog, summarises it accurately.
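To make this concrete, here is a minimal stdlib-Python sketch of what the changelog tool’s handler could look like. It assumes (unverified) that each release entry on cursor.com/changelog is titled by an `<h2>` heading; a real server would fetch the page with `urllib.request` and register this function as an MCP tool:

```python
from html.parser import HTMLParser


class ChangelogHeadings(HTMLParser):
    """Collect the text of <h2> headings, assumed to title each release entry."""

    def __init__(self):
        super().__init__()
        self.headings = []
        self._in_h2 = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        if self._in_h2 and data.strip():
            self.headings.append(data.strip())


def get_cursor_changelog(html: str, limit: int = 5) -> list[str]:
    """Tool handler: return the most recent release headings from the changelog page."""
    parser = ChangelogHeadings()
    parser.feed(html)
    return parser.headings[:limit]
```

The grounding step is what matters: the agent summarises what the live page says, not what its training data remembers.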
search_cursor_docs
Semantic or keyword search over the official Cursor documentation. Essential for agents helping users configure things — especially MCP setup, rules, model selection, and keybindings, which change with versions.
“How do I set a default model for a project?” → agent queries docs, returns current answer.
get_cursor_feature_status
A structured lookup of current Cursor capabilities: what’s GA, what’s beta, what’s behind a plan tier, what’s been deprecated. This could be maintained as a simple structured file in the repo and updated on each release.
“Is Background Agents available on the free plan?” → agent checks feature matrix, gives accurate answer.
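A sketch of how that lookup could work. The feature matrix below is entirely invented for illustration (the statuses and plan names are not Cursor’s actual tiers); the point is that a small structured file answers the question deterministically:

```python
# Hypothetical feature matrix, as a cursor-features.json published per release might look.
FEATURE_MATRIX = {
    "background-agents": {"status": "ga", "plans": ["pro", "ultra"]},
    "memories":          {"status": "beta", "plans": ["free", "pro", "ultra"]},
}


def get_cursor_feature_status(feature: str, plan: str) -> str:
    """Tool handler: answer 'is <feature> available on <plan>?' from the matrix."""
    entry = FEATURE_MATRIX.get(feature)
    if entry is None:
        return f"unknown feature: {feature}"
    available = plan in entry["plans"]
    availability = "available" if available else "not available"
    return f"{feature} is {entry['status']}; {availability} on the {plan} plan"
```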
get_cursor_settings_schema
Exposes the current settings/configuration options in a queryable format — what keys exist, what they accept, what they do. Particularly useful for agents helping with .cursor/ config, mcp.json, and rules files.
“What are all the valid options for the rules file?” → agent queries schema, responds accurately.
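As a hedged sketch, this tool could be a simple prefix query over a published schema file. The keys below are invented stand-ins, not Cursor’s real settings schema:

```python
# Invented schema fragment; real keys and types would come from Cursor's published docs.
SETTINGS_SCHEMA = {
    "rules.alwaysApply": {"type": "boolean", "doc": "Apply this rule to every request."},
    "rules.globs":       {"type": "string[]", "doc": "File patterns the rule attaches to."},
}


def get_cursor_settings_schema(prefix: str = "") -> dict:
    """Tool handler: return all known settings keys under a prefix, with types and docs."""
    return {key: spec for key, spec in SETTINGS_SCHEMA.items() if key.startswith(prefix)}
```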
search_cursor_forum
Read-only search over the community forum. Not just for filing reports — for finding workarounds, confirmed bugs, community-discovered tips, and “has anyone else hit this?” queries.
“Is anyone else seeing MCP connectivity drop after sleep?” → agent searches forum, surfaces relevant threads.
get_cursor_version_info
Returns the currently installed Cursor version (via local environment) alongside release notes for that version. Bridges the gap between “what version am I on” and “what changed in this version.”
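A rough sketch that stubs out the environment lookup, since how the installed version would be read locally is an open question here; the version numbers and notes are invented for illustration:

```python
# Hypothetical release notes keyed by version. In a real tool, installed_version
# would come from the local environment rather than being passed in.
RELEASE_NOTES = {
    "1.7": "Adds Memories.",
    "1.8": "Adds Automations and MCP Apps.",
}


def get_cursor_version_info(installed_version: str) -> dict:
    """Tool handler: pair the installed version with the notes for that release."""
    return {
        "version": installed_version,
        "notes": RELEASE_NOTES.get(installed_version, "no notes found for this version"),
    }
```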
The Sharpest Edge Case: Asking Cursor to Build Cursor Things
This is where the problem gets almost funny.
Consider what happens when you ask Cursor’s agent to help you create a Cursor command, write a Cursor rule, scaffold a new agent, or configure an automation. You’re using the AI coding assistant to build things for the AI coding assistant. And in doing so, you’re relying on that agent’s knowledge of Cursor’s own APIs, config formats, available primitives, and conventions — all of which may have changed since the model was trained.
The agent confidently writes you a .cursor/rules file using a schema that’s a version old. It tells you to configure your background agent a certain way without knowing that the entire Automations system now exists. It scaffolds an MCP config using patterns that predate one-click setup. It doesn’t know about a new agent instruction keyword that would make your whole approach cleaner.
You have no way of knowing any of this is happening. The output looks completely reasonable. It’ll probably even work — just not as well as it could, and not using the patterns Cursor actually recommends today.
This is arguably the most painful version of the problem, because it’s self-referential. The tool you’re using to extend and configure Cursor is itself blind to what Cursor currently is. You’d never tolerate an agent helping you write React code with no access to current React docs. We shouldn’t tolerate it here either.
A cursor-self MCP would mean that when you ask the agent to help you build something in Cursor, it can first ask “what does Cursor actually support right now?” — and build accordingly.
The Next Level: Proactive Feedback, Not Just Lookup
Everything described so far is reactive — the agent queries the MCP when it thinks it needs current information. But the more powerful version of this idea is proactive: the MCP surfaces better approaches without being explicitly asked.
Imagine you’re working with an agent to solve a problem — say, running a recurring task across your repo — and you’ve described an approach that involves a manual script and a cron job. The cursor-self MCP, aware of what Cursor currently ships, could interject:
“Note: Cursor’s Automations feature now supports exactly this pattern with schedule-based triggers and cloud execution. You might not need a custom script here.”
Or you’re setting up an elaborate rules file to give the agent persistent context about your project — and the MCP flags:
“Cursor’s Memories feature handles this natively now. Your rules file approach will work, but here’s the current recommended pattern.”
Or an agent proposes a multi-step manual workflow, and the MCP notes that a background agent with the right MCP configuration could handle this autonomously.
This shifts the cursor-self MCP from a reference tool into something closer to a best-practices advisor — one that’s always current, always watching, and can tell you not just what Cursor can do but whether what you’re doing right now is the best way to do it in today’s Cursor.
This would require a new tool concept in the toolset above:
audit_cursor_approach (proactive)
Given a description of what the user is trying to accomplish and the approach they’re taking, checks against current Cursor capabilities and flags if a better-supported, more idiomatic, or more powerful approach exists today.
“I’m trying to run nightly code analysis across my repo” → agent checks current Cursor capabilities → “Automations + BugBot handles this natively, here’s how.”
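A deliberately naive sketch of the matching logic. A real implementation would want embeddings or an LLM pass rather than keyword overlap, and the capability notes below are illustrative:

```python
# Hypothetical capability notes, keyed by words that might appear in the user's
# stated approach. Real entries would be generated from the changelog and docs.
CAPABILITIES = [
    ({"cron", "schedule", "nightly", "recurring"},
     "Cursor's Automations feature supports schedule-based triggers with cloud execution."),
    ({"persistent", "context", "remember"},
     "Cursor's Memories feature keeps context across sessions natively."),
]


def audit_cursor_approach(description: str) -> list[str]:
    """Tool handler: flag current features that may supersede the described approach."""
    words = set(description.lower().split())
    return [note for keywords, note in CAPABILITIES if keywords & words]
```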
This is the difference between a static documentation lookup and a genuinely intelligent, self-aware development environment.
Why This Should Be an Official MCP (or at Least Blessed by Cursor)
An unofficial scraper could be built by anyone in the community tomorrow — and I’d argue we should start there. But the right long-term answer is for Cursor to either:
- Publish and maintain an official `cursor-self` MCP — the team already understands MCP deeply, they already produce the changelog and docs, and they could expose a structured API with relatively low effort.
- Publish structured data feeds (an `llms.txt`, a versioned JSON feature manifest) that community MCPs can reliably consume — so the community can build it and keep it accurate.
The llms.txt standard is already gaining traction for exactly this kind of use case — websites providing a structured, LLM-friendly index of their content. Cursor is in a uniquely good position to do this better than almost anyone, because their entire user base is already using AI agents.
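For illustration, a Cursor `llms.txt` following the llmstxt.org shape could look like this; the doc paths are hypothetical, apart from the changelog URL already cited above:

```text
# Cursor

> Cursor is an AI code editor. This file indexes LLM-friendly versions of the official docs.

## Docs

- [Changelog](https://cursor.com/changelog): what shipped in each release
- [Rules](https://docs.cursor.com/rules.md): rules file schema and examples
- [MCP](https://docs.cursor.com/mcp.md): configuring MCP servers per project

## Optional

- [Forum](https://forum.cursor.com): community discussions and workarounds
```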
The Broader Point
There’s something philosophically off about a world where your agent can query a Postgres database in another timezone, create a Linear ticket, and deploy to Heroku — but can’t reliably answer “does Cursor support X yet?” about the environment it’s literally running inside.
MCP was designed to solve exactly this class of problem: giving agents timely, accurate, structured access to information they’d otherwise have to guess at. We’ve applied that everywhere except the IDE itself.
Let’s fix the blind spot.
Call to Action
A few things I’d love to see from this thread:
- Upvote if you’ve been burned by stale Cursor knowledge from your agent. Let’s see how widespread this actually is.
- Drop your use case below — especially if you’ve asked the agent to help you build something in Cursor and gotten outdated output. A wrong `.cursor/rules` schema, a superseded config pattern, a missing feature it didn’t know about — share it.
- Has an agent ever talked you into a complex workaround for something Cursor now does natively? That’s the proactive feedback case in the wild. I’d love to collect examples.
- If you want to collaborate on a community build, reply here. A basic MCP that scrapes the changelog and docs is a weekend project. The proactive `audit_cursor_approach` tool is more ambitious but very achievable with the right contributors.
- Cursor team — is there appetite for an official structured data feed or MCP? Even a versioned `cursor-features.json` published alongside each release would be a meaningful first step. An `llms.txt` on the docs site would be another.
The MCP ecosystem around Cursor is genuinely impressive. Let’s make sure it includes Cursor itself.
Operating System (if it applies)
Windows 10/11