Feature request for product/service
Cursor IDE
Describe the request
Feature Request: Context Window Inspector & Agent Usage Profiler
Summary: Expose a per-request breakdown of how the context window is consumed — by rules, tools, referenced files, and user input — so users can diagnose degraded responses, optimize their setups, and manage the rapidly expanding plugin ecosystem. Extend this into a session-level agent usage profiler for deeper workflow analysis.
The Problem
With plugins launching in Cursor just this week — and the Claude plugin ecosystem already growing rapidly — the next few months will bring a surge of new tools, skills, and automatic context injections. Every one of them competes for the same finite resource: the context window.
Today, this competition is invisible. When responses degrade — truncated output, shallow reasoning, missed context — users have no way to determine whether the cause is model limitations, context exhaustion, or a single bloated plugin consuming a disproportionate share of the budget. The remediation is guesswork: disable rules one at a time, shorten prompts, switch models, and hope.
This is the equivalent of debugging a memory leak without a profiler.
The plugin ecosystem will only make this worse. As authors ship increasingly capable tools, the aggregate context cost will climb — and without visibility, users won’t know which tools are worth their token cost and which are quietly degrading everything else.
Proposed Solution
Per-Request Context Inspector
A collapsible panel (or command palette view) showing how tokens were allocated for each request. The categories should be user-actionable — focused on things users can control — rather than exposing internal system architecture:
| Category | What it reveals |
|---|---|
| Rules (expandable) | Token cost per rule — .cursorrules, workspace rules, project rules — so users can identify which rules are expensive and worth keeping |
| Tools & Plugins (expandable) | Token cost per tool/plugin schema, ranked by consumption — critical as the plugin ecosystem grows |
| Auto-included context | Codebase indexing, auto-referenced files, and other implicit inclusions |
| Referenced files | Files the user explicitly tagged with @ |
| Conversation history | Prior turns carried forward — often a silent, major consumer in long sessions |
| Your prompt | The current user message |
| System overhead | A single consolidated number for Cursor’s internal instructions — transparent about the cost without exposing implementation details |
| Remaining output budget | Tokens available for the model’s response |
The key design choice: rules and tools are itemized (because users need to know which ones are expensive), while system internals are consolidated (because that detail isn’t actionable and raises IP concerns).
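To make that design choice concrete, here is a minimal sketch of what a per-request breakdown could look like as a data structure. Everything here is hypothetical (the `ContextBreakdown` name, field names, and token counts are invented, not Cursor internals); it only illustrates the itemized-versus-consolidated split described above:

```python
from dataclasses import dataclass

# Hypothetical data model for a per-request breakdown. Category names
# mirror the table above; all field names and token counts are invented.

@dataclass
class ContextBreakdown:
    rules: dict[str, int]             # itemized: token cost per rule
    tools: dict[str, int]             # itemized: token cost per tool/plugin schema
    auto_included: int                # indexing and auto-referenced files
    referenced_files: dict[str, int]  # files the user tagged with @
    history: int                      # prior turns carried forward
    prompt: int                       # current user message
    system_overhead: int              # one consolidated number, by design
    window_size: int                  # total context window

    def used(self) -> int:
        return (sum(self.rules.values()) + sum(self.tools.values())
                + self.auto_included + sum(self.referenced_files.values())
                + self.history + self.prompt + self.system_overhead)

    def remaining_output_budget(self) -> int:
        return self.window_size - self.used()

    def most_expensive(self) -> list[tuple[str, int]]:
        # Rank only the user-controllable items (rules and tools) by cost.
        return sorted({**self.rules, **self.tools}.items(),
                      key=lambda kv: kv[1], reverse=True)

# Invented example values for illustration:
breakdown = ContextBreakdown(
    rules={".cursorrules": 1200, "project-style": 800},
    tools={"browser": 3500, "linter": 600},
    auto_included=2400, referenced_files={"main.py": 1800},
    history=9000, prompt=400, system_overhead=2100, window_size=128_000,
)
```

Note that system overhead is a single integer while rules and tools are maps: the shape of the data itself enforces the transparency boundary.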
Session-Level Agent Usage Profiler
The per-request view solves the immediate diagnostic need, but agentic workflows span many requests. A session-level profiler would let users explore context consumption patterns across an entire agent session:
- Cumulative token spend across a multi-step agent run, broken down by category
- Per-step drill-down showing how context allocation shifted as the session progressed
- Tool frequency and cost analysis — which tools were invoked most, and what was their aggregate context footprint
- Context pressure warnings — flag moments where the output budget dropped below a threshold, correlating with likely response quality degradation
This transforms context management from a per-request guessing game into a proper performance optimization workflow.
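The capabilities above can be sketched as a small aggregation pass over per-step data. This is a hypothetical illustration, assuming each step reports its category-level token counts and the tools it invoked; the function name, step format, and 10% pressure threshold are all invented defaults, not Cursor values:

```python
from collections import Counter

# Hypothetical session profiler: cumulative spend per category,
# tool invocation frequency, and context pressure warnings.

def profile_session(steps, window_size, pressure_ratio=0.10):
    cumulative = Counter()   # total token spend per category
    tool_calls = Counter()   # invocation count per tool
    warnings = []            # (step index, remaining budget) under pressure
    for i, step in enumerate(steps):
        cumulative.update(step["categories"])
        tool_calls.update(step.get("tools_invoked", []))
        remaining = window_size - sum(step["categories"].values())
        if remaining < window_size * pressure_ratio:
            warnings.append((i, remaining))  # likely quality degradation here
    return cumulative, tool_calls, warnings
```

A UI could surface `warnings` inline in the session timeline, letting users correlate a weak response with the exact step where the output budget collapsed.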
Why Now
The plugin ecosystem is about to explode. Plugins launched in Cursor days ago. Within months, users will be running stacks of community-built tools with no way to assess their context cost. The earlier this visibility exists, the more it shapes the ecosystem toward efficiency — retrofitting discipline after bloat sets in is much harder.
Cursor already optimizes for token efficiency. This feature extends an existing strength into a user-facing advantage. Competitors that treat the context window as a black box force their users into blind trial-and-error. It is the difference between Chrome shipping DevTools and a competing browser that didn't: the one with better developer tooling wins the developer audience.
Context is the scarce resource of AI-assisted development. Developers already manage memory, CPU, and network budgets with mature profiling tools. Context windows are the newest constrained resource, but the only one with zero observability. Closing that gap is a natural evolution.
Implementation Tiers
- Tier 1 (MVP): Total tokens used vs. available, with a proportional split across your prompt, tools and rules (combined), file context, conversation history, and everything else. Even this coarse view would be immediately useful.
- Tier 2: Itemized breakdown within tools and rules — rank them by token cost so users can identify the expensive ones. Add a collapsible panel or hover tooltip per request.
- Tier 3: Session-level profiler — cumulative tracking, per-step drill-down, context pressure warnings, and exportable reports for team-level workflow optimization.
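As a sketch of how coarse the Tier 1 view could be while still being useful, a single formatted line per request would suffice. The function name, category labels, and token counts here are hypothetical:

```python
# Hypothetical Tier 1 rendering: one coarse line of percentages per request.

def coarse_view(split: dict[str, int], window_size: int) -> str:
    used = sum(split.values())
    parts = " | ".join(f"{name} {100 * tokens // used}%"
                       for name, tokens in split.items())
    return f"{used:,}/{window_size:,} tokens used: {parts}"
```

For example, `coarse_view({"prompt": 500, "tools & rules": 4500, "file context": 2000, "history": 3000}, 128_000)` yields a status-bar-sized summary, which is all Tier 1 requires.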
A Note on Transparency vs. IP
There’s a reasonable concern that exposing context allocation in detail could allow competitors to reverse-engineer Cursor’s prompt assembly pipeline. The category design above is intentional about this: user-controllable elements are itemized, system internals are opaque. A single “System overhead” line communicates the cost honestly without revealing the structure. This mirrors how Chrome DevTools exposes page-level performance without revealing the browser’s internal rendering pipeline.
Precedent
- Chrome DevTools breaks down page load by resource type, letting developers identify what’s consuming bandwidth — and became a defining competitive advantage over browsers that lacked equivalent tooling.
- Webpack Bundle Analyzer visualizes dependency-level size contributions, driving an ecosystem-wide shift toward smaller packages.
- OpenAI’s token usage API returns prompt and completion counts — but without categorical granularity. That missing granularity is exactly the gap this feature fills.
Cursor controls the full context assembly pipeline. No one else is better positioned to provide this granularity.
The plugin ecosystem is going to grow fast. Without visibility into context consumption, that growth will bring invisible bloat, unpredictable response quality, and frustrated users who can’t diagnose why. A context inspector — starting simple and expanding into a full agent profiler — gives users the observability to build efficient, predictable workflows, and gives Cursor a durable competitive advantage rooted in transparency.