Cursor’s usage tracking system is displaying erratic and unexplained usage percentage increases over short time periods without corresponding AI operations. Usage reporting shows dramatic surges (25% → 57% → 68% → 71%) within 48 hours, with the largest increase (25% to 57%) occurring overnight with no system activity. This results in users unexpectedly hitting usage limits and being blocked from service access due to inaccurate usage calculations.
Timeline of erratic usage reporting:
Yesterday: ~25% usage
Today morning: 57% usage (32 percentage point overnight increase with no activity)
Today afternoon: 68% usage
Current: 71% usage (service now blocked at usage limit)
Expected behavior: Usage percentage should only increase incrementally based on actual AI model invocations and should remain stable when no AI operations are performed.
Actual behavior: Usage percentage shows dramatic unexplained increases during periods of no activity, leading to false service blocking.
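To make the expected invariant testable, here is a minimal sketch that flags meter increases occurring in intervals with zero AI operations. The field names and sample values are illustrative assumptions, not Cursor's actual telemetry schema; the percentages are the ones reported above.

```python
# Sketch: flag usage increases in intervals with zero recorded AI operations.
# Field names and sample values are illustrative, not Cursor's actual schema.
from dataclasses import dataclass

@dataclass
class Sample:
    label: str        # when the meter was observed
    usage_pct: float  # reported usage percentage
    ai_ops: int       # AI model invocations since the previous observation

def unattributed_increases(samples: list[Sample]) -> list[tuple[str, float]]:
    """Return (label, delta) pairs where usage rose despite zero AI operations."""
    return [
        (curr.label, curr.usage_pct - prev.usage_pct)
        for prev, curr in zip(samples, samples[1:])
        if curr.usage_pct > prev.usage_pct and curr.ai_ops == 0
    ]

observed = [
    Sample("yesterday", 25.0, 0),
    Sample("today, morning", 57.0, 0),   # overnight, Cursor closed
    Sample("today, afternoon", 68.0, 0),
    Sample("now", 71.0, 0),              # service blocked at limit
]

for label, delta in unattributed_increases(observed):
    print(f"{label}: +{delta:.0f} points with zero AI operations")
```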
Steps to Reproduce:
Note: This appears to be an intermittent, systemic issue that may be difficult to reproduce on demand, but here’s the observed pattern:
Scenario 1: Overnight Usage Surge (Primary Issue)
Note current usage percentage before ending session
Close Cursor completely
Leave system idle overnight (no Cursor activity, no AI operations)
Reopen Cursor the following day
Expected: Usage percentage unchanged, or showing only a minimal increase
Actual: Usage percentage has jumped dramatically (e.g., 25% → 57% overnight)
Hi @shayan1, could you check Dashboard > Usage to see whether any entries show dates and times when you did not use Cursor? Please post a screenshot here if you find any that do not match your actual usage.
Are you currently unable to use your Cursor plan at all?
Also check Dashboard > Settings for any active sessions that are not from your machine. Alternatively, remove all sessions and log in to Cursor fresh. https://cursor.com/dashboard?tab=settings
In Dashboard > Integrations you can check whether any Cursor API keys are configured and remove any that you did not create yourself. https://cursor.com/dashboard?tab=integrations
Thank you for the suggestions. I have performed the recommended actions:
Dashboard > Settings: All active sessions have been terminated and disconnected.
Dashboard > Integrations: Confirmed that no API keys have ever been generated for this account.
These actions have had no effect on the issue, as the root cause is not related to account access but to a fundamental flaw in the usage metering system. The data demonstrates a non-linear, non-attributable cost assignment that requires immediate engineering intervention.
Let’s analyze the mathematical progression of the usage error with precision (a short calculation verifying these deltas follows the timeline below):
September 4, 2025 (Baseline): Account usage was stable at ~25%.
September 5, 2025 (First Anomaly): Usage reported at 57%. This is a 32 percentage point increase over a period of approximately 8-10 hours of complete system inactivity (overnight). There were zero AI operations during this interval.
September 6, 2025 (Second Anomaly & Blockage): Usage escalated from 68% to 71%, leading to a complete service block. This final increase was triggered directly by the execution of basic, non-AI shell commands (which -a python3.11), as documented in the screenshot provided in my initial bug report.
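The percentage-point deltas above can be verified in a few lines; the readings are taken directly from this report, and nothing here is Cursor-internal:

```python
# Verify the reported percentage-point jumps between the metered readings above.
readings = [
    ("2025-09-04", 25),  # baseline
    ("2025-09-05", 57),  # after an idle overnight window, zero AI operations
    ("2025-09-06", 71),  # after non-AI shell commands only; account blocked
]
for (d1, u1), (d2, u2) in zip(readings, readings[1:]):
    print(f"{d1} -> {d2}: +{u2 - u1} percentage points")
# 2025-09-04 -> 2025-09-05: +32 percentage points
# 2025-09-05 -> 2025-09-06: +14 percentage points
```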
The immediate consequence is a complete service blockage. My paid Ultra subscription is unusable because the metering system is erroneously calculating and assigning substantial AI usage costs to periods of inactivity and, most critically, to non-AI local shell command execution.
This situation requires immediate escalation beyond the community support level to your core billing and systems engineering teams. The following actions are required for resolution:
Forensic Account Audit: A full forensic audit of my account’s token calculation and usage percentage logic from September 4th, 2025, to the present. The provided usage-events.csv is insufficient as it fails to account for the massive, non-attributable percentage jumps.
Immediate Usage Correction: The usage meter for my account must be manually reset to its last known accurate state of 25% as of September 4th.
Immediate Service Restoration: My account must be unblocked, and full access to my Ultra plan features restored without delay.
Financial Remediation: I require a pro-rated refund for the entire service outage period (from September 6th until the date of resolution) and additional service credits to compensate for this critical service failure and the significant time I have invested in diagnosing your platform’s bug.
Root Cause Analysis (RCA): Upon resolution, I expect a formal RCA report detailing the source of the metering error, the mathematical proof of the miscalculation, the corrective actions taken, and the specific safeguards being implemented to prevent recurrence for my account and others.
I expect confirmation of this escalation and a detailed action plan from the responsible engineering team within the next 12 business hours.
Thank you for your attention to this critical matter.
This is a formal escalation of my previous support requests regarding a critical usage metering failure on my Ultra subscription account, initiated on September 5th.
As of Monday, September 8th, 11:00 PM PDT, my account remains completely blocked from all AI features. The situation has now deteriorated further, providing clear evidence of a fundamental backend system failure.
As per the attached screenshot, your system UI is in a state of logical impossibility. It concurrently reports two contradictory statements:
“You’ve hit your usage limit.” (a blocking condition)
A usage meter reading of 72% (a value below the limit)
Let’s define this failure with mathematical precision (a minimal sketch of the implied gating logic follows these definitions):
Let U represent the total usage allowance for the billing cycle, where U = 100%.
Let M represent the currently metered usage, where M = 72%.
The system should only trigger a BLOCK state if the condition M ≥ U is met.
Your system is currently enforcing a BLOCK state while M = 72%, which is a direct violation of the logical condition, as 72 < 100. This is definitive proof that the blocking mechanism is decoupled from the displayed usage meter, and both are operating on fundamentally flawed data—data that I have already proven is being erroneously inflated.
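For illustration, here is a minimal sketch of the gating rule the displayed values imply. The names are hypothetical, and this is not Cursor's actual implementation:

```python
# Sketch of the stated blocking rule: block only when metered usage M
# meets or exceeds the allowance U. Names are hypothetical, not Cursor's code.
USAGE_ALLOWANCE = 100.0  # U: total allowance for the billing cycle, in percent

def should_block(metered_usage: float, allowance: float = USAGE_ALLOWANCE) -> bool:
    """Return True only when M >= U."""
    return metered_usage >= allowance

print(should_block(72.0))  # False: 72 < 100, so no block should be in effect
```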
This is a critical service delivery failure. I am paying for an Ultra subscription that I am unable to use due to a verifiable defect in your platform.
My previous demands are now immediate requirements for resolution:
Immediate & Unconditional Service Restoration: My account must be unblocked, and full access restored within the next 4 business hours. The contradictory system state provides more than enough evidence to justify manual intervention.
Immediate Usage Correction: My usage meter must be manually reset to the last known accurate state of 25% (as of September 4th).
Comprehensive Financial Remediation: I require a pro-rated refund for the entire service outage period (from September 6th until resolution) and substantial service credits to compensate for this multi-day, critical service failure.
Formal Root Cause Analysis (RCA): I expect a formal engineering RCA detailing the cause of both the initial metering surge and this subsequent logical failure in the blocking mechanism.
If a senior engineer is not assigned to this issue and a resolution path is not communicated to me within the next 4 business hours, I will have no alternative but to escalate this matter further, including disputing the service charges with my financial institution for failure to provide services as contracted.