Hi everyone,
I’ve been using Cursor Pro daily for AI-agent development (building and testing autonomous coding agents).
Typically, I let the agent run in “auto” mode and generate or refactor around 30–40K lines of code per day.
Here’s what I’ve noticed:
- At the beginning of the day, Cursor feels super sharp — smart code edits, perfect refactors, minimal bugs.
- After a few hours, it suddenly becomes dull: shorter, less coherent responses, frequent logic mistakes, even missing context entirely.
It really feels like the model silently downgrades after some quota or token cap is reached.
My questions:
- Does the Pro plan have a hidden daily or monthly token limit for GPT-4 / Claude usage?
- What’s the real difference between Pro and Pro+? The website just says “higher usage,” but doesn’t specify how much more.
- For heavy AI-agent workflows (tens of thousands of lines per day), is Pro+ enough — or do we need to bring our own API keys to stay on high-end models all day?
- Is there any way to verify when Cursor switches models (e.g., to Claude Instant or GPT-3.5)?
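On that last point, the only approach I've found so far: if you route requests through your own API keys, both the OpenAI and Anthropic APIs report the actual model name in every response, so you can log it and scan for silent switches. A minimal sketch — the log file format and field names here are just an assumption for illustration, not anything Cursor provides:

```python
import json

def find_model_switches(log_lines):
    """Given an iterable of JSON log lines, each with 'timestamp' and
    'model' fields, return (timestamp, old_model, new_model) tuples
    for every point where the reported model changes."""
    switches = []
    prev = None
    for line in log_lines:
        entry = json.loads(line)
        model = entry["model"]
        if prev is not None and model != prev:
            switches.append((entry["timestamp"], prev, model))
        prev = model
    return switches

# Fabricated example entries, purely for illustration:
log = [
    '{"timestamp": "09:00", "model": "claude-3-5-sonnet"}',
    '{"timestamp": "11:30", "model": "claude-3-5-sonnet"}',
    '{"timestamp": "13:05", "model": "claude-instant"}',
]
print(find_model_switches(log))
# → [('13:05', 'claude-3-5-sonnet', 'claude-instant')]
```

This obviously only works with your own keys — inside Cursor's managed plans I haven't found any way to see which model actually served a request, which is really the question.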
Context:
I’m an AI workflow developer working on agentic systems (multi-file refactoring, test generation, autonomous loops).
It’s fine if usage is limited — I just want transparent info so I can plan whether to upgrade or connect my own API keys.
Thanks! I’d love to hear from the Cursor team or other heavy users who’ve hit similar limits.
— Keven