Ah yes, the “everyone who disagrees with me must be a paid shill” argument
I work on a monolithic codebase of around 500k lines of code, and never touch the expensive Sonnet models. At least not since the Sonnet 3.5 days. Now it’s GPT 4.1 all day every day, and not once have I gone over my $20/month.
If you prioritize leveling up your skills in prompting and context management, then Cursor can be very cheap to use.
Obviously, some models are better at some things than others. You can get by with a cheaper model for simpler work, but try something too complex with one and you end up spending more. So it’s a gamble based on incomplete information, since you can’t know for sure ahead of time which models will handle the task efficiently. I’ve tried models other than Sonnet 4.5 for complex work; they fail, and then fixing or reverting the mess costs tokens too.
Felt this several months ago. I used to pay around $60 a month for Cursor and it was perfect. Then the 10-100x price increase came, and now I’ve been paying $0 because it’s just not worth it anymore.
After using Cursor AI’s 200 Ultra Plan for an extended period, I’ve gathered some thoughts to share.
Overall, my experience with the Opus 4.5 Max model on Cursor has been outstanding—the performance and efficiency made it my go-to choice. However, I quickly discovered that the 200 Ultra plan’s monthly quota could be exhausted in just 10 days of regular, intensive use. This limited usage window makes it difficult to rely solely on the plan for an entire month unless I carefully manage my prompts and workload.
Now I’m weighing whether to stick with Cursor’s pay-as-you-go option, migrate to Claude Code, or explore alternatives like a Cline setup in VSCode that leverages similar models. Cost control is becoming a major factor, especially since I hit the 200 Ultra plan’s ceiling so quickly.