Thanks to the Cursor Team for the Current Pricing Plan

For those of you talking about agent mode in other tools: I messed around with this a few weeks ago and found they all fell well short of Cursor's agent mode. I'm really interested to hear realistic thoughts about the maturity of other platforms' agent modes, because as I wrote, Cursor's was still a fair step ahead of the rest only a couple of weeks ago.


Just like someone else in this thread was saying, go with Claude Code - you won’t look back.

  1. I've never had any kind of network issues with it (as opposed to Cursor, where they happen very often, at least once every 10 prompts).
  2. I don't know whether it's just placebo or a real thing, but the quality of vanilla Claude Code's responses is just much superior to every other LLM I tried with Cursor, including Claude Sonnet 4 itself.

For those afraid of Claude Code’s CLI: don’t be.

  1. Anthropic has built the best CLI tool I've ever seen, and Claude Code's link tool (the @ you use to link files) is night-and-day better than Cursor's. When I create a file and want to link it in Cursor's chat, it doesn't seem to get indexed immediately; it's slow, and more often than not it's clumsy to link the file I'm looking for, depending on how it's named.
  2. You can use it inside Cursor or VS Code or Windsurf or any (?) IDE that offers a built-in terminal. My setup is about 95% the same as I had with vanilla Cursor.

With all due respect, Cursor's honeymoon is over. They gave us ahead-of-its-time UX and great pricing on access to LLMs. Now, to me, barely more than the UX remains. And honestly? After weeks of working with Claude Code, I don't think Cursor's UI/UX shines that much anymore.

The only reason I'm still paying for Cursor (at its $20 tier) is the tab functionality, which I can't find anywhere else (please let me know if VS Code has caught up).

TL;DR: I got productive with Cursor. I got twice as productive with Claude Code. My average cost on (current) Cursor vs. Claude is about the same. :slight_smile:

(Remember: Cursor is nothing but a wrapper. Anthropic, the company behind Claude, is the one building the LLM itself. They know better than Cursor's team how to make an LLM perform.)



Just cancelled. See ya later, Cursor. $200 to use Auto mode? No thank you.


What are you replacing Cursor with?

Webstorm + Claude code max


Hm, is WebStorm’s tab/autocomplete as good as Cursor’s?

Nah, Cursor's tab/autocomplete is better, but the main commodity we pay for is the agent chat, and since I can't get a reasonable model to do that, I decided to move on from Cursor.



I’ve noticed that Cursor is truly misleading customers. Since the beginning of the month, I haven’t made that many requests compared to usual, yet I’m still seeing this message. It’s really hard to accept the lack of transparency in their pricing.

Suggest an alternative for me too :smiley:

LoL

Although it's roughly 2.5-3x the $60 package



*$19.77 left from the previous period

It isn't based purely on the number of requests you make; it's based on tokens used, and on the requests Cursor's models make to the major LLMs. It's really simple once you understand what Cursor does when it magically writes code for you. Your requests are likely just much more complex overall than they were last month.
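To make the "tokens, not requests" point concrete, here's a back-of-the-envelope sketch. The per-million-token prices and token counts below are made-up illustrative numbers (not Cursor's or Anthropic's actual rates); the shape of the math is the point: one "request" with a big codebase context can cost tens of times more than another.

```python
# Why "number of requests" != cost: a single request's cost scales with tokens.
# Prices here are hypothetical, purely for illustration.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float = 3.0,
                 out_price_per_m: float = 15.0) -> float:
    """Cost in USD for one model call, priced per million tokens."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# A small prompt with little context attached:
small = request_cost(2_000, 500)        # ~$0.0135

# The "same" single request, but the agent pulled in a large codebase
# context and wrote a long diff back:
large = request_cost(150_000, 8_000)    # ~$0.57

print(f"small: ${small:.4f}, large: ${large:.4f}")
```

So two months with the same request count can land in very different places on the bill.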

I got curious: what's the scale of the projects you're working on with Cursor + AI, really? Is it something running in production, with users/paying customers, tests, et al., or just casual stuff at the fun starting phase?

Let me give the rationale behind the question upfront: the Cursor $20/$60 editions™ were great for kicking off a free-style vibe-coding project in an empty folder. They one-shotted lots of things for cheap. Money well spent.

On the other hand, the Cursor $60/$200™ editions were terrible value for the price (compared to Claude Code) on my serious codebases, which involve paying customers and production databases with thousands/millions of rows.

So sure, credit where it's due: Cursor ($20/$60) is great at kicking off pet projects in empty folders.

Please note that my only claim is that Cursor is not worth the money they ask for, compared to Claude Code, for any project past the honeymoon phase (aka production projects with any number of serious, paying customers who aren't just friends showing support). Except, I mean, for its tab/autocomplete mechanism. That's unmatched, and I'll continue paying for my $20 subscription for that functionality alone.


Heads up: I might be mixing up the dates since I’ve been working non-stop like crazy for the past couple of weeks. But everything I’m describing definitely happened within the last 2-6 weeks.


Lately, a good chunk of my time has gone into the AgentTools ecosystem. For the last three days, I’ve been grinding on Agent Enforcer. During the last billing period, I wrote a Docstrings tool that now needs a complete refucktoring because I decided to just screw compatibility with some industry standards. Plus, the code turned out kinda brittle, and Gemini can’t seem to fix a minor bug with specifying the docstring version.


Last week and the week before, I was also trying to whip up my own “CAT tool at home,” until I found out someone had already done it. So, I ended up switching to tweaking PDFMathTranslate-next for my own needs—to improve translation quality and cut down on API usage. I got the chunking tuned just right to max out my Pro+ plan. There’s just one bug left that no LLM can seem to spot, which is causing the chunking to be partially ignored.

At least I've learned how to write tests: the project has a deep, incremental, parameterized integration test for idempotency with 1796 test cases. I'm thinking about publishing that broken piece of code just to show off the test itself. Agent Docstrings has only 150 tests with ~89% coverage.
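Not the poster's actual suite, but for anyone curious what a parameterized idempotency test looks like in miniature: `normalize_docstring` below is a hypothetical stand-in for whatever transformation the tool applies, and the property checked is the usual one, f(f(x)) == f(x) for every case. In a real pytest suite you'd expand `CASES` into separate tests with `@pytest.mark.parametrize`.

```python
# Minimal sketch of a parameterized idempotency test.
# `normalize_docstring` is a hypothetical stand-in transformation.

def normalize_docstring(text: str) -> str:
    """Strip trailing whitespace and collapse runs of blank lines."""
    out, prev_blank = [], False
    for line in (l.rstrip() for l in text.splitlines()):
        if line == "" and prev_blank:
            continue  # drop repeated blank lines
        out.append(line)
        prev_blank = (line == "")
    return "\n".join(out)

# Each entry is one "parameter"; a real suite would use
# @pytest.mark.parametrize to turn these into separate test cases.
CASES = [
    "def f():\n    pass",
    "one   \n\n\n\ntwo",
    "",
]

def test_idempotent():
    for sample in CASES:
        once = normalize_docstring(sample)
        # Idempotency: applying the transformation twice == applying it once.
        assert normalize_docstring(once) == once

test_idempotent()
```

The incremental variant the poster describes would additionally re-feed each output as a new input case, which is where counts like 1796 come from.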

Oh, and by the way, I honestly thought this would be faster than just manually copy-pasting into Google AI Studio, so I even created the initial repo in my G:/Temp folder instead of G:/Github. Good thing the translation contract is pretty flexible.


Then there’s the closed-source commercial Android project (I really need to finish that thing…). Like an idiot, at the end of the last billing period and the start of this one, I was asking an LLM to port a piece of SciPy to Kotlin. It took my dumbass brain three whole weeks to finally think of just asking an Agent for an open-source Java library that already had the algorithm I needed.

So now I’m basically blowing off the client because I’m hyperfocused on Agent Tools and the CAT project, thanks to my passive ADHD. Good thing the Android app contract is pretty flexible.

Oh, and somewhere back in April-May-June, I was occasionally tinkering with my Artemonim’s Little Tools for myself. It all started with an FFMPEG script, if I recall correctly.


Luckily, I managed to crunch through the Python processing in Agent Enforcer and finally got PDFMathTranslate working… it launches, even though it mostly ignores my tweaks because of one tiny overlooked function…

Unfortunately, I ran out of included tokens, which is sad, so I'll have to offload the heavy tasks onto free tools like Gemini CLI for the next couple of weeks… or break out a bank card…

Unless someone decides to sponsor me :eyes:

I figure that if I wasn’t bogged down by debt from a past failed micro-business, and if I wasn’t trying to build these non-existent tools that are meant to make my pair-programming with AI more effective — not to mention wasting time properly publishing them for everyone and hanging out on forums — I could be making at least $60/day. I’d easily be able to afford Ultra and just chill and code with Gemini and Grok, occasionally unleashing Claude 4 on a bug hunt.

But here I am, still awake at 5 in the morning, all because I spent two and a half hours waiting for a document translation that should’ve taken… 20 minutes? …maybe even less?
If Auto hadn’t pointed out how a tiny mistake ended up dragging things out (or rather, not speeding them up), I might’ve actually been sitting here happy right now.

■■■■, I just realized the export format doesn’t work for me, so now I’ll have to merge my attempt in CAT with that open-source CAT…

LoL, this open-source tool also messes up the structure of the document I need — just in a different way. Maybe the wheel I was reinventing wasn’t that bad after all :thinking:

I apologize for going off-topic, but since I was asked…

When I exceed the quota, the Auto model obviously becomes much slower when calling MCP tools, or sometimes I can't get any result at all. I'm starting to doubt the actual capability of the Auto model.

I'm not gonna lie: Cursor is simply too greedy at this point in time, and I'm not sure I trust them going into the future.

Looking into alternatives. Claude Code sounds good.

I started using a plugin called kilocode.ai. I created my own indexes with Qdrant, customized it, and now use it. The price difference is incredible; I wonder how they can offer lower prices using the same APIs. They offer $25 free with the initial registration (a few steps are required). It looks like I won't be using Claude anymore.

Imagine a company that only raises its prices instead of improving its service quality. This is nothing but opportunism!