Cursor AI Experience: Auto Mode, Model Reliability, and Pricing Transparency

:rocket: Speed and Reliability with Cursor

I’ve been using Cursor extensively and have successfully built about four full React JS applications with Supabase. The development process has been incredibly fast, with minimal manual coding; it’s mostly “vibe coding.” This has significantly sped up my production and daily work.

The AI models have proven to be very reliable in making code changes, especially now that I’ve optimized my specific commands and learned to control the AI agents using checks and user-input steps. The only occasional issue is an unexpected, complete clearing of a local database in Docker, despite explicit instructions not to reset it (although I’m getting better at catching it before it does a reset!).

:brain: The “Auto Mode” Intelligence Problem

I primarily use Auto mode, and while it works exceptionally well most of the time, I’ve noticed a major issue: Auto mode seems to get “dumber” over time. It eventually reaches a point of such poor performance that it becomes an absolute nightmare to work with.

The Model Transparency Fix

I was quickly hitting the limits on the $20/month plan (and being silently downgraded to dumber AI models), so I upgraded to the Pro plan. I also implemented a crucial rule:

  • Auto mode must always confirm what model it is using and ask for user acceptance.

The reason for this is that the model can be downgraded invisibly to the user. Auto mode feels like a “black box” where performance is inconsistent, and I believe this is entirely tied to the underlying models it selects because I can definitely see the difference between the models in how they respond and work. By forcing the confirmation, I can now stop or abandon a complex operation if it attempts to use a dumb model I know will cause issues.

:sparkles: The Claude Sonnet 4.5 Upgrade

Since upgrading to Pro today, I’ve noticed a massive improvement: every single chat (planning, implementation, bug fixes) has used Claude Sonnet 4.5.

The performance has been absolutely fantastic: essentially flawless apart from a couple of minor bugs. I plan to monitor this closely. I’m currently at $20 in usage today, and I anticipate that once I hit the $60 usage range, I might see a switch back to a “dumb model” to manage costs. At least now I’ll be aware of it and can plan accordingly!

:magnifying_glass_tilted_left: Opaque Pricing and Next Steps

The platform’s pricing is extremely opaque and hard to understand, especially in relation to the model usage limits. I’m carefully monitoring my usage in the hopes of revealing what those usage limits actually are.

I realize I may eventually need to stop using Auto mode and switch to explicitly using models like Claude Sonnet for my work, but for now, I’ll stick with Auto mode since it’s currently on the powerful Claude Sonnet 4.5.

I’ll report back in a few days when I see the model switch to a lesser one. I’ve trawled through the forum to try and shed light on Auto mode and pricing, but no one seems to have any idea what is going on. Some clarity from Cursor C-suite or power users would be much appreciated.

Cursor is an absolute powerhouse and so far it looks like all the issues I have with it are simply from the auto mode switching to dumb models when I hit rate limits or usage limits.

1 Like

Yesterday I used auto mode all day for well over 100 chats - Sonnet 4.5 was used 100% of the time even for simple tasks. I thought “auto” was supposed to be smart and switch to cheaper models when expensive models weren’t needed? It doesn’t.

I’m at $50 of usage today so I’ll exceed my $60 usage shortly and find out if “auto” is just fancy speak for “switch to dumb models when power users cost us too much” - which I’m pretty sure is what was happening previously on the $20 plan.

1 Like

At $80 spend now and still getting sonnet 4.5 for every auto request.

Also, I was getting connection errors very consistently when on the $20 plan (errors about proxy/VPN, timeouts, or lost connections). These errors look like they were all incorrectly blaming the network, because I’ve had zero connection issues since upgrading. ZERO, despite running over 200 chats now in a few days.

So all that time I spent debugging my network was wasted - thanks, Cursor, for the red herring. If I’d known I just had to upgrade, I would have upgraded sooner.

Previously I’d get the network error and a broken chat every second chat I started, and I could never run more than one chat at a time. Now I’m consistently running 5 chats and not getting any issues at all.

I also recently updated to 2.0, so maybe that fixed the bug too. (Separately, I find the agents page/view pointless: 1. I can’t queue up drafts of agents to run like I can in the editor agent, 2. I can’t see the files, and 3. the vertical chat list shows all chats, not just the ones I have running/queued, which makes it super overwhelming. It’s a much harder view to use, IMHO.)

1 Like

Appreciate the transparency here — especially around Auto Mode and pricing. :raising_hands:
The model picker mystery has definitely been a vibe risk, so seeing the model tag in the corner is a solid step. That said, pricing still feels like a scroll with missing glyphs. Predictability matters — especially when switching between Claude and GPT-4 Turbo mid-task.

Also loving the Claude 2.1 shoutout — long context tasks finally feel less like a toaster meltdown.

Looking forward to seeing how the new pricing model lands. TLAG snack-mode cautiously optimistic :chocolate_bar::herb:


Still using sonnet 4.5 for 100% of the auto mode chats. I’ve never seen it use another model in auto mode so far.

Today I did get 3 server issues that broke chats, but I think that was largely because I was running 5+ agents at the same time, so it was likely a Claude rate-limit issue. I let a few agents finish and then continued the broken chats without further issues (previously I’d have to start a new chat thread, and the old thread would never recover after a server issue).

I’ve used $130 of usage so far in 2 days. It’s still “included” in the Pro plan limits, and so far I haven’t been downgraded to a dumber model.

Quite happy to still be getting good performance and models, but a little concerned about where the invisible “cutoff” is. I think I used around $600 on the $20 plan over a month, which I guess is why it was just giving me the dumb models towards the end of the billing cycle.

As a rough guess, I’d say the models got really dumb after about $400 on the $20 plan, so if the cutoff scales with the plan price, I guess I should be good up to around $1,200 of “usage” on this plan.

1 Like

There’s no reliable way to know which model Auto is using. It may say it’s sonnet 4.5, but it may not be.

Do you have an annual subscription with unlimited Auto?

1 Like

Just a monthly sub

So far it says sonnet 4.5 for everything so I am starting to doubt whether it actually is using sonnet for everything.

Having said that, the code quality has been very consistent and reliable since I upgraded to Pro - I have yet to get a dumb model (these models are very obvious because of the errors and loops, and they also respond very differently).

1 Like

So you don’t have unlimited Auto and have used $130 of usage on your $20 plan? Or do you mean Pro+, which is 3x Pro? Regardless, getting $130–$400 of usage on the $20 or $60 plan is insane.

Also how do you know your Auto is using sonnet 4.5?

1 Like

I created a rule that says: if the AI is in Auto mode, it needs to tell me what model it is assigned, and then await confirmation from the user before continuing. I added it so I could catch dumb models before they break my codebase everywhere.

Auto is “unlimited” apparently and so far it is always sonnet 4.5 - that or the AI is lying every time.

The token count is deceptive because almost all of it is cached tokens, which are billed at a small fraction of the normal input rate.
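To put rough numbers on that (assuming Anthropic-style pricing, where cache reads bill at about a tenth of the fresh-input rate; Cursor doesn’t publish its internal rates):

```python
# Why a huge token count can mean a modest bill. Rates are assumptions
# (Anthropic-style API pricing); Cursor's internal rates aren't public.
input_rate = 3.00 / 1e6    # $ per fresh input token (assumed)
cache_rate = 0.30 / 1e6    # $ per cache-read token (assumed, ~10% of input)

fresh, cached = 1_000_000, 9_000_000    # 90% of the "count" is cache reads
naive = (fresh + cached) * input_rate              # $30.00 if all were fresh
actual = fresh * input_rate + cached * cache_rate  # $3.00 + $2.70 = $5.70
print(f"naive: ${naive:.2f}  actual: ${actual:.2f}")
```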

1 Like

The notice reads: “Your plan currently includes unlimited Auto for the current billing period. This will transition to new pricing in a future billing cycle.” I have this notice too, but I am on an annual subscription, so my next billing period is not until next July. But you said you are monthly, which is why I am surprised you have unlimited Auto. Doesn’t matter. Use it while you got it! :grin:

I have a rule to identify the model too. However, I wonder how accurate it is. My rule used to work better, but now for Auto it either ignores telling me what model it is or says something like “Model: Auto (agent router) | Type: Code assistant | Revision: 2024”.

What rule are you using?

My rule works every time - here it is:

Model Identification
:white_check_mark: Initial response: state the exact model identifier and purpose
:white_check_mark: Format: “Running as [Model Name]”
:white_check_mark: Example: “Running as Gemini 2.5 Pro,” “Using GPT-4o”
:white_check_mark: Confirm the user accepts the model before proceeding further
:cross_mark: Start working, looking at files, or evaluating the task before the user confirms the model
:white_check_mark: If your model is Claude Sonnet 4.5, then permission is auto-granted and you don’t need to wait for the user
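If it helps, here’s a sketch of how this could live as a project rule. I’m assuming Cursor’s `.cursor/rules` format with `alwaysApply` frontmatter; pasting the same text into a User Rule under Settings → Rules should behave the same way:

```
---
description: Always disclose and confirm the model before doing any work
alwaysApply: true
---

- In your first response, state the exact model you are running as, in the
  format "Running as [Model Name]" (e.g. "Running as Gemini 2.5 Pro").
- Do not read files, evaluate the task, or start working until the user
  confirms the model.
- Exception: if the model is Claude Sonnet 4.5, permission is auto-granted
  and you may proceed without waiting.
```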

2 Likes

Looks like Cursor changed how they calculate the usage and no longer show the “value” of the free Auto usage.

So I’ll change my tracking to input/output tokens.

I’m at 3 million input / 5 million output tokens so far (about double where I was last time I posted).

The Auto mode still reports that it is using Sonnet 4.5 every time it runs - I’ve still never seen it report a different model.

I would be doubtful that it is using Sonnet for all Auto chats if the output weren’t so consistent - so far I’ve had consistent, high-quality code from every chat. The only issues I’ve encountered have been user error / prompt error.

I’ve also been able to run 5+ chats simultaneously without any rate limits or server/internet errors.

Very generous that the Auto model is unlimited usage - especially if it truly is Sonnet 4.5, which it feels like it is. I haven’t had the “dumb model” problems since I started asking it to report the model before starting work and since I upgraded to Pro. Pro and Cursor 2.0 seem to have fixed all the major issues I was having before with “dumb models” - I’m assuming it is mostly the Pro upgrade.

Today I finally got GPT-5 Codex instead of Claude Sonnet in Auto mode.

I had used Sonnet for 1 billion tokens:

  • 5 million input, 8 million output
  • 1 billion cache read, 70 million cache write

I’ll be interested now to see if it has to re-cache everything for Codex or if it uses some sort of shared cache.

I’m also interested to see if it’s just switched me to a cheaper model now that I’ve hit 1 billion tokens, and to see if it switches me back to Claude or not. So far Auto mode hasn’t been “auto” in anything.
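Out of curiosity, here’s a back-of-the-envelope sketch of what that usage would cost at API rates. The per-million-token rates are my assumption (Anthropic’s published Sonnet pricing); Cursor’s actual internal costs may be nothing like this:

```python
# Rough API-equivalent value of the Sonnet usage above. Rates ($ per million
# tokens) are assumptions based on Anthropic's published Sonnet pricing;
# Cursor's real internal rates aren't public.
rates = {"input": 3.00, "output": 15.00, "cache_read": 0.30, "cache_write": 3.75}
usage_m = {"input": 5, "output": 8, "cache_read": 1_000, "cache_write": 70}

for kind, rate in rates.items():
    print(f"{kind:>11}: ${rate * usage_m[kind]:8.2f}")
print(f"{'total':>11}: ${sum(rates[k] * usage_m[k] for k in rates):8.2f}")  # ~$697.50
```

If those assumptions are anywhere near right, the 1 billion cache reads are ~99% of the token count but well under half of the estimated cost, which is exactly why the raw token count is deceptive.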

Odd. It used Codex for about 5 chats and then went back to Sonnet, where it has stayed ever since.

Well at least I know the “what model are you” rule is working.

Codex was complete rubbish compared to Sonnet, too, so I’m glad it switched back.