Update: o3 is now 1 request in Cursor

We’ve updated pricing to reflect OpenAI’s o3 price reduction.

  • It’s now 1 request
  • Max Mode pricing is reduced by 80%

As always, models are available at Cursor – Models & Pricing

23 Likes

Is it available on the Slow Pool?

yes

Wow, that’s great news :slight_smile: thanks @ericzakariasson

It still shows o3 as a Max-only model, at least in my Cursor version, and there are no updates available for my Cursor version either.

Version: 1.0.0
VSCode Version: 1.96.2
Commit: 53b99ce608cba35127ae3a050c1738a959750860
Date: 2025-06-04T19:21:22.523Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Darwin arm64 23.6.0

Edit: I’ve just tried enabling o3 in Max Mode to see whether it would count as one request, and unfortunately it’s billed at Max Mode rates, not as one request. Beware, others!

Edit 2: All good, I see it now — I just needed to restart Cursor.

2 Likes

Mine is correct on v1.0.1

1 Like

Based on your experience, do you feel that o3 is better than Gemini, Sonnet, or Opus for specific types of tasks? Or maybe for specific languages?

1 Like

Great, but o3’s rule compliance is really poor. claude-4-sonnet-thinking and Gemini Pro do much better.

Does this mean that Max Mode requests with o3 also dropped by 80%?
I mean, yes, it’s self-explanatory, but I want to be sure, because this would be even greater news for my actual use case.

I tested it today: it was able to understand code better than the latest Gemini, which hallucinated a solution based on incorrect information in my prompt. o3 pointed out why my request was incoherent.

It’s reasonably fast and handled my subsequent requests flawlessly and concisely. Its comments on what it did were really helpful. It’s a bit early for a definitive verdict, but it’s now my favorite model for logic and backend code.
Sonnet 4 remains my go-to for frontend design, which I find particularly good.

1 Like

The API pricing is also reduced, so Max Mode is also cheaper!

1 Like

So is anyone able to use o3-pro, though? It’s not working at all for me. I’m so sick of the congestion every time a model drops; do these people not realize they need some excess compute and/or server capacity for this stuff?

Strange for a company doing $10 billion ARR.

1 Like

Can’t get it to do anything meaningful at all.

What is your impression of o3?

Thanks a lot, restarting worked for me as well.

I used Gemini 2.5 to solve a bug, but it took 3 or more requests and still didn’t solve it. I used o3 and it solved the bug in 1 request. Solving bugs is annoying with Claude and Gemini (the model may have been o3-max).

1 Like

This is great, but what has happened to o3-mini (at 0.25 credits)?