I see a roughly 10x cost difference between the gpt5.4 model and gpt 5.3 model. Are others seeing this as well? I am running both in MAX mode.

Max Mode pricing depends on the context size used (above 200k tokens the price doubles). In non-Max mode they're pretty similar in price.
Hey, what you’re seeing is expected, but the main reason is Max Mode, not just the base rates.
GPT-5.4 has a long-context surcharge. When the input goes over 272k tokens, the input price doubles and the output price becomes 1.5x. In Max Mode with -high reasoning, it’s easy to hit that limit. GPT-5.3 Codex starts with lower base rates, and its surcharge works differently, so the price gap grows fast.
In non-Max mode, the prices are pretty similar. If you don’t need the extra-long context for every task, switching GPT-5.4 to regular mode will reduce that gap a lot.
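To see how fast the surcharge compounds, here's a rough sketch of the arithmetic described above. The base per-million-token rates below are placeholders (the real numbers are on the pricing page); only the multipliers — 2x input and 1.5x output once the input passes 272k tokens — come from this thread.

```python
# Sketch of GPT-5.4's long-context surcharge arithmetic.
# Base rates are PLACEHOLDERS, not real prices; only the
# multipliers (2x input, 1.5x output above 272k input tokens)
# come from the discussion above.

SURCHARGE_THRESHOLD = 272_000  # input tokens

def request_cost(input_tokens, output_tokens,
                 base_in_per_m=1.0, base_out_per_m=4.0):
    """Dollar cost of one request, applying the long-context
    surcharge when the input exceeds the threshold."""
    in_rate, out_rate = base_in_per_m, base_out_per_m
    if input_tokens > SURCHARGE_THRESHOLD:
        in_rate *= 2.0    # input price doubles
        out_rate *= 1.5   # output price becomes 1.5x
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A request below the threshold vs. a Max Mode request past it:
small = request_cost(100_000, 10_000)  # no surcharge
big = request_cost(300_000, 10_000)    # surcharge applies
```

With these placeholder rates, the 300k-token request costs roughly 4.7x the 100k-token one even though the output is identical, which is why the gap to GPT-5.3 Codex grows so quickly in Max Mode.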
A couple of options:

- Use -med instead of -high on GPT-5.4. You'll get fewer thinking tokens with similar quality for most tasks.
- Try GPT-5.3 Codex with -high. It's strong on its own, and close to Opus 4.6 on our benchmarks for a lot less cost.

Pricing details: Models & Pricing | Cursor Docs
Thanks for the usage tip on GPT-5.4, and I agree: 5.3-Codex-High is excellent.
By the way, I have rolled forward again to Cursor Version 2.6.21 and am enjoying trouble-free agent terminals. The latest UI is excellent, and my session productivity has been through the roof recently.