Anyone use this as their default Cursor LLM? I default to it only because Cursor's token usage system bothers me, and frankly I'd rather concentrate on the code and comp-sci side of building than deal with Cursor's "weekly token calculation recalibrating" ... it's sort of burned me out. I personally use Codex Mini and audit it with Grok whenever. I imagine it's a rudimentary way of getting things done, so I wanted to see how everyone else reviews GPT-5.1 Codex Mini.
Maybe I come across as somewhat jaded. I genuinely loved Cursor in the beginning. The memory leak (versions ago) and Sam being a disaster for anything that matters sort of took the shine off the product. Hoping these are getting resolved/improving ...
pls lmk what you guys think about this particular LLM?
Hey, thanks for the feedback and for sharing your workflow.
A couple things that might help:
On models: GPT-5.1 Codex Mini is great for speed, and you get about 4x the rate limits of the standard GPT-5.1 Codex. But if you want the best quality for the price, it's worth trying GPT-5.3 Codex. It leads the benchmarks, it's faster than previous generations, and it costs about a third of Opus 4.6. Cursor recommends it as the default model for everyday work.
On usage and billing: we're using an API pool system right now. The Pro plan includes a monthly credit, and usage is counted by tokens. If the numbers look unclear, you can check your usage at https://cursor.com/settings. If you spot a specific bug in how usage is displayed, let us know and we'll dig in.
On the memory leak: yep, there were performance issues in older versions. We've done a lot of work on this since then, so if you're still seeing something similar on the current version, message us and we'll take a look.
Let me know if you’ve got questions about specific models or billing.