I think today’s Cursor update added a new model: gpt-5.3-codex-spark-preview
Seems pretty lightweight and fast.
What are your thoughts on this?
I usually use Kimi K2.5 for speed and Codex for more complex tasks, so using Codex Spark isn’t really a concern for me.
It says it’s free, but I can see a small increase in API usage.
It’s fast, and that’s what’s super cool about it. OpenAI is using Cerebras as the provider, serving at speeds of 1000+ tokens per second. I’ve always liked Cerebras for their offerings; they serve open-source models at unmatched speeds. Seems like OpenAI struck the right deal here.
However, I noticed it does seem to think even when I selected the *-low variant, which is odd because my prompt just asked it to add a new parameter to a function.
Token consumption is surprisingly low, lower than that of any other premium AI model in Cursor.
Other than that, I found the *-xhigh version to be smart. Even though they claim Spark is a cheap, low-intelligence model, it was able to find and provide answers to complex questions about my monolith codebase of 100,000+ lines.
Their release article is worth a read for the Cursor team, to see how they are developing their product and partnerships: https://openai.com/index/introducing-gpt-5-3-codex-spark/
Interesting, thanks for the explanation. I’ll give it a try today while it’s still free, haha.
I’ll report back in about two days with my experience using Codex Spark. The only potential issue is my network setup, since I’m behind a company proxy. I’m not sure how that might affect the connection at 1000 TPS throughput.
How come you see the actual model ID and not the fancy name? Is there a setting for this?
I have used it a bunch. It is very fast, but its intelligence is very poor.