Dear Cursor Team,
First of all, I want to express my sincere appreciation for the amazing AI coding assistant you’ve built. Your IDE has significantly improved my workflow, and the intelligent support it provides is truly impressive. Thank you for such a valuable tool!
I’m writing regarding a recent change in the behavior of the O3 model. Currently, O3 appears to be available only in MAX mode, which automatically sends a very large context window with each request. While I understand the intent behind this change, it has unfortunately made the model much less usable for my needs.
Previously, your system smartly selected only the relevant context to send, giving me predictable token usage and clear control over the cost per request. That setup worked wonderfully for me—it was effective, efficient, and budget-friendly.
With the new setup, I sometimes end up paying over a dollar per call without knowing exactly what was included in the context window. This lack of transparency and control makes it impractical for me to continue using the model in its current form.
Please consider bringing back the previous O3 behavior as an optional mode alongside MAX. Having both options would give users the flexibility to choose what best suits their workflow and budget.
Thank you again for your excellent product and for taking the time to consider this feedback.
Warm regards,
Yaakov