I’ve been using the o3 model extensively in ChatGPT, and it’s become undeniably clear: its greatest strength lies in its ability to call tools, especially Python in isolated environments. This is what transforms o3 from just another LLM into a true reasoning and problem-solving agent.
However, in the current Cursor environment, o3 is not permitted to run Python in a separate execution context. This restriction severely limits its potential: the model cannot perform the deep analysis or dynamic experimentation that would be effortless in agent-enabled mode.
To truly showcase o3’s capabilities, we need to let it run Python in agent mode, not just for manual execution but also for bug finding, optimization, and complex data analysis (see the sketch below for the kind of check this would enable).
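As a rough illustration of what I mean, here is the sort of disposable verification script an agent could generate and run in an isolated Python sandbox to confirm a suspected bug and measure an optimization. Everything in it is hypothetical: `buggy_moving_average`, `fixed_moving_average`, and `running_sum_moving_average` are invented stand-ins, not code from any real project.

```python
# Hypothetical sketch: a throwaway script an agent might execute in an
# isolated Python environment. The functions below are invented examples.
import random
import timeit

def buggy_moving_average(xs, k):
    # Suspect version: off-by-one in the slice, so each window drops one element.
    return [sum(xs[i:i + k - 1]) / k for i in range(len(xs) - k + 1)]

def fixed_moving_average(xs, k):
    # Corrected version: each window contains exactly k elements.
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k + 1)]

random.seed(0)
data = [random.random() for _ in range(1000)]

# Bug finding: compare the two implementations and report the first mismatch.
bad = buggy_moving_average(data, 5)
good = fixed_moving_average(data, 5)
mismatches = [i for i, (a, b) in enumerate(zip(bad, good)) if abs(a - b) > 1e-12]
print(f"first mismatch at index {mismatches[0]}" if mismatches else "outputs agree")

# Optimization check: time the naive O(n*k) version against a running-sum O(n) version.
def running_sum_moving_average(xs, k):
    out, s = [], sum(xs[:k])
    out.append(s / k)
    for i in range(k, len(xs)):
        s += xs[i] - xs[i - k]
        out.append(s / k)
    return out

assert all(abs(a - b) < 1e-9 for a, b in zip(good, running_sum_moving_average(data, 5)))
print("naive:", timeit.timeit(lambda: fixed_moving_average(data, 5), number=200))
print("running sum:", timeit.timeit(lambda: running_sum_moving_average(data, 5), number=200))
```

Today, o3 can only reason about this kind of check in its head; with agent-mode Python it could actually run it, see the mismatch and the timings, and iterate.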
Imagine the efficiency gains: faster debugging, deeper code understanding, fewer tool invocations, and, ultimately, reduced compute costs. More performance, less overhead.
Please consider this enhancement not just as a feature request, but as a step toward unlocking the true promise of LLM-assisted development. Let o3 breathe. Let it build. Let it think.