GPT-5.3 Codex Spark jumps over Plan mode

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

GPT-5.3 Codex Spark starts executing the plan while the session is still in Plan Mode.

Steps to Reproduce

Start a plan and make a few refinements. When the context window is close to full, request one more small refinement on the plan, and answer Yes when the model asks whether it should also run the plan.

Expected Behavior

Either the plan should not run, or Cursor should switch to Agent mode first.

Screenshots / Screen Recordings

Operating System

Windows 10/11

Version Information

Version: 2.6.11 (system setup)
VSCode Version: 1.105.1
Commit: 8c95649f251a168cc4bb34c89531fae7db4bd990
Date: 2026-03-03T18:57:48.001Z
Build Type: Stable
Release Track: Default
Electron: 39.6.0
Chromium: 142.0.7444.265
Node.js: 22.22.0
V8: 14.2.231.22-electron.0
OS: Windows_NT x64 10.0.26200

For AI issues: which model did you use?

GPT-5.3 Codex Spark

For AI issues: add Request ID with privacy disabled

8086ad4f-0c95-47be-a12d-35fb186d585f

Additional Information

I'm running WSL2, so the OS-level Cursor instance has to connect to the WSL remote.

Does this stop you from using Cursor?

No - Cursor works, but with this issue

Hey, thanks for the report and the request ID.

This is a known issue. Plan mode not being respected is something the team is tracking. It has been reported across different models, but GPT-5.3 Codex Spark seems especially prone to it.

One detail from your report: you mentioned answering “Yes” when the model asked about running the plan. That confirmation likely triggered execution even though you were still in Plan mode. For now, it is worth watching for those prompts and declining them while you are still planning.

A workaround that has helped other users is to create a .cursor/rules/plan-safety.mdc file with:

---
description: Plan mode safety
alwaysApply: true
---
CRITICAL: In PLAN mode, NEVER edit files or run commands. Only describe the plan. Wait for explicit user approval before any implementation.
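If it helps, the rule file above can be created in one step from the project root. This is just a sketch for a Unix-like shell (e.g. inside your WSL2 distro); the filename and frontmatter match the workaround as written, and nothing else here is Cursor-specific:

```shell
# Create the Plan-mode safety rule from the project root.
# On WSL2, run this inside the WSL filesystem where your project lives.
mkdir -p .cursor/rules
cat > .cursor/rules/plan-safety.mdc <<'EOF'
---
description: Plan mode safety
alwaysApply: true
---
CRITICAL: In PLAN mode, NEVER edit files or run commands. Only describe the plan. Wait for explicit user approval before any implementation.
EOF
```

The quoted heredoc delimiter ('EOF') keeps the content literal, so nothing inside is expanded by the shell.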

This does not fix the root cause, but it does reduce how often the model jumps ahead.

Your report, with the request ID, helps with prioritization. Let me know if the workaround helps.