Feature request for product/service
AI Models
Describe the request
Could you add fast processing for gpt-5-codex, similar to what you already have for gpt-5? It'd make Cursor unbeatable, 100%.
Hey, Priority processing (what Cursor calls "fast") isn't currently available in the OpenAI API for the Codex model, so it's not possible to add. If OpenAI ever offers such an option for Codex, it will likely be added to Cursor as well.
Codex is already a fast model, and Priority doesn't affect the quality of the model's output, so give it a try as it is now.