Azure o1 model API isn't working, but the 4o model API is functional

While investigating how Cursor calls Azure OpenAI, I found at least the following three issues.

The first issue is the API version. When the Azure OpenAI settings are changed, a connection test is performed, but that test specifies API version “2023-03-15-preview”, which is very old. Models such as o1 and o3 require API version “2024-12-01-preview” or later.
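For illustration, here is a minimal sketch of a connection test with a newer API version, assuming the Azure OpenAI Python SDK; the endpoint and key are placeholders:

from openai import AzureOpenAI

# Placeholders: substitute your own resource endpoint and API key.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    # "2023-03-15-preview" predates the reasoning models;
    # o1/o3 need "2024-12-01-preview" or later.
    api_version="2024-12-01-preview",
)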

The second issue is the request parameters. The connection test specifies “max_tokens”, but reasoning models such as o1 and o3 require “max_completion_tokens” instead of “max_tokens”.
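A sketch of the corrected request, reusing the client above; “o1” here is an assumed deployment name:

# Reasoning models reject "max_tokens" and only accept "max_completion_tokens".
response = client.chat.completions.create(
    model="o1",  # Azure deployment name (assumed)
    messages=[{"role": "user", "content": "ping"}],
    max_completion_tokens=256,
)
print(response.choices[0].message.content)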

The third issue concerns the error messages. When an error occurs during the connection test with Azure OpenAI, every error seems to be collapsed into “Invalid credentials. Please try again.” Because of the first two issues, Azure OpenAI is most likely returning more specific error messages, but they never reach the user. I believe it would be more user-friendly to display the error messages returned by the Azure OpenAI endpoint directly on screen.
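As an illustration of what that could look like, here is a hypothetical sketch that surfaces Azure's own error text instead of a fixed string, using the SDK's APIStatusError exception:

from openai import APIStatusError

try:
    client.chat.completions.create(
        model="o1",
        messages=[{"role": "user", "content": "ping"}],
        max_completion_tokens=16,
    )
except APIStatusError as exc:
    # Non-2xx responses carry a body like {"error": {"message": "..."}};
    # showing that message is far more actionable than a generic string.
    detail = exc.response.json().get("error", {}).get("message", str(exc))
    print(f"Connection test failed: {detail}")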

I think these issues can be fixed easily.

I would appreciate it if you could share this information with the development team.
I hope this helps.


Now o3 has come out, but you guys still don't even support Azure's o1.

I see this error in the network window:

{
  "error": {
    "message": "Unsupported value: 'messages[0].role' does not support 'system' with this model.",
    "type": "invalid_request_error",
    "param": "messages[0].role",
    "code": "unsupported_value"
  }
}

I think that’s because o1 doesn’t support “system” as a role; it expects “developer” or “assistant” instead.

If there were a way to change this for every request, it would work.
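For anyone calling the endpoint outside Cursor, here is a minimal sketch of remapping the role before sending the request, based on the role suggested above and again assuming the Azure OpenAI Python SDK with an “o1” deployment name:

def remap_roles_for_o1(messages):
    # o1 rejects "system" messages; newer API versions accept "developer" instead.
    return [
        {**m, "role": "developer"} if m.get("role") == "system" else m
        for m in messages
    ]

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Explain this stack trace."},
]

response = client.chat.completions.create(
    model="o1",
    messages=remap_roles_for_o1(messages),
    max_completion_tokens=256,
)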

Hey, there’s currently no way around this, but thanks for flagging - I’ll throw this to the team to see if we can solve it!
