While investigating how Cursor calls Azure OpenAI, I discovered at least the following three issues.
The first issue is the API version. When the Azure OpenAI settings are changed, a connection test is performed, but that test uses API version “2023-03-15-preview”, which is very old. Models such as o1 and o3 require API version “2024-12-01-preview” or later.
The second issue is the request parameters. The connection test specifies “max_tokens”, but reasoning models such as o1 and o3 require the “max_completion_tokens” parameter instead of “max_tokens”.
The third issue concerns the error messages. When the connection test against Azure OpenAI fails, every error seems to be collapsed into “Invalid credentials. Please try again.” For the first two issues, Azure OpenAI is almost certainly returning more specific error messages, which Cursor then hides behind that generic text. It would be more user-friendly to display the error messages returned by the Azure OpenAI endpoint directly on screen.
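To illustrate all three fixes at once, here is a minimal sketch of what a corrected connection test could look like. The endpoint, deployment name, and helper names are assumptions for illustration, not Cursor's actual code.

```python
# Sketch of a corrected Azure OpenAI connection test.
# Assumptions: endpoint/deployment/key values are placeholders;
# the function names are hypothetical, not Cursor internals.
import json
import urllib.error
import urllib.request

API_VERSION = "2024-12-01-preview"  # o1/o3 need at least this version

def build_connection_test(endpoint: str, deployment: str, api_key: str):
    """Build URL, body, and headers for a minimal chat-completions probe."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={API_VERSION}")
    payload = {
        "messages": [{"role": "user", "content": "ping"}],
        # Reasoning models reject "max_tokens"; use "max_completion_tokens".
        "max_completion_tokens": 16,
    }
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    return url, json.dumps(payload).encode("utf-8"), headers

def run_connection_test(endpoint: str, deployment: str, api_key: str):
    url, body, headers = build_connection_test(endpoint, deployment, api_key)
    req = urllib.request.Request(url, data=body, headers=headers)
    try:
        with urllib.request.urlopen(req) as resp:
            return True, resp.read().decode()
    except urllib.error.HTTPError as err:
        # Surface Azure's actual error body instead of collapsing
        # everything into a generic "Invalid credentials" message.
        return False, err.read().decode()
```

The key point is the `except` branch: Azure returns a JSON error body explaining exactly what was wrong, and passing that through to the UI would make the first two issues self-diagnosing.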
I think these issues can be fixed easily.
I would appreciate it if you could share this information with the development team.
I hope this is helpful.
{
  "error": {
    "message": "Unsupported value: 'messages[0].role' does not support 'system' with this model.",
    "type": "invalid_request_error",
    "param": "messages[0].role",
    "code": "unsupported_value"
  }
}
That’s because I think o1 doesn’t support system as a role. It should be developer or assistant.
If only there were a way to change this for every request, it would have worked.
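A per-request workaround could be as small as rewriting the role just before the payload is sent. A sketch, assuming you can intercept the message list; the model prefixes checked here are an assumption:

```python
# Sketch: rewrite "system" to "developer" for reasoning models that
# reject the "system" role. The prefix list is an assumption.
REASONING_PREFIXES = ("o1", "o3")

def adjust_roles(model: str, messages: list) -> list:
    """Return a copy of messages with a role the target model accepts."""
    if not model.startswith(REASONING_PREFIXES):
        return messages
    return [
        {**m, "role": "developer"} if m["role"] == "system" else m
        for m in messages
    ]
```

Applied once, right before serialization, this would leave requests to older models (gpt-4o etc.) untouched while fixing the o1/o3 case.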
We absolutely support using Azure OpenAI through custom API keys. While Azure integration isn’t currently our top priority given other features we’re working on, you can still use O1 within Cursor via an Azure subscription.
I totally get the frustration around O1/O3 support - the technical details shared by others in this thread are super helpful, and I’ve made sure the team has this info to help improve Azure support down the line.
you can still use O1 within Cursor via an Azure subscription
This is not so. We can’t use o1 within Cursor via an Azure subscription. Cursor gives ‘Invalid credentials’ for o1 models, this includes o1-preview as well as a regular o1 (I tried it just now). It does work for older models though, such as gpt-4o. This mirrors this thread’s title and initial message.
I am a Pro user on the latest stable release. Only GPT-3.5-turbo and gpt-4o appear to work for me, and I believe the model_version for those is hardcoded and old. This seems silly to me: the Cursor development cycle will not be able to keep up with model development. You should open up your model configs so that users can tweak them to work with new models while your tooling catches up. The community could even inform your dev efforts. I like Cursor so far, but its lack of configurability and black-box feel is frustrating, and I’m looking at other VS Code extensions because of this.
{
  "error": {
    "message": "Unsupported value: 'messages[0].role' does not support 'developer' with this model.",
    "type": "invalid_request_error",
    "param": "messages[0].role",
    "code": "unsupported_value"
  }
}
Current version of Cursor:
Version: 0.46.9 (user setup)
VSCode Version: 1.96.2
Commit: 3395357a4ee2975d5d03595e7607ee84e3db0f20
Date: 2025-03-05T08:37:15.270Z
Electron: 32.2.6
Chromium: 128.0.6613.186
Node.js: 20.18.1
V8: 12.8.374.38-electron.0
OS: Windows_NT x64 10.0.22621
The issue is that the API for these models is not in the standard OpenAI format (it uses the role developer, whereas system is the standard), and therefore we have no routing enabled for this format under the hood.
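Given the two errors earlier in this thread (one model rejecting 'system', another rejecting 'developer'), such routing would have to pick the instruction role per model. A hedged sketch; the model-to-role mapping below is an assumption drawn from the errors posted here, not from any official compatibility table:

```python
# Sketch of a role-normalization shim. Assumption: o1-preview accepts
# neither "system" nor "developer" (it rejected 'developer' above),
# other o1/o3 models expect "developer", older chat models "system".
def instruction_role(model: str):
    if model.startswith("o1-preview"):
        return None  # no instruction-style role supported
    if model.startswith(("o1", "o3")):
        return "developer"
    return "system"

def normalize(model: str, messages: list) -> list:
    """Map instruction messages onto whatever role the model accepts."""
    role = instruction_role(model)
    out = []
    for m in messages:
        if m["role"] in ("system", "developer"):
            if role is None:
                # Fold the instructions into a plain user message instead.
                out.append({**m, "role": "user"})
            else:
                out.append({**m, "role": role})
        else:
            out.append(m)
    return out
```

A shim like this, applied per request, is what the thread's workaround suggestions amount to: the payload format is otherwise identical, so only the first message's role needs translating.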