Azure OpenAI custom API interaction bugs

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

  • Azure OpenAI custom-model APIs do not support Plan mode automatically; even when the user turns it on manually, it fails with an error like “can’t help with that”
  • Azure OpenAI based models do not support images/vision; they fail with a strange error

Steps to Reproduce

N/A

Operating System

macOS

Version Information

Version: 3.1.15
VSCode Version: 1.105.1
Commit: 3a67af7b780e0bfc8d32aefa96b8ff1cb8817f80
Date: 2026-04-15T01:46:06.515Z
Layout: editor
Build Type: Stable
Release Track: Default
Electron: 39.8.1
Chromium: 142.0.7444.265
Node.js: 22.22.1
V8: 14.2.231.22-electron.0
OS: Darwin arm64 25.4.0

For AI issues: which model did you use?

All Azure OpenAI models configured as a custom API in settings

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor

Hey, both items are known bugs on our side with Azure OpenAI BYOK:

  1. Vision/images don’t work: the verification request is hardcoded to api.openai.com/v1/models, so it fails on Azure and any non-standard endpoints. Same symptom here: OpenAI BYOK chat with image - throws the error.

  2. Plan mode with Azure: the issue is in how we build the URL for the GPT-5 family. We’re sending an OpenAI-style path instead of the Azure-style one with api-version. Related thread: ERROR_OPENAI: Missing required parameter tools[10].custom with Azure GPT-5.
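Both bugs come down to provider-specific URL construction. A minimal sketch of what provider-aware routing could look like, assuming the documented OpenAI and Azure OpenAI endpoint shapes (the function names, parameters, and the `api-version` default are illustrative, not Cursor's actual internals):

```python
# Hypothetical sketch: provider-aware endpoint construction for BYOK.
# The Azure path shape and api-version query parameter follow Azure
# OpenAI's documented URL format; helper names are illustrative.

def models_url(provider: str, base: str, api_version: str = "2024-06-01") -> str:
    """URL used to verify a key by listing models."""
    if provider == "azure":
        # Azure: https://<resource>.openai.azure.com/openai/models?api-version=...
        return f"{base}/openai/models?api-version={api_version}"
    # Plain OpenAI (the currently hardcoded case described above)
    return "https://api.openai.com/v1/models"

def chat_url(provider: str, base: str, deployment: str = "",
             api_version: str = "2024-06-01") -> str:
    """URL used for chat completions."""
    if provider == "azure":
        # Azure routes by deployment name and requires api-version,
        # unlike the OpenAI-style /v1/chat/completions path
        return (f"{base}/openai/deployments/{deployment}"
                f"/chat/completions?api-version={api_version}")
    return f"{base}/v1/chat/completions"

print(chat_url("azure", "https://myres.openai.azure.com", "gpt-5"))
```

Sending the OpenAI-style branch to an Azure resource, as described in both items above, would fail in exactly the reported ways.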

I can’t share a fix timeline yet. If you can, please send the exact error text for the vision case along with the Request ID (top right corner of the chat > Copy Request ID). That’ll help us link your case to the right tickets.

After extensive testing, it is apparent that the current “Bring Your Own Key” (BYOK) functionality suffers from numerous bugs and edge-case failures. The BYOK architecture needs dedicated attention, with a focus on rigorous cross-testing to ensure stability across providers, models, and editor modes.

Critical Areas for Testing & Bug Squashing

Currently, conflicts arise when multiple custom keys are active simultaneously. We highly recommend implementing comprehensive matrix testing across the following areas:

  • Cross-Provider Compatibility: Seamlessly handling simultaneous BYOK configurations for Azure, AWS Bedrock, and custom OpenAI endpoints.

  • Cross-Model Routing: Ensuring requests route to the correct provider when switching between different models.

  • Mode Compatibility: Ensuring BYOK inputs work flawlessly across all core Cursor features, specifically Plan Mode and Vision Mode, which currently exhibit instability when using custom keys.

Specific Feature Request:

  • Enhanced Azure OpenAI Support: Currently, the Azure OpenAI BYOK implementation is too restrictive. Because Azure limits model availability based on regional deployments, users often cannot access all necessary models through a single endpoint.
  • Bedrock region and model ID handling: Bedrock assigns regions and model ID prefixes (eu, AU, Global, us, N/A). Individual devs are surely using the BYOK feature via their personal Bedrock accounts, and they will not have enough quota in a single region. So if a user does a lot of AI work in Cursor via Bedrock BYOK, they will hit rate limits and need to switch regions (and, for some regions, model IDs). It would be great to have an auto-assign switch, so the user only needs to provide an API key and choose a model name (not a model ID).
  • Improved rule adherence with BYOK models: I noticed Cursor ignoring the rules, and I need to remind it to follow them. The same rule configuration works completely fine with the models Cursor provides itself.
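The requested Bedrock auto-assign switch could be sketched as a lookup from a plain model name plus region to a region-prefixed inference-profile ID. The `us.`/`eu.`/`apac.` prefixes follow Bedrock's cross-region inference profile convention, but the mapping table below is illustrative, not exhaustive, and the fallback behavior is an assumption:

```python
# Hypothetical sketch of an "auto-assign" switch for Bedrock BYOK: the user
# supplies only an API key and a model *name*; the client derives the
# region-prefixed inference-profile ID. The name-to-ID table is illustrative.

REGION_PREFIX = {
    "us-east-1": "us", "us-west-2": "us",
    "eu-west-1": "eu", "eu-central-1": "eu",
    "ap-southeast-1": "apac",
}

BASE_MODEL_IDS = {
    # model name -> base Bedrock model ID (illustrative entry)
    "claude-3-5-sonnet": "anthropic.claude-3-5-sonnet-20240620-v1:0",
}

def resolve_model_id(model_name: str, region: str) -> str:
    """Map a friendly model name + region to a Bedrock model/profile ID."""
    base_id = BASE_MODEL_IDS[model_name]
    prefix = REGION_PREFIX.get(region)
    # Regions without a known cross-region profile fall back to the plain ID
    return f"{prefix}.{base_id}" if prefix else base_id

print(resolve_model_id("claude-3-5-sonnet", "eu-west-1"))
# eu.anthropic.claude-3-5-sonnet-20240620-v1:0
```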

To resolve the Azure limitation, Cursor needs to support multiple Azure OpenAI configurations simultaneously. The settings UI should allow users to add multiple profiles, each requiring:

  • Deployment Name

  • Endpoint URL

  • API Key
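A sketch of what such a profile entry and routing could look like. The field names mirror the three bullets above; the structure is a proposal, not Cursor's actual settings schema:

```python
# Hypothetical shape for multiple Azure OpenAI BYOK profiles. Field names
# mirror the requested settings; this is a proposal, not Cursor's schema.
from dataclasses import dataclass

@dataclass
class AzureProfile:
    deployment_name: str   # e.g. "gpt-5-us"
    endpoint_url: str      # e.g. "https://myres-us.openai.azure.com"
    api_key: str

def pick_profile(profiles: list[AzureProfile], deployment: str) -> AzureProfile:
    """Route a request to the profile that hosts the chosen deployment."""
    for p in profiles:
        if p.deployment_name == deployment:
            return p
    raise KeyError(f"no profile exposes deployment {deployment!r}")

profiles = [
    AzureProfile("gpt-4o-eu", "https://myres-eu.openai.azure.com", "key-eu"),
    AzureProfile("gpt-5-us", "https://myres-us.openai.azure.com", "key-us"),
]
print(pick_profile(profiles, "gpt-5-us").endpoint_url)
```

This would let regionally split deployments coexist, with each request routed to the endpoint that actually hosts the selected deployment.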

Hey, thanks for the detailed feedback. In order:

For the two specific bugs (vision on Azure and plan mode with GPT-5), the issues are already logged, but there’s no ETA for a fix yet. If you can send the Request ID for the vision case (top right of the chat > Copy Request ID) and the full error text, I’ll attach them to the ticket.

The features you mentioned (multiple Azure profiles in the UI, auto region and model ID for Bedrock, and matrix testing Plan plus Vision across all BYOK providers) are separate feature requests. I can’t promise they’ll land in the backlog exactly as described, but the Azure case where models are split across regions and a single endpoint isn’t enough makes sense.

Let me know if anything else comes up, ideally with repro steps.