Bug Report: After enabling custom models, Gemini models become unavailable

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

After configuring a custom OpenAI API Key, selecting any Gemini model results in the following error message:

“We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment.”

Steps to Reproduce

1. In Cursor, open Settings > Models and configure a custom OpenAI API Key (and Base URL).
2. Select any Gemini model.
3. Send a request; the error message above appears.

Operating System

Windows 10/11

Version Information

Version: 2.4.21 (system setup)
VSCode Version: 1.105.1
Commit: dc8361355d709f306d5159635a677a571b277bc0
Date: 2026-01-22T16:57:59.675Z
Build Type: Stable
Release Track: Early Access
Electron: 39.2.7
Chromium: 142.0.7444.235
Node.js: 22.21.1
V8: 14.2.231.21-electron.0
OS: Windows_NT x64 10.0.19045

For AI issues: which model did you use?

Gemini

For AI issues: add Request ID with privacy disabled

Request ID: f732879e-88e5-42d1-9343-8ca2b0557c37

Does this stop you from using Cursor?

Yes - Cursor is unusable

Relevant Configuration & Documentation:

I am using a custom model API endpoint from Zhipu AI: https://open.bigmodel.cn/api/coding/paas/v4

The configuration references Zhipu AI’s official documentation: https://docs.bigmodel.cn/cn/coding-plan/tool/cursor
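For reference, the endpoint works when exercised outside Cursor. Here is a minimal sketch of how I verified it, assuming the Zhipu endpoint is OpenAI-compatible (which Cursor's custom Base URL feature requires); the model ID glm-4 and the ZHIPU_API_KEY environment variable are placeholders, not part of the report:

```python
import os

from openai import OpenAI  # pip install openai

# Point the standard OpenAI client at the custom Zhipu endpoint.
client = OpenAI(
    base_url="https://open.bigmodel.cn/api/coding/paas/v4",
    api_key=os.environ["ZHIPU_API_KEY"],  # placeholder env var name
)

# "glm-4" is an assumed model ID for illustration; check Zhipu's docs
# for the models actually served by this endpoint.
resp = client.chat.completions.create(
    model="glm-4",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

So the endpoint itself responds normally; the failure only appears inside Cursor when a Gemini model is selected.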

Hey, this is a known issue. When you set a custom OpenAI Base URL or API key, that config gets applied to all model providers, not just OpenAI.

The same thing was discussed here: After integrating the Open API key, the existing AI becomes unusable
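To make the failure mode concrete: once the custom Base URL is set, a request for a Gemini model gets sent to the Zhipu endpoint, which doesn't serve Gemini model IDs. A rough sketch of the equivalent request outside Cursor (the Gemini model ID and the error handling are illustrative assumptions, not Cursor's actual internals):

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://open.bigmodel.cn/api/coding/paas/v4",  # the custom endpoint
    api_key="...",  # your Zhipu key
)

# A Gemini model ID sent to this endpoint cannot be served by it, so the
# provider returns an error -- which Cursor surfaces as the generic
# "trouble connecting to the model provider" message.
try:
    client.chat.completions.create(
        model="gemini-2.0-flash",  # assumed Gemini model ID, for illustration
        messages=[{"role": "user", "content": "ping"}],
    )
except Exception as exc:
    print(f"Provider rejected the request: {exc}")
```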

Workaround: turn off the custom API settings (Settings > Models > OpenAI Base URL) before switching to Gemini or other native Cursor models.

The team is aware and the bug is logged. Your report will help with prioritization.

It would be really good if we had a proper system for custom LLM APIs that didn't disrupt Cursor's provided models. That way we could use Sonnet from Cursor and Sonnet from our custom API side by side. To me this is a big thing, and I hope we see such an improvement soon.
