Gemini Pro 3 - Intermittent Unable to Connect Error

Where does the bug appear (feature/product)?

Cursor IDE

Describe the Bug

It happens fairly constantly, usually mid-execution of code. I get the error “Unable to reach the model provider” only with Gemini Pro 3; no other model does this.

Things I have tried:
I have no MCP servers, downgraded HTTP to 1.1, and tried Cloudflare WARP and a VPN (I’m currently in Turkey and thought that might be the issue).

I’m out of ideas for how to fix this, please help!

Steps to Reproduce

Use Gemini Pro 3 as the model.

Expected Behavior

Code generation completes without stopping.

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 2.2.43 (system setup)
VSCode Version: 1.105.1
Commit: 32cfbe848b35d9eb320980195985450f244b3030
Date: 2025-12-19T06:06:44.644Z
Electron: 37.7.0
Chromium: 138.0.7204.251
Node.js: 22.20.0
V8: 13.8.258.32-electron.0
OS: Windows_NT x64 10.0.26100

For AI issues: which model did you use?

Gemini Pro 3

For AI issues: add Request ID with privacy disabled

Request ID: 8f224507-dc25-4aa1-b41e-aa7647f9caf0
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}
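For anyone collecting several of these error blobs for a report, the payload is plain JSON once the forum's curly quotes are normalized back to straight quotes. A minimal sketch of pulling out the useful fields (the field names come straight from the blob above; the empty `buttons`/`planChoices` arrays are an assumption, since the forum appears to have stripped them):

```python
import json

# Error payload as shown in the Cursor error dialog, quotes normalized.
raw = (
    '{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider",'
    '"detail":"We\'re having trouble connecting to the model provider. This might be '
    'temporary - please try again in a moment.","additionalInfo":{},"buttons":[],'
    '"planChoices":[]},"isExpected":false}'
)

payload = json.loads(raw)

# The fields worth logging when gathering repro data across many failures.
print(payload["error"])             # error code, e.g. ERROR_OPENai-style identifier
print(payload["details"]["title"])  # human-readable title
print(payload["isExpected"])        # whether Cursor flagged the failure as expected
```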

Does this stop you from using Cursor

Sometimes - I can sometimes use Cursor


Hey, thanks for the report.

I’ve seen a similar issue before. In one case, an MCP server caused this error specifically with Gemini models. Could you double-check Cursor Settings > Tools & MCP and confirm all servers are fully disabled? If there’s even one enabled, try turning it off and testing again in a new chat.

Also please try:

  1. Cursor Settings > Network > Run Diagnostics and share the results.
  2. Try without Cloudflare WARP and without a VPN. They can sometimes interfere with certain providers.
  3. If it still happens, let me know and I’ll escalate it to the team.

Let me know if any of this helps.

+1 – still happening for me despite having no MCP servers.

Thanks for the reply! As I said, I have no MCP servers. I have already tried without the VPN and WARP; those were attempts to fix the issue, not starting points. :confused:

I have a hunch… it seems to start happening when I get to around 100–120k tokens of context. I’ve been too busy to test thoroughly, but starting a fresh chat, it works until I hit 100–120k context. It is hard to prove because it’s an intermittent issue… I will continue to test that theory.

Thanks for the extra details. Interesting theory about the context reaching 100–120k tokens.

To help the team investigate:

Could you run a quick experiment:

  1. Start a new chat with Gemini Pro 3
  2. Copy the Request ID right after the first request (does it work?)
  3. Keep the chat going until ~100k tokens
  4. When the error shows up, copy the Request ID
  5. Share both Request IDs here

This will help engineering verify your context size hypothesis. @Michael_Zeng you might have similar usage patterns.

In the meantime, while Gemini is unstable, please try Claude 4.5 Sonnet for long tasks. It tends to be more reliable.

My experience perfectly matches Isaac Smeele’s. Around 100K tokens (unsure exactly when), I started getting the “Unable to reach the model provider” error occasionally. I’ve now hit 119.8K tokens and cannot get a single request to go through without the error.

This is the Request ID of the original request that started the conversation: 0e150be1-6b2e-4f20-87c0-dd51f4e387a8

And here is the Request ID of the latest request, which consistently causes the error: bead9652-4df1-470b-bcd6-d881556085d9

Edit: I did get a request in this conversation to succeed eventually (Request ID: f046fe61-0e87-4001-9d5c-1a0cf15b22ef)


I’m not sure how to get the Request ID for non-error requests; there wasn’t a very clear way…. But so far the results don’t lie: I started 3 new chats and did not receive any error until ~120k.

test 1. 1st error at 129k - 424ba760-7f44-4b0b-87a3-8e10eca2bd4a

test 2. 1st error at 115k - 529d4860-a5eb-4d3f-a44b-c53a19ceda6d

test 3. 1st error at 124k - 40234de4-b629-4ace-9215-cb496be454bc

OK, I figured out how to retrieve the Request ID.

TEST 4:

1st request - 6470d2eb-0504-414b-ac89-f0dad243741b
error request - 129K - bafde42c-971e-4a59-877e-540a77538887

I also get this error when I’ve got a large context window.

Request ID: c9c951ce-964b-4840-b3f7-3d558f5d353f
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}

I’ve been getting this for days.
Running /summarize at <50% context helped a bit, but it’s still annoying.

Having the same issue, both with large contexts and in new chats. No MCP servers. Happening on both Mac and Windows, for at least 2 days.

We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment.

Request ID: 08c4bbab-2210-46f0-b4f7-32f92a249de5
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}

Version: 2.2.44 (Universal)

VSCode Version: 1.105.1

Commit: 20adc1003928b0f1b99305dbaf845656ff81f5d0

Date: 2025-12-24T21:41:47.598Z (6 days ago)

Just wanted to update you. I’ve done several more tests and confirmed that the error always begins around ~120k. In dozens of tests, the earliest I have ever received the error was at 95k. I didn’t bother collecting the Request IDs because you already have so many.

Any update on the issue? I rely on Gemini 3’s price point and abilities to make my project feasible. It’s really time-consuming to manually summarize the chat at 100k, only to have it crash halfway through an action. I’m also using up my credits on summaries and on having new chats re-parse files over and over again….


Thanks for the detailed testing. The team confirmed the issue based on your data and the Request IDs.

This is a known issue with Gemini Pro 3 at high context sizes (around 100 to 120k tokens). The dev team is already working on a fix.

Workarounds:

  • Use Claude 4.5 Sonnet for long tasks; it’s more stable with large contexts
  • Or run /summarize before you reach 100k tokens

We know this is critical for your project given the cost of Gemini 3. The team is working on a fix.


Oh great, I had no idea about /summarize, ty!!! <3


How do you know when you are about to reach 100k tokens?

You can hover your mouse over the context indicator in the chat, and it’ll show you how much context is being used.
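If you want a rough sense of size before the chat gets there, a common heuristic for English text and code is roughly 4 characters per token. This is only an approximation, not Cursor's actual tokenizer, so treat it as a ballpark:

```python
def rough_token_estimate(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English/code.
    Real tokenizers (and Cursor's context meter) will differ."""
    return len(text) // 4

# e.g. ~480,000 characters of pasted files is roughly 120k tokens,
# right around where the error reportedly starts appearing.
print(rough_token_estimate("x" * 480_000))  # 120000
```

The hover indicator is still the authoritative number; this just helps decide whether a file dump is about to blow past the ~100k safety margin.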


Ever since the last two updates, I have often been getting the error very early on, even on the first message of a new chat. It’s always with Gemini 3 Pro; no issues with any other model.

Request ID: f04c5e62-5557-4d42-a216-87615f523a1d

Request ID: a4d1fdcf-3acb-4bd8-97d4-fe08e5d97098
