This happens fairly constantly, usually mid-execution of code. I get the error "Unable to reach the model provider" only with Gemini Pro 3; no other models do this.
Things I have tried:
I have no MCP servers, downgraded HTTP to 1.1, and tried Cloudflare WARP and a VPN (I'm currently in Turkey and thought maybe that was the issue).
I'm out of ideas for how to fix this, please help!
Steps to Reproduce
Use Gemini Pro 3 as the model.
Expected Behavior
Write code without stopping.
Operating System
Windows 10/11
Current Cursor Version (Menu → About Cursor → Copy)
For AI issues: add Request ID with privacy disabled
Request ID: 8f224507-dc25-4aa1-b41e-aa7647f9caf0
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}
I’ve seen a similar issue before. In one case, an MCP server caused this error specifically with Gemini models. Could you double-check Cursor Settings > Tools & MCP and confirm all servers are fully disabled? If there’s even one enabled, try turning it off and testing again in a new chat.
Also please try:
Cursor Settings > Network > Run Diagnostics and share the results.
Try without Cloudflare WARP and without a VPN. They can sometimes interfere with certain providers.
If it still happens, let me know and I’ll escalate it to the team.
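(As a general aside: the error's own "This might be temporary - please try again in a moment" wording points at a transient failure, and the usual client-side workaround for that class of error is retrying with exponential backoff. A minimal sketch, assuming a stand-in `send_request` callable since Cursor's actual request path is internal and not something users can hook into:

```python
import random
import time

def call_with_backoff(send_request, max_attempts=4, base_delay=1.0):
    """Retry a flaky request with exponential backoff plus jitter.

    `send_request` is a hypothetical stand-in for whatever issues the
    model request; it should raise ConnectionError on a transient failure.
    """
    for attempt in range(max_attempts):
        try:
            return send_request()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # Wait base, 2*base, 4*base, ... plus jitter before retrying.
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)

# A stub that fails twice with the error from this thread, then succeeds:
attempts = {"n": 0}

def flaky_request():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("Unable to reach the model provider")
    return "ok"
```

That only helps if the failure really is transient, which the later reports in this thread suggest it is not once the context grows large enough.)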
Thanks for the reply! As I said, I have no MCP servers. I have tried without the VPN and WARP; those were attempts to fix the issue, not starting points.
I have a hunch… it does seem like it starts happening when I get to around 100–120k context. I've been too busy to test thoroughly, but it does seem that after starting a fresh chat it works until I hit 100–120k context. It's hard to prove because it's an intermittent issue… I will continue to test that theory.
My experience perfectly matches Isaac Smeele's. Around ~100K tokens (unsure of the exact point) I started getting the "Unable to reach the model provider" error occasionally. I've now hit 119.8K tokens and cannot get a single request to go through without the error.
This is the Request ID of the original request that started the conversation: 0e150be1-6b2e-4f20-87c0-dd51f4e387a8
And here is the Request ID of the latest request, which consistently causes the error: bead9652-4df1-470b-bcd6-d881556085d9
Edit: I did get a request in this conversation to succeed eventually (Request ID: f046fe61-0e87-4001-9d5c-1a0cf15b22ef)
I'm not sure how to get the request ID for non-error requests (there wasn't a very clear way)… but so far the results don't lie. I started 3 new chats and did not receive any error until ~120k:
test 1. 1st error at 129k - 424ba760-7f44-4b0b-87a3-8e10eca2bd4a
test 2. 1st error at 115k - 529d4860-a5eb-4d3f-a44b-c53a19ceda6d
test 3. 1st error at 124k - 40234de4-b629-4ace-9215-cb496be454bc
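For anyone repeating these tests, the bookkeeping behind them can be sketched with a rough token estimate, assuming the common ~4-characters-per-token heuristic (the real tokenizer will differ, so treat the numbers as approximate; the 120k threshold and 80% margin below are just taken from the reports in this thread):

```python
def estimate_tokens(text, chars_per_token=4):
    """Very rough token count: roughly 4 characters per token on average.

    Real tokenizers will differ, so this only gives a ballpark figure
    for judging when a chat is nearing the suspected limit.
    """
    return max(1, len(text) // chars_per_token)

def near_threshold(total_tokens, threshold=120_000, margin=0.8):
    """True once the running total passes `margin` of the suspected limit,
    i.e. a reasonable point to start a fresh chat before errors begin."""
    return total_tokens >= threshold * margin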
I also get this error when I’ve got a large context window.
Request ID: c9c951ce-964b-4840-b3f7-3d558f5d353f
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}
Having the same issue, with large context or even new chats. No MCP servers. Happening on both Mac and Windows, for at least 2 days.
We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment.
Request ID: 08c4bbab-2210-46f0-b4f7-32f92a249de5
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}
Just wanted to update you: I've done several more tests and confirmed that the error always begins around ~120k. In dozens of tests, the lowest I have ever seen it start is 95k. I didn't bother collecting the request IDs because you already have so many.
Any update on the issue? I rely on Gemini 3's price point and abilities to make my project feasible. It's really time-consuming having to manually summarize the chat at 100k and having it crash halfway through an action. I'm also using up my credits doing summaries and having new chats re-parse files over and over again…
Ever since the last two updates, I often get the error very early on, even on the first message of a new chat. Always with Gemini 3 Pro; no issues with any other model.