SOLUTION NEEDED: We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment

Hey, thanks for your patience. We can see the issue is still happening even after all the troubleshooting steps.

Confirmed, this is a known infrastructure issue and the team is working on it. Your Request IDs have been shared with the engineers as a priority.

While we work on a fix, here are a couple of temporary options:

  • Use your own OpenAI API key directly (as you noticed, this helps)
  • Try Claude Sonnet 4.5 instead of Opus; based on user feedback, it’s more stable

About your request for custom base URLs for other providers, I’ll pass that feedback to the team. I get that expectations are higher on the Ultra plan.

I’m escalating this internally and I’ll follow up with an update as soon as we have a clearer fix timeline.

Thanks @deanrie, I appreciate the response, but this is the same one I have been getting for weeks. Claude Sonnet 4.5 gives the same issues, and more importantly, switching models does not address the fact that the models you advertise and have available, such as GPT 5.2 and Opus 4.5, do NOT work properly.

I think we are beyond the “I’ve escalated this issue” as I have also sent emails to support with no response.

Please review my email to support and let’s try to resolve this matter, as I pay a pretty substantial amount to Cursor monthly beyond the $200 fee.

@deanrie still doesn’t work even when using my own OpenAI key

Connection failed. If the problem persists, please check your internet connection or VPN

Request ID: 6ef40b70-5649-49c0-9e3a-0e0e1891ccb8

Again, the problem is costing me token usage: when the connection breaks, I have to either open a new agent window or try again, which requires the agent to resend context to the LLM and thus costs MORE tokens.

Just doing a quick analysis, I am confident in saying that I am paying 1.27x what I should be because of this.
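For anyone who wants to sanity-check that overhead figure, the arithmetic is simple. Here is a rough sketch; the failure rate and the assumption that failed attempts are fully billed are illustrative placeholders, not measured values:

```python
# Rough model of token overhead from connection failures.
# A failed request still consumes the tokens sent before the drop,
# and the retry resends the full context. Numbers are illustrative.

def overhead_multiplier(failure_rate: float, wasted_fraction: float = 1.0) -> float:
    """Expected tokens paid per successful request, relative to a
    failure-free baseline of 1.0.

    failure_rate: probability any given request fails and must be retried.
    wasted_fraction: fraction of a failed attempt's tokens still billed
    (1.0 = the whole context was sent before the drop).
    """
    # Expected attempts per success for independent failures: 1 / (1 - p).
    expected_attempts = 1.0 / (1.0 - failure_rate)
    expected_failures = expected_attempts - 1.0
    return 1.0 + expected_failures * wasted_fraction

# A ~21% failure rate with fully billed failed attempts gives
# roughly the 1.27x overhead described above.
print(round(overhead_multiplier(0.21), 2))  # → 1.27
```

Under that model, even a failure rate in the low twenties of percent is enough to produce the 1.27x overhead quoted above.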

I also noticed that randomly, my own API key toggle will just switch off… bug?

usually easy to tell once the agent stops responding…

This is getting REALLY annoying and the tool is almost becoming unreliable at this point.

I am using my OWN Gemini API Key here where I have tier 3 access with GCC:

We encountered an issue when using your API key: Gemini early stop

API Error:

Unexpected gemini finish reason: function_call_filter: MALFORMED_FUNCTION_CALL

Request ID: ecb1d73c-426c-4823-bfbe-0dd2f300440c
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We encountered an issue when using your API key: Gemini early stop\n\nAPI Error:\n\n\nUnexpected gemini finish reason: function_call_filter: MALFORMED_FUNCTION_CALL\n","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":true}

Still no response?

This is from me using my OWN Gemini API Key:

We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment.

Request ID: 55297ca2-6730-4e94-8f46-ff5e017508b6
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}
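If anyone is scripting around this, the payloads above are JSON, and the `isExpected` flag appears to distinguish known failure modes (the MALFORMED_FUNCTION_CALL case reports `isExpected: true`) from transient connectivity ones (`isExpected: false`). A minimal triage sketch, assuming the payload shape shown above holds in general:

```python
import json

# Sample payload in the shape shown above (smart quotes replaced,
# empty fields assumed to be [] / {} -- that part is an assumption).
payload = json.loads("""
{"error":"ERROR_OPENAI",
 "details":{"title":"Unable to reach the model provider",
            "detail":"We're having trouble connecting to the model provider.",
            "additionalInfo":{}, "buttons":[], "planChoices":[]},
 "isExpected":false}
""")

def should_retry(error: dict) -> bool:
    """Transient (unexpected) errors are worth retrying; expected ones,
    like a malformed function call, usually need a changed prompt instead."""
    return not error.get("isExpected", False)

print(should_retry(payload))  # transient connectivity error → True
```

This is only a guess at the field's semantics based on the two payloads in this thread; someone from the team would have to confirm what `isExpected` actually means.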

This has been the case for me as well! Can someone fix this please?


I got no response even from contacting support numerous times. I asked for credit due to the issues and the wasted API tokens, but they denied it.

I’m still getting the same error myself; I don’t think there is any urgency on their end to fix this problem.


Hey all (folks other than OP posting on this thread),

This is a fairly generic error that can mask a lot of different issues.

When this happens, can you share the request ID?

It’s also generally helpful to create new topics and fill out the bug report template. “Fix this” doesn’t get us very far! Details like what version you’re using are really important here.

I’ve created a new topic: Again We're having trouble connecting to the model provider. This might be temporary - please try again in a moment