O3 - pro user - not connecting to model provider

Describe the Bug

Getting this error while using o3 – I posted about this last week too. Are you quietly throwing errors to limit Pro users on slow requests, or is this a legitimate ongoing issue with OpenAI?

"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment."

Steps to Reproduce

Using o3 model

Screenshots / Screen Recordings

Operating System

Windows 10/11

Current Cursor Version (Menu → About Cursor → Copy)

Version: 1.2.4 (user setup)
VSCode Version: 1.99.3
Commit: a8e95743c5268be73767c46944a71f4465d05c90
Date: 2025-07-10T17:09:01.383Z
Electron: 34.5.1
Chromium: 132.0.6834.210
Node.js: 20.19.0
V8: 13.2.152.41-electron.0
OS: Windows_NT x64 10.0.26100

Additional Information

Request ID: 78ee5798-aff4-4c77-9c21-fe91938008e7

Does this stop you from using Cursor

Yes - Cursor is unusable


I am experiencing the same thing… been like this for hours

Hey, I just checked, and the model works fine for me. Can you confirm if the issue still exists? Also, try starting a new chat.

Still no joy for me, but I'm chatting with hi@cursor to see why it's not working. It's really delaying critical work.


This isn't a single-provider blip or a handful of confused users; it's a repeating platform issue that has been popping up in forum history for quite a while. Any official word? Incident ID? Status page note? Even "we've reproduced it and are digging" would help. Right now we just keep getting scattered "works for me / try again / it's an Anthropic bug" replies while the error persists across all models. Happy to supply timestamps, logs, whatever. I've practically memorized the banner; I might print it on a mug soon. Could we consolidate these threads and get a central update? Thanks.


Seems to happen with some MCPs. Try deactivating them one by one.

The strange thing is that MCPs worked fine until a few days ago.

Thanks for the input, but I have no MCPs active and this happens anyway. Yesterday I was unable to use Cursor from 5 pm until 2 am because of this constant message.

Here are only a few of the forum threads I’ve found in the last few days with posts related to this issue:

This doesn’t feel like something solved by “have you tried starting a new chat?” That keeps being the default reply, but new threads with the same symptoms keep appearing. It’s clearly broader than isolated user error.

I'd really appreciate someone from the Cursor team acknowledging this with at least a "we're looking into it." For me, Cursor has been nearly unusable for the last two weeks.


I have also been facing the same issue for the last 24+ hours. Neither o3 nor Auto is working for me.

We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.
Request ID: 57f3183a-9ccb-4d42-8918-82aa157d3e84

@deanrie @condor Can you please look into it? I think a lot of users are facing a similar issue.


Completely agree, it feels like the responses are a brush-off for something that is clearly a widespread issue.

No questions are answered. Models are being shut down and purchased packages are unusable. If Cursor is down, there's no need to put thousands of people through this. We can't even use the services we paid for.

Questions are going unanswered, and support calls are being ignored. Forum posts are blocked.

Hi everyone, please have a bit of patience. It takes time to review bug reports and to triage causes. It's not true that nobody answers or that support requests are ignored.

While some users are affected, it's not affecting everyone. Now it's important to see what is causing it, find a way to make it reproducible, and then fix it.

Please focus on the OP's topic. If you have comments on other topics, post them in other threads.

@AKC thanks for tagging me.

@seanc-dev No, this isn't a brush-off; we're checking on possible causes.

We understand the concerns and appreciate the support being offered, but sudden outages, unanswered questions, and insincere AI communications are impacting our business and processes. These outages and problems have been increasing rapidly over the past week.

You told us to pay more for uninterrupted use. We bought the Ultra package.

You told us to update Cursor. We did.

You imposed usage restrictions. We complied.

The result: now we can't do anything. Cursor's Auto mode is deleting our code. It's giving fake and false answers to questions. It even lied four times when asked which model it was.

What should have been our favorite brand is now turning into a nightmare.

I have the same issue with all models at the moment.

I am also facing the issue. Requests are failing and timing out.

Hi @deanrie @condor

It's 100% still not working; I'm still getting errors. Question: are you limiting the length of chats on purpose?

To clarify:

  • This is across ALL models (o3, Claude, etc.).
  • It has been happening for the past 2 weeks (at least on my end, but I can see others have complained about this also).
  • No, I'm not using any custom settings or setup - default settings, as vanilla as it can get.

When this first occurred yesterday, I was knee-deep in a session and that's when I first got the error (the chat itself was not 'long' - mid-length at best).

I tried the following:

– 1. Restarting my PC and quitting the IDE.

To test:

– 2. I connected my API key and it momentarily seemed to be working, but it errored out again after a couple of seconds. (The first error message I got showed the number of context tokens being used to process the request before it timed out and then threw the same error again - I'll come back to this point later.)

– 3. I then tried turning on API usage pricing (toggling off the API key) and retried sending the request (as I thought it might be related to your limiting of Pro users' usage on slow requests). Still nothing.

– 4. I then opened a new chat… and it worked perfectly fine. So I decided to bite the bullet, lose that context, and continue in the new chat. A few hours later (again, not a 'long' chat history by any means), after SEVERAL failed tool calls, I got the same error again.

That was yesterday.

Today I again tried the same session in case it was temporary: same error. I then opened a new chat and it's working fine.

So, per your new tier/pricing system, are you covertly limiting context token usage per request - essentially limiting the length of chats - and that's why it's throwing an error (point 2)? Because it seems like it. It can't be a model connection issue if I'm able to connect in a new chat, right?

I'm working on a cognitive neuro project and I require 'x' amount of context to ensure adequate assistance, so starting a new chat as a (painful) temporary measure and taking the hit of failed tool calls in agent mode while we wait for any feedback or resolution on this issue is literally costing me $$$ - not to mention the effects on productivity… I'm starting my third chat in under 24 hours.

Maybe the lack of transparency about the recent changes has made me a pessimist, but I don't see what else could explain this. Perhaps your technical team will prove me wrong.

T.

Thanks for the feedback; the Cursor team is still looking into it.

@TK94 Chat length and the amount of context sent in matter, as an overly long context with a lot of varying info does eventually confuse the AI.

The context limit shown on the Models page in the documentation exists to prevent several kinds of issues. That's why there is a Max mode for cases where you need longer context.


Bro, relax, you're exaggerating massively. It's annoying, but it's not that deep, ffs.

@condor, why are we persistently not getting any response regarding our packages, outgoing tokens, and non-working accounts? So many people are voicing their complaints here. Emails are being sent. No response, no explanation!!!

Explain to me logically why I can't use Cursor at all, despite purchasing Ultra and paying for more credits.

Are you going to ignore this and pretend it doesn’t exist?

I don't understand; so many people are saying the same thing. But you don't respond to them and just insist that we shouldn't switch models in the middle of a chat.

We don't do that anyway. We have our setup in place and haven't had any issues so far. Even though we make the same number of requests, since upgrading to the Ultra plan you make it seem like we've used five times more. You're restricting usage even though it says it's unlimited.

Even though we remain respectful, you're being disrespectful.