As seen in the screenshot, Claude Sonnet 4.5 is selected, but it gives an error. When I click the “Copy Request Details” button, the message that appears says open-ai. I think I’m using Claude. What is open-ai? Is someone trying to fool us?
Steps to Reproduce
Request ID: bd8fb4a2-ae14-4a24-a444-ef92f9ce1c90
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We're having trouble connecting to the model provider. This might be temporary - please try again in a moment.","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":false}
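For anyone triaging payloads like this one, a quick sketch of inspecting the copied request details (the empty `buttons`/`planChoices` arrays are assumed here, since the pasted payload was truncated at those fields):

```python
import json

# Payload copied from "Copy Request Details"; buttons/planChoices are
# assumed to be empty arrays (the original paste was cut off there).
payload = (
    '{"error":"ERROR_OPENAI",'
    '"details":{"title":"Unable to reach the model provider",'
    '"detail":"We\'re having trouble connecting to the model provider. '
    'This might be temporary - please try again in a moment.",'
    '"additionalInfo":{},"buttons":[],"planChoices":[]},'
    '"isExpected":false}'
)

data = json.loads(payload)
print(data["error"])          # generic provider-error code, not provider-specific
print(data["isExpected"])     # whether Cursor anticipated this failure
```

Note that `"error":"ERROR_OPENAI"` is a code string, not the name of the model actually serving the request.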
Hey, thanks for the report. To clarify your concern: ERROR_OPENAI is just a generic error code in Cursor’s system, and it appears for all model providers (Claude, Gemini, OpenAI, etc.), not just OpenAI. You are indeed using Claude Sonnet 4.5 as selected.
This “Unable to reach the model provider” error has been affecting multiple users over the past 2 days and is currently being investigated.
Please try these diagnostic steps:
Network diagnostics: Cursor Settings → Network → Run Diagnostics (please share the results if there are any errors)
Test in a new chat: create a fresh Agent chat and see if the issue persists
Network setup: are you using a VPN, corporate network, firewall, or antivirus? If yes, try disabling it temporarily
Could you please share:
Network diagnostic results
Whether you’re using a VPN or corporate network
Whether the issue persists after trying these steps
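If the built-in diagnostics won't run, the DNS and SSL portions can be approximated with a small script. This is a minimal sketch under assumptions: `api2.cursor.sh` is a placeholder host, so substitute whatever host your diagnostics panel reports.

```python
import socket
import ssl

# Placeholder host -- replace with the host shown in Cursor's diagnostics.
HOST = "api2.cursor.sh"

def check_dns(host: str) -> bool:
    """Return True if the hostname resolves to an address."""
    try:
        socket.gethostbyname(host)
        return True
    except OSError:
        return False

def check_tls(host: str, timeout: float = 5.0) -> bool:
    """Return True if a TLS handshake to host:443 succeeds."""
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:
        return False

if __name__ == "__main__":
    print("DNS:", "OK" if check_dns(HOST) else "FAIL")
    print("TLS:", "OK" if check_tls(HOST) else "FAIL")
```

A failure on DNS here but not in the browser usually points at a local resolver, proxy, or security-software issue rather than Cursor itself.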
I’m not using any VPN, and the diagnostics can’t connect via DNS, HTTP/2, or SSL - they just spin forever. Cursor hangs on “Planning next moves” indefinitely. Starting a new chat or changing agents has no effect.
Commands and custom modes are not interchangeable. You can’t restrict specific tool use in commands, and you can’t focus the model with instructions in commands either.
What’s the point, my friend? It’s still terrible. It constantly interrupts, ruins the code, and still consumes hundreds of thousands of tokens. I think we need to look for alternatives. Cursor with Claude is getting worse every day.
What’s the point? The point is giving the support team the feedback they need to fix an issue a number of people are having, so they can find the common denominator, debug it, and deliver the vibe coding tools you need. How would you expect to improve if your teachers’ only feedback on a test was “what’s the point” every time you got something wrong? They ask so they can improve the product.