I keep getting this error when using my own OpenAI API key. I am not running out of credits. It keeps happening at random times, which is really annoying.
Steps to Reproduce
Add your own OpenAI API key and try running the Agent with GPT-5.2.
Hey, thanks for the report. The “Request failed with status code 400” error with the “invalid_encrypted_content” message is a known issue that happens when using a custom OpenAI API key with GPT-5.2 and other reasoning models.
The screenshot shows a typical symptom: the API rejects the request because of a conflict between the parameters Cursor sends for reasoning models. It can occur intermittently depending on the context and how the request history is assembled.
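To make the failure mode concrete: the 400 below complains that a `function_call` item was replayed without the `reasoning` item that produced it. Here is a minimal, hypothetical sketch of a client-side check for that condition. The positional pairing rule (a reasoning item directly before its function call) and all item ids are assumptions for illustration; the provider actually tracks the pairing by item id.

```python
# Hypothetical sketch: detect Responses API-style history where a
# function_call item lost its paired reasoning item. The "reasoning item
# must immediately precede its function_call" rule is an illustrative
# simplification, not the provider's documented contract.

def find_orphaned_function_calls(items):
    """Return ids of function_call items not preceded by a reasoning item."""
    orphans = []
    for i, item in enumerate(items):
        if item.get("type") == "function_call":
            prev = items[i - 1] if i > 0 else None
            if prev is None or prev.get("type") != "reasoning":
                orphans.append(item.get("id"))
    return orphans

# Example history with made-up ids: the reasoning item was dropped during
# replay, which is the shape of request that triggers the provider's 400.
history = [
    {"type": "message", "role": "user", "content": "list files"},
    {"type": "function_call", "id": "fc_example", "name": "run_cmd",
     "arguments": "{\"command\": \"ls\"}"},
]

print(find_orphaned_function_calls(history))  # ['fc_example']
```

A check like this would flag the broken history before the request is sent, rather than surfacing it as an opaque streaming error in the UI.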
Request ID: 9c1b5a69-8378-4637-bf49-1237afb9dbcd
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We encountered an issue when using your API key: Streaming error\n\nAPI Error:\n\n```\n{\"error\":{\"type\":\"provider\",\"reason\":\"provider_error\",\"message\":\"Provider returned 400\",\"retryable\":false,\"provider\":{\"status\":400,\"body\":\"{\\n \\\"error\\\": {\\n \\\"message\\\": \\\"Item 'fc_02ddc9b6ece5fea70069459eb5643c81968863a4280ff246de' of type 'function_call' was provided without its required 'reasoning' item: 'rs_02ddc9b6ece5fea70069459eae45748196a24fd4d5ad2deb26'.\\\",\\n \\\"type\\\": \\\"invalid_request_error\\\",\\n \\\"param\\\": \\\"input\\\",\\n \\\"code\\\": null\\n }\\n}\"}}}\n```","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":true}
Request ID: 8cacd6cf-e645-463d-a1f9-cf97f49514ba
ConnectError: [unavailable] getaddrinfo ENOTFOUND agentn.api5.cursor.sh
at oou.$streamAiConnect (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:12706:472564)
at async vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:566:27936
at async Object.next (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:10366:5535)
at async Object.run (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:10378:11111)
at async o (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:9113:1269)
at async Promise.allSettled (index 0)
at async Sro.run (vscode-file://vscode-app/Applications/Cursor.app/Contents/Resources/app/out/vs/workbench/workbench.desktop.main.js:9113:6519)
I understand how frustrating random errors are, especially when they disrupt your workflow. Since the issue is intermittent and you are using your own key, my first thought is that you are hitting a requests-per-minute (RPM) rate limit on the OpenAI side, even though you have sufficient credits. I suggest checking your OpenAI account dashboard to see whether 429 (rate limit) errors are being logged at the times the Cursor Agent fails. If the issue persists, I would try regenerating the API key entirely, in case of a token validation problem.
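If 429s do turn out to be the cause, retrying with exponential backoff usually papers over them. Below is a minimal sketch of that pattern; `call_model` and `RateLimited` are stand-ins for the real SDK call and its rate-limit exception, and the delays are illustrative.

```python
# Minimal backoff sketch for intermittent 429s when calling the API with
# your own key. Replace `call_model` with the real request; `RateLimited`
# stands in for whatever rate-limit exception your client raises.
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 response."""

def with_backoff(call, retries=4, base_delay=1.0):
    for attempt in range(retries):
        try:
            return call()
        except RateLimited:
            if attempt == retries - 1:
                raise  # out of retries; let the caller see the 429
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Simulated flaky endpoint: fails twice with a 429, then succeeds.
attempts = {"n": 0}
def call_model():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimited()
    return "ok"

print(with_backoff(call_model, base_delay=0.01))  # ok
```

Note that backoff only helps with genuine rate limiting; the 400 `invalid_request_error` above is not retryable, as the log itself marks it `"retryable":false`.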
Request ID: 26ca38a2-8aaa-486f-b610-c3798993b13f
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We encountered an issue when using your API key: [unavailable] Error\n\nAPI Error:\n\n```\nModel temporarily unavailable due to unsupported Responses API event.\n```","additionalInfo":{},"buttons":[],"planChoices":[]},"isExpected":true}