I need to give up on Gemini 3 for today. It is too popular at the moment and constantly erroring with high-demand messages. Too bad, because Sonnet is something I cannot trust at all anymore.
Very excited to see if it lives up to the hype. I was hoping Google would swing back with its own variant of a "bona-fide" A.I.
First impressions: Gemini changed 200 lines in my TypeScript project and CI did not find a single error. Impressive!
Besides the massive token usage, I understand it's a thinking model, but it does take a considerable time to complete anything. Hopefully the model is tuned, or these issues addressed, in the future.
This error keeps happening for me:
Request ID: db7e3a2a-1b69-49ef-b898-158ba8602559
"{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We encountered an issue when using your API key: Provider was unable to process your request\n\nAPI Error:\n\n\\nRequest failed with status code 400: {\\\"error\\\":{\\\"message\\\":\\\"gemini-3-pro-preview is not a valid model ID\\\",\\\"code\\\":400}}"},"isExpected":true}"
ConnectError: [invalid_argument] Error
at QJc.$endAiConnectTransportReportError (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:4989:399336)
at Ywo._doInvokeHandler (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:36007)
at Ywo._invokeHandler (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:35749)
at Ywo._receiveRequest (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:34514)
at Ywo._receiveOneMessage (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:33336)
at GDt.value (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:31429)
at _e._deliver (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:49:2962)
at _e.fire (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:49:3283)
at ggt.fire (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:4974:12156)
at MessagePort. (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:7498:18439)
It's ■■■■.
It seems much better.
Doesn't Gemini 3 Pro come with a larger context window? I think their documentation said 1M, but Cursor still shows 200k, same as Sonnet 4.5.
Same here, I don’t see it in v1.7 either.
Is the Gemini 3.0 Pro that's provided the high or low reasoning variant? It doesn't say.
I'm honestly really impressed so far. I spent three days working on a problem with different models, and they kept making up causes, but Gemini 3.0 was able to tell me what was happening after doing an investigation and applying the fixes. I'll keep using it; I'm truly impressed.
Is the model free in the preview version? You should be clearer about when newly released models are free; it's not obvious where to find information on how long a model stays free before it becomes paid.
Go to Cursor Settings > Models, click the refresh icon, and it'll appear. That did it for me.
Blah, I woke up to a brave new morning expecting the high-demand message to be only a nightmare from last night. But no, it is worse than when I went to sleep. I couldn't even get a first reply from it.
I tried it with my own Gemini API key, because Cursor's own is useless at the moment. It worked just fine for asking questions and planning, but when I pressed Build (many times), it threw this error:
Request failed with status code 404: [{
  "error": {
    "code": 404,
    "message": "models/gemini-3-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.",
    "status": "NOT_FOUND"
  }
}]
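As the 404 itself suggests, the fix is to check ListModels before calling generateContent: the model ID has to appear in that list and advertise generateContent support. A minimal local sketch of that check in Python — the sample payload below is illustrative, shaped like a ListModels response, not a real API reply:

```python
# Sketch: validate a model ID against a ListModels-style response before
# attempting generateContent. The sample payload is an assumption for
# illustration, not actual API output.

def supports_generate_content(list_models_response: dict, model_id: str) -> bool:
    """Return True if model_id is listed and advertises generateContent."""
    # ListModels names are fully qualified, e.g. "models/<id>".
    wanted = model_id if model_id.startswith("models/") else f"models/{model_id}"
    for model in list_models_response.get("models", []):
        if model.get("name") == wanted:
            return "generateContent" in model.get("supportedGenerationMethods", [])
    return False

# Illustrative response: only the preview ID is available.
sample = {
    "models": [
        {
            "name": "models/gemini-3-pro-preview",
            "supportedGenerationMethods": ["generateContent"],
        },
    ]
}

print(supports_generate_content(sample, "gemini-3-pro"))          # False: not listed
print(supports_generate_content(sample, "gemini-3-pro-preview"))  # True
```

With a payload like this, "gemini-3-pro" fails the check while "gemini-3-pro-preview" passes, which matches the 404 above: the bare ID simply isn't served under that API version.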
Blessed with Gem3Pro. Debugging lots of GPT-5 code.
I tried it out for a day. Although it's a bit slow, it completed my requirements in one go without any bugs. In terms of intelligence, it's on par with GPT-5; I can't yet say which one is better, because both are excellent.
Unfortunately, I was only able to interact with the service on a limited basis. From 09:00 UTC onwards, the service either reached the provider’s rate limit and responded with a message that the provider was overloaded, or the prompt call simply shut down unexpectedly.
Same here. It's like winning the lottery if you can get anything done right now.
I hit limits even with my own API key, and upgrading to Tier 2 is neither easy nor fast.



