I need to give up for today with Gemini 3. It is too popular at the moment and constantly errors with a high-demand message. Too bad, because Sonnet is something I cannot trust at all anymore.
Very excited to see if it lives up to the hype. I was hoping Google would swing back with its own variant of a "bona fide" A.I.
First impressions: Gemini changed 200 lines in my TypeScript project and CI did not find a single error. Impressive!
Besides the massive token usage, I understand it's a thinking model, but it does take a considerable time to complete anything. Hopefully the model is tuned in the future to address these issues.
This error keeps happening for me:
Request ID: db7e3a2a-1b69-49ef-b898-158ba8602559
{"error":"ERROR_OPENAI","details":{"title":"Unable to reach the model provider","detail":"We encountered an issue when using your API key: Provider was unable to process your request\n\nAPI Error:\n\n\\nRequest failed with status code 400: {\\\"error\\\":{\\\"message\\\":\\\"gemini-3-pro-preview is not a valid model ID\\\",\\\"code\\\":400}}"},"isExpected":true}
ConnectError: [invalid_argument] Error
at QJc.$endAiConnectTransportReportError (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:4989:399336)
at Ywo._doInvokeHandler (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:36007)
at Ywo._invokeHandler (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:35749)
at Ywo._receiveRequest (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:34514)
at Ywo._receiveOneMessage (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:33336)
at GDt.value (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:492:31429)
at _e._deliver (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:49:2962)
at _e.fire (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:49:3283)
at ggt.fire (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:4974:12156)
at MessagePort. (vscode-file://vscode-app/usr/share/cursor/resources/app/out/vs/workbench/workbench.desktop.main.js:7498:18439)
It seems much better.
Doesn't Gemini 3 Pro come with a larger context window? I think their documentation said 1M, but Cursor still shows 200k, same as Sonnet 4.5.
Same here, I don't see it in v1.7 either.
Is the Gemini 3.0 Pro that's provided high or low? It doesn't say.
I'm honestly really impressed so far. I spent 3 days working on a problem with different models and they kept making up the causes, but Gemini 3.0 was able to tell me what was happening after doing an investigation and applying the fixes. I'll keep using it; I'm truly impressed.
Is the model free in the preview version? You should be clearer about when newly released models are free; it's not obvious where to find information on how long models stay free before they become paid.
Go to Cursor Settings > Models, click the refresh icon, and it'll appear. That did it for me.
Blah, I woke up to the brave new morning expecting the high-demand message to be only a nightmare from last night. But no, it is worse than when I went to sleep. I couldn't even get a first reply from it.
I tried it with my own Gemini API key, because Cursor's own is useless at the moment. It worked just fine for asking questions and planning, but when I pressed Build (many times), it threw this error:
Request failed with status code 404: [{
  "error": {
    "code": 404,
    "message": "models/gemini-3-pro is not found for API version v1main, or is not supported for generateContent. Call ListModels to see the list of available models and their supported methods.",
    "status": "NOT_FOUND"
  }
}]
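The 404 above says to call ListModels. If you are debugging with your own key, a small helper like this can filter a ListModels-style response for model IDs that actually support `generateContent`. This is a hedged sketch: the helper name and the sample payload are made up for illustration; only the field names (`models`, `name`, `supportedGenerationMethods`) follow the shape of the Generative Language API's model listing.

```python
def generate_content_models(listing: dict) -> list[str]:
    """Return model IDs whose supported methods include generateContent."""
    return [
        m["name"]
        for m in listing.get("models", [])
        if "generateContent" in m.get("supportedGenerationMethods", [])
    ]

# Illustrative payload, not real API output; the model IDs are hypothetical.
sample = {
    "models": [
        {"name": "models/gemini-example-pro",
         "supportedGenerationMethods": ["generateContent"]},
        {"name": "models/embedding-example",
         "supportedGenerationMethods": ["embedContent"]},
    ]
}

print(generate_content_models(sample))  # → ['models/gemini-example-pro']
```

If the model ID you are sending (e.g. `gemini-3-pro`) does not appear in the filtered list for your key and API version, that explains the NOT_FOUND response.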
Blessed with Gem3Pro. Debugging lots of GPT-5 code.
I tried it out for a day today. Although it's a bit slow, it completed my requirements in one go without any bugs. In terms of intelligence, it's on par with GPT-5. Currently I can't say which one is better, because both are excellent.
Unfortunately, I was only able to interact with the service on a limited basis. From 09:00 UTC onwards, the service either reached the provider's rate limit and responded with a message that the provider was overloaded, or the prompt call simply shut down unexpectedly.
Same here. It is like winning the lottery if you can get anything done right now.
I hit the limits even with my own API key. Upgrading to tier 2 is not easy or fast.
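For anyone hitting the overload and rate-limit errors reported above while calling a model API directly, the usual client-side workaround is exponential backoff with jitter. A minimal sketch, under the assumption that the failing call raises an exception you can catch (`with_backoff` and the `RuntimeError` stand-in are illustrative, not part of any real SDK):

```python
import random
import time

def with_backoff(call, retries=5, base=1.0):
    """Retry `call` with exponential backoff plus jitter."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # stand-in for an overload / rate-limit error
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            # Wait base * 2^attempt seconds, plus jitter to avoid
            # many clients retrying in lockstep.
            time.sleep(base * (2 ** attempt) + random.uniform(0, base))
```

This won't raise a tier cap, but it turns transient "provider overloaded" responses into slower successes instead of hard failures.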