We've hit a rate limit with gemini-openai

We’ve hit a rate limit with gemini-openai. Please switch to the ‘auto-select’ model, another model, or try again in a few moments.

(Request ID: 292e2698-cfe7-48ac-a82e-c8f04741cb15)

I get this error when using Gemini 2.5 Pro.


Got this with only my third-ever request. What is going on?

Hmm, I wonder if this happens when your request comes too soon after the previous one, because I seem to be able to send requests now.
Edit: nope, it’s not that. It just happens, and you just need to try again.

As the model has just come out, we had relatively low capacity yesterday, but it should be improving in the coming hours and days!


Thanks for the update, @danperks.

Why can’t we add our own API key either? It keeps saying error. I’ve tested the API key inside Python and it works fine, so it’s a Cursor issue.
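For anyone wanting to run the same standalone check, here is a minimal sketch, assuming the google-generativeai Python package and a key exported as GEMINI_API_KEY (the package choice and variable name are my assumptions, not from the post above):

```python
# Minimal sketch: verify a Gemini API key works outside Cursor.
# Assumes `pip install google-generativeai` and that the key is exported
# as GEMINI_API_KEY (hypothetical variable name).
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Model name taken from later in this thread; swap in any model your key can access.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")

response = model.generate_content("Reply with the single word: ok")
print(response.text)  # If this prints without an error, the key itself is fine.
```

If this succeeds but the same key errors inside Cursor, that points at the integration rather than the key.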

Hi Dan, I’m still seeing lots of no-capacity errors with Gemini Pro. Are you guys still ramping up capacity, or is this a bug?

I’m using my own Gemini API key with gemini-2.5-pro-exp-03-25 in edit mode, and it is great, even with a pretty huge context. But yes, the version included with Cursor is not working reliably.

Is this the free or the paid API?

I’m using the paid API with my own key.

Has anyone had success with this? I have my own API key, and in the Google admin dashboard, where you can see your usage, the rate limits aren’t being hit at all. But for some reason inside Cursor the Gemini rate limits are hit immediately. Could it be a lack of rate limiting of successive API calls?
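To illustrate what client-side rate limiting of successive calls could look like (a sketch only, not what Cursor actually does), here is one way to space out requests and back off on 429s, again assuming google-generativeai and a GEMINI_API_KEY variable; the exact exception type from google-api-core is my assumption:

```python
# Minimal sketch: space out successive Gemini calls and back off when rate-limited.
# Assumes google-generativeai; ResourceExhausted (HTTP 429) comes from google-api-core,
# which it depends on.
import os
import time

import google.generativeai as genai
from google.api_core.exceptions import ResourceExhausted

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")


def generate_with_backoff(prompt: str, max_retries: int = 5) -> str:
    delay = 2.0
    for _ in range(max_retries):
        try:
            return model.generate_content(prompt).text
        except ResourceExhausted:
            # Rate-limited: wait, then retry with exponential backoff.
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("Still rate-limited after retries")


for i in range(3):
    print(generate_with_backoff(f"Reply with the single word: ok ({i})"))
    time.sleep(1.0)  # Leave a gap between successive requests.
```

Calling the API directly like this, with pauses between requests, is one way to check whether the limits are really being hit on Google’s side or only through Cursor.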

I do not believe it is a Cursor issue; I believe it is a Google issue.

I hit similar rate-limiting issues today when trying to use the paid API with Gemini 2.0 and 2.5 through several LLM wrapper apps.

It’s typical Google: they market an LLM with a huge context window, then turn away a paying customer who wants to use it to analyze large files.

Google appears to prefer offering mediocre service for free rather than offering quality service as a paid product.