What changes if I enable my own API key for OpenAI models?

The #1 advantage is not hitting rate limits. I was chatting with Gemini 1206 and ran into this:

> Hit Google rate limit
> We’ve hit a rate limit with Google. Please try again in a few moments.

I’m trying to understand what’s going on here…

  1. The Cursor IDE was using Gemini for the chat, not its own custom model, right? That’s what the top red box in the screenshot indicates: gemini-exp-1206.
  2. Cursor’s backend proxies my chat prompt to Google and hits an API rate limit. With so many users, that’s totally understandable.
  3. I’m happy to provide my own Google AI Studio API key (see the sketch just after this list). Why would that disable anything else?
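
To make item 3 concrete, here is a minimal sketch of what using my own key would look like: the chat turn goes straight to the Gemini API, authenticated by an AI Studio key, so any rate limit is mine rather than a quota shared across all Cursor users. The key is a placeholder and I’m assuming the publicly documented generativelanguage.googleapis.com REST endpoint; this is not Cursor’s actual code path.

```python
import requests

# Placeholders: your own AI Studio key and the model shown in Cursor's UI.
API_KEY = "YOUR_AI_STUDIO_KEY"
MODEL = "gemini-exp-1206"
URL = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"


def ask_gemini(prompt: str) -> str:
    """Send one chat turn directly to Google, billed against my own quota."""
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    resp = requests.post(URL, params={"key": API_KEY}, json=body, timeout=60)
    resp.raise_for_status()  # a 429 here would be my personal limit, not Cursor's shared one
    return resp.json()["candidates"][0]["content"]["parts"][0]["text"]


print(ask_gemini("Summarise what enabling my own API key changes."))
```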

If the IDE was using gemini-exp-1206 for the chat with Cursor’s backend API key, can’t it use the same model with my own Google API key? Isn’t the magic in the prompt?
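
What I mean by “the magic is in the prompt”: whatever system instructions and editor context Cursor assembles, the request that finally reaches Google is still just a model name plus prompt text, and the API key only decides whose quota it counts against. A rough sketch under that assumption; the system prompt below is made up for illustration, since Cursor’s real prompt isn’t visible to me.

```python
import requests

API_KEY = "YOUR_AI_STUDIO_KEY"  # placeholder: my own AI Studio key
MODEL = "gemini-exp-1206"
URL = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

# Hypothetical stand-in for whatever instructions/context the IDE injects.
SYSTEM_PROMPT = "You are a coding assistant. Answer with reference to the user's open files."

body = {
    "system_instruction": {"parts": [{"text": SYSTEM_PROMPT}]},
    "contents": [
        {"role": "user", "parts": [{"text": "Why did my chat hit a rate limit?"}]},
    ],
}

resp = requests.post(URL, params={"key": API_KEY}, json=body, timeout=60)
resp.raise_for_status()
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```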