Please support the latest Gemini 1.5 Experimental 0827 ASAP

I don’t care if I have to attach my own API, really I don’t.

Claude 3.5 in Cursor ran me in circles for two days, and I reverted to my last commit over 15 times. I read a solid review of the latest 0827 Gemini and thought, “hell, what could possibly go worse?”

Solved the issue in under 3 hours with a major refactor and it didn’t try to “use client” on a server file even once.

The new Gemini needs to be part of Cursor, because manually copying over git diffs and uploading files to Google Drive is the most necessary tedium I’ve experienced since Claude went batsh*t crazy a few weeks back.

Please make this happen, like, yesterday.

7 Likes

It worked for me yesterday with a Gemini API key.

4 Likes

How does one get an API key? Is Google AI Studio sufficient, or does one need a Google Cloud/Vertex AI account?

Here is the link to create one.
https://aistudio.google.com/app/apikey
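
If you want to sanity-check the key before wiring it into Cursor, here’s a minimal sketch that calls the public generateContent endpoint directly. It assumes Node 18+ (for the built-in fetch), the key exported as GEMINI_API_KEY, and a TypeScript runner such as tsx; treat it as a rough sketch, not anything official.

```ts
// verify-key.ts: quick sanity check for a Google AI Studio key.
// Assumes Node 18+ (built-in fetch) and the key exported as GEMINI_API_KEY.
const MODEL = "gemini-1.5-pro-exp-0827";
const key = process.env.GEMINI_API_KEY;
if (!key) throw new Error("Set GEMINI_API_KEY first");

const url = `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:generateContent?key=${key}`;

const res = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    contents: [{ role: "user", parts: [{ text: "Reply with the single word: pong" }] }],
  }),
});

if (!res.ok) {
  // A 400/403 here usually means the key or model isn't available on your account.
  console.error(`HTTP ${res.status}: ${await res.text()}`);
} else {
  const data = await res.json();
  console.log(data.candidates?.[0]?.content?.parts?.[0]?.text);
}
```

Run it with something like `npx tsx verify-key.ts`; if you get a reply back, the same key should work in Cursor’s Google API key field.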

2 Likes

Holy ****, where is the setting to add my own model in Cursor? I was rage-clicking around the UI trying to find it.

1 Like

Never mind.

I should have tried the big gear that has been a standard settings icon since 1832.

2 Likes

Thanks! I’m going to try it tomorrow. I’m wondering if it’s possible to use it in the long context chat? Usually appending “-200k” or something similar to the model name makes it available there. I wish there were no restrictions on the number of tokens; the Cursor team has long been asked to let us choose arbitrary token limits when using our own API key, but so far that’s been ignored. Hopefully at least 200k works in the long context chat. Or perhaps it’s possible to append any value, like 500k or 2000k? If so, I’ll take my words back. I’m going to test this tomorrow.

1 Like

Yeah, it’s working for me in long context chat, but I don’t know what the maximum token count is. Gemini Flash’s maximum is 500k tokens, so maybe it’s the same for Gemini Pro.

3 Likes

I’m glad this post is here! I tried the latest version of Gemini 1.5 Pro yesterday in AI Studio and on Vertex AI, and I think they might have something there! If I understand correctly, every chat turn adds tokens to the overall conversation. I haven’t gotten anywhere near 2 million tokens, but just building up to around 30,000 to 40,000, it seemed to get better and better with every prompt. I’m excited to build it up toward 2 million tokens and see whether it can work with that and produce new, original code for the entire project. The speed is insanely fast as well. I concur that it would be amazing to have the 2-million-token chat in Cursor. In AI Studio there is an option to raise the token limit up to, it seems, the full 2 million, while Vertex AI seems to allow only 8,192 tokens of output at a time.
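
If you want to watch that number grow, the API exposes a countTokens method you can point at the whole chat history. A rough sketch, assuming Node 18+ and the key in GEMINI_API_KEY; the history array here is just a made-up example.

```ts
// count-tokens.ts: how many tokens a chat history uses so far.
// Assumes Node 18+ (built-in fetch) and GEMINI_API_KEY in the environment.
const MODEL = "gemini-1.5-pro-exp-0827";
const key = process.env.GEMINI_API_KEY!;

// Hypothetical history; in practice, pass every turn sent so far.
const history = [
  { role: "user", parts: [{ text: "Refactor src/app/page.tsx to use server actions." }] },
  { role: "model", parts: [{ text: "Sure, here is the refactored file..." }] },
];

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}:countTokens?key=${key}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: history }),
  },
);

const { totalTokens } = await res.json();
console.log(`Conversation so far: ${totalTokens} tokens of the 2M window`);
```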

4 Likes

Looks like Google might finally be pulling out the big AI guns with fine-tuned Gemini, Gems (personally instructed Geminis), agent armies, and the whole bit. I’m too tired now, heading to bed! Looks like we have a fun weekend ahead of us!

3 Likes

EDIT: Make sure you create your API key after creating your paid account. I was using the same key I generated weeks ago. “Turning it off and on again” still works.

“The model gemini-1.5-pro-exp-0827 does not work with your current plan or API key”

I have a paid plan with like $300 in credits. What haven’t I turned on?
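
For anyone hitting the same error: one way to check whether a given key can even see the experimental model is to list the models it has access to. A quick sketch, assuming Node 18+ and the key in GEMINI_API_KEY; if gemini-1.5-pro-exp-0827 doesn’t show up in the output, regenerating the key (as in the edit above) is probably the fix.

```ts
// list-models.ts: check which Gemini models a given API key can access.
// Assumes Node 18+ (built-in fetch) and GEMINI_API_KEY in the environment.
const key = process.env.GEMINI_API_KEY!;

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models?key=${key}`,
);
const { models } = await res.json();

// Names come back in the form "models/gemini-1.5-pro-exp-0827".
for (const m of models ?? []) {
  console.log(m.name);
}
```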

1 Like

Immediate report: oh my god this is so targeted and efficient.

Claude "Oh that onClick prop isn’t passing right? Let’s redefine three modules and make this one “use client”

Gemini 0827: “Hey dummy, here’s the one line of code you forgot to define properly.”

Unfortunately, even paid accounts appear to be rate limited. It’s absolutely painful going back to Gemini Flash and Sonnet.

2 Likes

When I tried to add the Gemini model and use it in the chat (both normal and long), I got this error:
[screenshot of the error]

I added gemini-1.5-pro-exp-0827 and gemini-1.5-pro-exp-0827-500k to the models in Cursor and enabled my Google API key.

Does anyone know what the problem could be?


Edit: I’ve just double-checked, and my API key definitely works outside Cursor.

Could someone from the Cursor team please check this? We really want to use this Google API.

Also, “Verify” doesn’t return any errors when I use it, so I assume it’s testing the key correctly. It’s only the chat that tries to use OpenRouter instead.


Edit:
This was the solution: I had an OpenRouter URL set as the OpenAI base URL, which is why the chat was routing there.


Now I’m wondering what context length it has. gemini-1.5-pro-exp-0827-500k doesn’t work in Long Context Chat, but gemini-1.5-pro-exp-0827 does.

The context is 2 million in AI Studio, but the API may be different. For example, there’s no rate limiter in the studio, but the API cuts me off after 1-2 hours of usage in Cursor.
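
One way to answer the context-length question without guessing: the API’s models.get endpoint reports the token limits it will actually honor for a given model. A small sketch, assuming Node 18+ and the key in GEMINI_API_KEY; whatever Cursor’s “-500k” suffix maps to internally is a separate question.

```ts
// model-limits.ts: ask the API what the model's real token limits are.
// Assumes Node 18+ (built-in fetch) and GEMINI_API_KEY in the environment.
const MODEL = "gemini-1.5-pro-exp-0827";
const key = process.env.GEMINI_API_KEY!;

const res = await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/${MODEL}?key=${key}`,
);
const model = await res.json();

// inputTokenLimit is the context window; outputTokenLimit caps a single reply.
console.log(`input limit:  ${model.inputTokenLimit}`);
console.log(`output limit: ${model.outputTokenLimit}`);
```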

I see. Were you able to determine, by any chance, whether it allows a huge number of tokens, like 200k, during that time?

Not specifically, though after several days I do have some feedback:

  1. Claude is still the best at coding new features for Next 13+, if you are using React. I find it curious that Gemini consistently codes for an outdated version of React. I wonder if this extends to other languages.

  2. Gemini is vastly better at complex refactors, which isn’t surprising given its context window. It’s also far better at thinking through application structure on a grander scale than “I need a new view attached to this section of the app”.

  3. Gemini and Claude are about equally likely to get stuck in a logical loop and give me code that doesn’t actually change anything. Claude is slightly more likely to loop back and forth between mistakes if its context gets poisoned.

  4. Claude almost casually deletes key features in order to “fix” a bug. It’s also vastly more likely to overcomplicate a refactor.

  5. ChatGPT is the least likely to get caught in a logical loop, which makes it useful for breaking out of a funk when Claude and Gemini can’t find the root of an issue. It’s also much better at conversing around error logs and thinking through logical puzzles to back up a few steps in a process. Claude and Gemini don’t like to go backwards and ask whether they’re even solving the right problem.

All in all, it’s incredibly useful to have all three available.

4 Likes

Really?

For me it doesn’t show any models and doesn’t accept a single one :frowning:

C’mon, add Gemini 1.5 Pro to the long context models.

3 Likes