Gemini 2.5 performance: Great yesterday, terrible today

@asherhunt

Thanks for the detailed feedback here!
I think this cleanly splits into two points:

The model performed better, and had better context, before Cursor officially added support

This feedback is valuable. While we have tried to improve the prompting for Gemini so it works better within the boundaries Cursor provides (e.g. how to call a tool, how to output a code block), there is a risk this has an adverse effect.

Once all the hard bugs are ironed out, we will run a more fine-grained evaluation of how Gemini 2.5 performs generally, including the areas you feel have worsened, so we can optimise the prompt for maximum performance there.

A “transparent” mode may be useful, but it runs the risk of Gemini not functioning in the way Cursor expects, which would end up as a worse experience. There is a middle ground here for sure, but we will prioritise stability over top-end performance first.

Why am I paying for a model that is free?

This is a good question, but the answer is that the model is not free, it’s just someone else footing the bill. When you use your own API key (which I would highly recommend while it is free!), Google absorbs the cost of running the model to let individual users try it.

For Cursor, we have to pay Google their usual rates, just as we do for any other model, so we unfortunately have to pass that cost on to users.

As I mentioned, I would recommend using your own API keys while Google offers this, and with a Pro subscription, everything should work as expected here. Max mode is also free when using an API key, and uses all the context available!
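If you go the API-key route, it can be worth sanity-checking your key directly against Google's API before adding it to Cursor. Here is a minimal Python sketch using the `google-generativeai` package; the model identifier below is an assumption, so check Google's docs for the current Gemini 2.5 name:

```python
# Minimal sketch (not Cursor's internals): verify your own Gemini API key
# works by making one direct call to Google's Generative Language API.
import google.generativeai as genai

genai.configure(api_key="YOUR_GOOGLE_API_KEY")  # key from Google AI Studio

# Assumed model identifier; substitute the current Gemini 2.5 model name.
model = genai.GenerativeModel("gemini-2.5-pro-exp-03-25")
response = model.generate_content("Reply with OK if you can read this.")
print(response.text)  # any short reply confirms the key and model work
```

If this prints a response, the same key should work when entered into Cursor's model settings.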

Please do share bad Gemini experiences with us, as we are really working to improve this experience right now. It is proving to be a very capable model, and we want to make sure Cursor is the best client to use this model with!

Thanks for taking the time to write this up!
