In my initial tests with Gemini 2.0 Flash, I’ve found it to be more capable than Claude at solving a straightforward-to-explain (yet complicated-to-execute) task. I’m disappointed that it doesn’t work in Agent mode, though. I use it extensively in a few custom agents that I have developed for work, and it does great with function and tool calling and with complying with prompt instructions. I expect it would work well in the agent tool.
Is the limitation to Anthropic/GPT-4o in the agent driven by performance, by the configuration of schemas and prompts, or by cost?
I would prefer a warning that the agent “may” not work with a given model, rather than being prevented from trying it to find out.
Unfortunately, the Cursor team doesn’t seem to pay enough attention to Gemini’s models, which I would say is a mistake. In some AI Studio tests on my frontend tasks, Gemini’s model is good enough, but it seems to have some trouble cooperating with Agent.
So, to your question: “why can’t you use it? Gemini 2.0 Flash AND Pro exp are available in Cursor? It works perfectly in Composer; I never once had it refuse to edit a file.”
You’ve got the answer in the post title and in the messages above yours.
First of all, you should be careful how you write. Be respectful.
Disrespectful behavior is primitive.
Kevin wrote, “Of course not. I can’t use it either. I mean I test this model in Google AI Studio,” so he explicitly says he tests it in Google AI Studio. Why test it in AI Studio when you can use it in Composer?
I never mentioned the term “Agent” in connection with AI Studio. What I said was that I couldn’t use Gemini with the Agent feature in Cursor; the model itself works well, and Cursor should consider supporting it in Agent.
We’re currently working on DeepSeek v3 for Agent mode, as it offers great performance for the price point. That said, I understand the desire for Gemini in Agent, and I’ll make sure to pass this feedback to the team.
For now, Gemini works great in normal Composer mode, but Agent requires specific model capabilities that we need to carefully test and implement.
You have done a great job, and I am also looking forward to support for DeepSeek R1. Anything that can provide better performance and pricing is worth looking forward to.
Gemini works great! I like to switch between models (from Sonnet or o3 to another) when one gets stuck in a poor implementation approach. The issue is that I have to use Composer’s normal mode; it would be great to be able to use Agent mode.
Thanks for the response. I don’t know what your code looks like, but Gemini works via the OpenAI library now. It’s still in beta, but it should allow a basically drop-in replacement of an OpenAI model. I can appreciate that Gemini doesn’t respond to prompting in quite the same way that OpenAI does, though. I’ve found it to make far fewer mistakes when “copying” something, but it sometimes follows unusual reasoning paths. Thanks for all the work you guys do; I’m excited to see where the product goes. I think we are all just excited for the next leap forward, given that Sonnet is now getting old by AI standards. Shame o3 has been such a letdown.
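To make the “drop-in replacement” point concrete, here is a minimal stdlib-only sketch of how Gemini’s OpenAI-compatible beta endpoint accepts an unchanged OpenAI-style `chat/completions` payload. The model name `gemini-2.0-flash`, the `GEMINI_API_KEY` environment variable, and the helper `build_chat_request` are illustrative assumptions, not Cursor internals; the base URL is Google’s documented OpenAI-compatibility endpoint. The network call is only attempted when a real key is set.

```python
import json
import os
import urllib.request

# Google's OpenAI-compatible beta endpoint for Gemini models: existing
# OpenAI-style chat-completion payloads work against it unchanged.
BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai"


def build_chat_request(model, messages, api_key):
    """Build an OpenAI-style chat/completions request aimed at Gemini.

    The payload shape (model + messages list) is identical to what an
    OpenAI client would send, which is why the switch is near drop-in.
    """
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )


if __name__ == "__main__":
    api_key = os.environ.get("GEMINI_API_KEY")  # hypothetical env var name
    req = build_chat_request(
        "gemini-2.0-flash",
        [{"role": "user", "content": "Say hello in one word."}],
        api_key or "missing-key",
    )
    if api_key:  # only hit the network when a real key is available
        with urllib.request.urlopen(req) as resp:
            body = json.load(resp)
            print(body["choices"][0]["message"]["content"])
```

With the official `openai` Python package, the same idea is just passing `base_url=BASE_URL` and a Gemini API key to the client constructor, which is what makes existing OpenAI-based agent code largely portable.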