Gemini 3.1 Pro Experiences

Yesterday, Google released Gemini 3.1 Pro and it has been quickly made available in Cursor.
Pricing is nearly identical to GPT-5.3 Codex and benchmarks look promising.

For those of you who have tried it out: how did it perform and behave?

1 Like

I’m trying it on a bit of a snowball project that has passed between four or five different models, and it’s just unresponsive for now. Don’t mean to neg, but… that happens to be my first encounter.

2 Likes

So far my experience is that it is pretty slow. I am not looking for positive feedback specifically; it’s more about whether I should waste my tokens trying it out myself, or whether somebody already has.

3 Likes

i ran the same refactor test i used for a model comparison. gemini 3.1 pro finished in about 2 minutes, which is roughly half the time i saw from the older gemini version (~4.5 minutes on the same task). output was clean, followed the .mdc rules, added an error handler middleware but nothing weird. it’s faster but still slower than sonnet or codex on the same job.

1 Like

I am experiencing issues with 3.1 Pro. I can’t connect, just getting timeouts. Other models work fine at the same time:

 Cursor Agent v2026.02.13-41ac335                                                                                      

  test

  --- Connection lost. Retry attempted. ---

  --- Connection lost. Retry attempted. ---

Upd: it started to work, but remains flaky. Sometimes it takes multiple attempts before it outputs anything.

5 Likes

Agree with this, I think the Google API models are really bad:

  1. Can’t call MCP even after pointing it out; Codex is able to do this
  2. Always fails to call the model in the middle of a conversation
2 Likes

Is 3.1 Pro working for anyone? Mine always gives an error during inference.

1 Like

Gemini seems smart, but it’s not reliable when calling tools (at least in the Cursor harness). Anthropic models are rock solid; stick with them. To get real utility from a model in Cursor, it MUST be able to handle tool calling, or it will make choices with poor context and generally won’t behave as expected.
If you have a single-page query and Sonnet is stuck, it’s worth switching to Gemini 3.1 to get a different opinion/suggestion, but other than that, steer clear for now.

Gemini 3.1 saved my life analyzing a complex setup in a tech I am not familiar with (telecom).
…or maybe it was only me asking very detailed and thoughtful questions to find issues where my code didn’t follow industry practices. It delivered a list of 10 issues, and the first one was enough to fix a problem I’d been messing around with for over a month.

However, if I ask it to analyze code without providing extensive context, or to create detailed plans… it looks like Gemini does not search the codebase at all and only uses the files explicitly attached.
So it is VERY lazy and not that useful — certainly not fire-and-forget development.