Gemini 2.5 Pro, failed tool call request ID

Created a brand new chat and asked for a simple feature implementation.

It said it was going to proceed with creating the file; however, I got an error message saying there was a problem connecting to the model provider (AI error).

It presented the option to resume, which I pressed, but it resulted in the same failure at the point where the AI said it was about to create the file.

Request ID: 08c73e2b-48c2-4802-a54a-bda62307a013

4 Likes

This has been happening to me all day with Gemini 2.5 Pro.

2 Likes

same

Hey, I don’t see this issue on my end. It was also noted earlier that some MCP servers might cause this error. Try disabling them, and if it works, share your list of servers with me so we can debug it.

Same for me. Unusable at this point: all day long, and now with every response.

It was also noted earlier that some MCP servers might cause this error. Try disabling them, and if it works, share your list of servers with me so we can debug it.

I have no MCP servers or extra features enabled. I also tried it on multiple devices and networks, same issue. It works through the thinking phase, gets to around the implementation, and then fails. The message is:

We’re having trouble connecting to the model provider. This might be temporary - please try again in a moment.
(Request ID: (request id is here - string))

I’d be happy to hop on a call or give any info needed to replicate if we can get this taken care of @deanrie!

All of my MCP servers are already disabled, and I’m still getting this error constantly.

1 Like

Same for me too, not a single working request with 2.5 Pro.

same

Same here, I’m going insane!

1 Like

Yup, I’ve been dealing with this since yesterday, and it’s getting very annoying. I’m spending a ton using these MAX mode agents.

Between having to babysit the generation, clicking “resume” over and over, and dealing with it stopping mid-generation, this is not acceptable if I am going to be charged for these expensive models. They should work without these issues.

Not happy with this experience. I don’t mind paying more, but not if I can’t get stability and consistency from the product.


These MAX mode or “frontier” models are NOT living up to expectations…

Hi, sorry about the difficulty with Gemini 2.5 Pro here.
Just want to clear a couple of things up:

1. Model Error w/ Resume button

In this case, if the model has errored and we offer you a resume button, you will not be charged for the request unless you choose to resume, which should let you continue where you left off. This situation occurs when, for whatever reason, there is an error in the generation of the response, but we can detect it and re-trigger it for you without charging you twice.
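Roughly speaking, and purely as an illustration rather than our actual code, the resume flow looks something like the sketch below; every name in it (ModelReply, resumeToken, sendRequest, askUserToResume) is hypothetical:

```typescript
// Illustrative sketch only, not the real implementation. All names are hypothetical.
interface ModelReply {
  text: string;
  error?: { message: string; requestId: string };
  resumeToken?: string; // present when the provider errored mid-generation
}

async function runWithResume(
  sendRequest: (resumeToken?: string) => Promise<ModelReply>,
  askUserToResume: (reply: ModelReply) => Promise<boolean>
): Promise<ModelReply> {
  let reply = await sendRequest();
  // A failed attempt that offers a resume option is not billed; only when the
  // user chooses to resume is the generation re-triggered (and charged once).
  while (reply.error && reply.resumeToken) {
    if (!(await askUserToResume(reply))) break;
    reply = await sendRequest(reply.resumeToken); // continue where it left off
  }
  return reply;
}
```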

2. Model “stops” cleanly without any visible error

This is a Gemini-specific problem that we have been pushing Google to fix for a few weeks now, but they still seem to be having issues. It happens when Google tells us (via the response their API sends) that the model has finished generating, but does not tell us that any error occurred during the process, even though it’s clear when reading the response that the model stopped midway.

This is unfortunately almost impossible for us to detect, as Google tells us it was a successful generation, and the only way to know is for a human to read the conversation.
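As a rough sketch of why detection is so hard: the response shape below mirrors the public Gemini API’s candidates[].finishReason and content.parts[].text fields, but the heuristic itself is hypothetical and deliberately crude. When the bug occurs, finishReason reads as a normal “STOP”, so only the text itself hints that something went wrong.

```typescript
// Sketch only: the heuristic below is hypothetical, not our detection code.
interface GeminiCandidate {
  finishReason?: string; // "STOP", "MAX_TOKENS", "SAFETY", ...
  content?: { parts?: { text?: string }[] };
}

function looksTruncated(candidate: GeminiCandidate): boolean {
  // When the bug occurs, the API reports finishReason "STOP" (a normal,
  // successful completion), so nothing here flags an error.
  if (candidate.finishReason && candidate.finishReason !== "STOP") return true;
  const text =
    candidate.content?.parts?.map((p) => p.text ?? "").join("") ?? "";
  // The only remaining signals live in the text itself, e.g. a reply that
  // stops mid-sentence or leaves brackets unbalanced, which a human spots
  // instantly but code judges badly.
  const opens = (text.match(/[{([]/g) ?? []).length;
  const closes = (text.match(/[})\]]/g) ?? []).length;
  const endsCleanly = /[.!?)\]}]\s*$/.test(text.trim());
  return opens !== closes || !endsCleanly;
}
```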

We believe a fix should be available very soon (this week or next I would guess), but if you are seeing this regularly, I’d recommend switching to another model.

You will know when this specific issue occurs because the usual icons at the end of the message will appear with no error shown, specifically when using Gemini 2.5 Pro, as shown in @KamilTheDev’s screenshot.

2 Likes

It also seems the issue is not isolated to editing files; it has also stopped abruptly right after searching files.

Then prompt it deterministically to infer it or don’t offer the product. Are you insane?

We believe a fix should be available very soon (this week or next I would guess), but if you are seeing this regularly, I’d recommend switching to another model.

If this variability is acceptable to you, you need to remove the paid 2.5 MAX model from your UI in its current state. Offering it implies it is production ready, but your response indicates it is not fit to be offered. This is your responsibility.

Many users do not see issues, and therefore would want to keep MAX as an option for Gemini 2.5. If you are seeing this frequently, for now I’d recommend avoiding Gemini until this is fully resolved.

Same; ever since the model was first released, it has been broken and doesn’t respond.