Hey, thanks for the report. This is a known issue with Gemini 3.0 Pro that the team is already actively working on.
In your case, it looks like a more extreme manifestation of the issue, triggered at very large context sizes (around 300k tokens in MAX mode).
Could you share the Request ID for this chat? (Chat context menu → Copy Request ID) This will help the team investigate the problem at extreme context lengths.
The good news is that code generation should still work correctly; the repeated text is mostly visual noise in the output.
As a temporary workaround, starting a new chat should clear the problem.