I have been using Gemini 2.5 Pro 05-06 for an entire day with good success; performance matched Claude 3.7, just faster and cheaper (also chattier). Then, last night, it suddenly insisted it couldn’t touch the back end, nudging me toward front-end fixes instead. When I pressed, it claimed back-end edits were off-limits and only grudgingly complied after I named the exact back-end file. Opened a fresh chat this morning and hit the same glitch. Anyone else run into this?
wasn’t sure if i was the only one. went from extremely good yesterday to hot trash today - unbelievably hot trash.
by this i mean it can barely maintain coherence while reading a handful of small files.
same here
I felt it especially strongly today. In my Node.js project, Gemini actually assumed it was a Python project and gave me a Python answer.
My experience yesterday was not good. I was working on creating a frontend UI for my app, and I gave Gemini explicit instructions on UI design and functionality. The one key requirement was that the UI and content should dynamically scale based on the resolution, aspect ratio, window resizing, etc. (basically the one thing you’d expect of anything made in the last 15+ years), and Gemini refused to do it.
I was even giving it screenshots of the issue, telling it what the problem was and what needed fixing; it would swear it had fixed it, yet the result was exactly the same. I even gave it examples of existing UI to show what I wanted, and it just couldn’t understand.
Used Sonnet 3.7 Thinking and it figured out the solution and suggested fixes to the problem immediately.
It was so obvious to me today that I told it I was putting it (the model) on my ban list. No kidding. That might also have to do with a Cursor bug that is truncating the documents I send to the AI models. Wasted the whole day. Never happened before.
I wonder if this is due to Google or Cursor…
Google. Their server compute allocation keeps crashing out; no idea why, but they seem to have times when the models go super bad. It happens both in Cursor and Google AI Studio. When it’s great, it’s great, but as soon as they hit resource or performance issues for whatever reason, it goes into a dumb-dumb mode where it just sort of craps out for no reason.
You have to re-prompt it or start a new chat, then often wait until it decides, “hey, I’m a competent model again.”
I see. Makes sense.
Maybe caused by AlphaEvolve, who knows
Yes, I encountered the following problems when using Gemini 2.5 Pro 05-06:
1. Sometimes it cannot use the edit_file tool to edit files.
2. Sometimes the language suddenly switches from English to Arabic during analysis.
But in most cases it works well and is faster. If you don’t often need multi-file operations, I think it is better than Claude 3.7 Thinking.
Surprisingly, Claude 3.7 Thinking has also been hallucinating unwanted tool calls. Gemini went from the best model to barely usable.
Yeah, it has become absurd how bad it is now. I’ve added it to my post here, which consolidates unaddressed complaints from the forums, to see if we can get some improvement on various issues:
Feel free to vote, reply, or contribute in any way; it’s appreciated.
I got one request through to Gemini Jules earlier today, and it blew me away to such a degree that I’m convinced this is my last month with Cursor. I’ll pay 5x as much for a single-model coding assistant with an ecosystem around it that accomplishes my goals. For roughly 6 weeks now, Cursor has had constant regressions in performance, crazy hallucinations, and intentionally vague pricing changes pushed on us while they figure out how to turn a profit. Tired of paying to beta test a product that changes day by day.
Agree with your diagnosis of Cursor. The recent price change in v0.50 seems poorly considered and shady; the rush to monetization appears aggressive and desperate.