Composer getting dumb on slow requests?

While I’m still within my request threshold (e.g., Pro, 500 requests/month), things work fantastically. Composer takes my files, docs, codebase, notepads, basically everything I throw at it, and gets the job done without breaking a sweat.

But recently I ran out of monthly fast requests (a rare occurrence, due to this month’s workload), and I noticed it getting really… dumb. Even with the same or a much smaller amount of context, it still can’t get things right and sometimes breaks stuff.

Simple requests, or ones with just one or two short notepads, still get done as usual.

Do slow requests reduce context, too?

Has anyone felt the same?

I am having the exact same feeling! It feels like a joke.
I just wasted an hour going back and forth with it, trying to simply accept the Supabase invite link and set a password, and got nowhere.

To the point that it started writing MCP commands that seem to be wrong, so it’s not calling the tool and is just sending them to the shell instead.

Then I had an issue with it loading a table from Supabase. I went through multiple iterations on slow requests while it was working, then it broke the table, and after five more iterations it still hadn’t fixed it.

I added my Anthropic API key, and the first set of responses I got was a lot more positive, but then I hit constant “API key rate limit exceeded” errors.

Very frustrating.

I didn’t experience a drop in quality with slow requests.

They are just getting slower and slower.

Well here is another data point.

I enabled PAYG, and in the following two prompts Claude 3.7 fixed the problem that the six prior prompts had gotten nowhere with.

It also started running MCP tools correctly after I enabled PAYG.

Yeah, I was searching for similar posts, and I wonder if the claim that the agents are dumber in slow mode isn’t just bias on our part. If I wait longer for something, my expectations for it get higher, so it’s easier to get frustrated with the model’s inability to accomplish a request.

If you look through model providers’ subreddits, you will see a lot of people complaining about models getting dumber over time after release, so this may be part of the same experience, just in another context.

That being said, I also feel like they get dumber, and I wonder if it’s because the Cursor system prompt instructs the models to be more concise: less verbosity = less thinking = dumber. I would be very surprised if the instructions on Cursor’s behalf included things like “act dumb” or “be less capable”; that seems very unethical. I hope that’s not the case.