Is it just me or has the Composer 1 model gotten dumber lately? I have had to correct almost everything it's done today, but in the past I didn't have that many issues; it usually one-shot most of my requests. Now it's not following my direct instructions (like when I tell it to compare previous commits in the repo, or how to use MCP tools), and it makes a lot more assumptions about how different components in my project work instead of reviewing them before using them like it did before.
I use Composer almost exclusively for execution and never for planning. What problems or issues do you face when using Composer? Bugs related to logic, or something else?
I think Composer isn't good enough for complex tasks, since Composer-1 aims for speed and "good enough".
YES! It seems to forget/ignore instructions and repeats basic mistakes. I'm working with AI so I don't have to constantly check and recheck the work of "C"-level coders. The Composer model has slowed my progress substantially.
I switched to GPT-5.2 this week, and it’s going better so far. It’s helped refactor the sloppy mess Composer made.
It feels less about being “dumber” and more about changes in defaults or context-handling behavior. In my case, being extra explicit (e.g. asking it to inspect before acting or reference specific files/commits) helps, but I agree it used to do that more reliably without needing so much guidance.
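For what it's worth, I've had some luck encoding that "inspect before acting" instruction as a project rule so I don't have to repeat it in every prompt. A minimal sketch, assuming the `.cursor/rules/*.mdc` project-rule format (the file contents and wording here are my own, adjust to your setup):

```markdown
---
description: Always inspect before editing
alwaysApply: true
---

- Before modifying any component, read the relevant source files first and
  summarize how they currently work.
- When asked to compare commits, run the actual git commands (e.g. `git log`,
  `git diff`) instead of assuming what changed.
- Do not invent APIs or component behavior; verify against the codebase.
```

No guarantee it always sticks, but it cut down on the blind assumptions for me.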
Composer-1's results depend on the plan – have you changed the LLM you use for the planning process?
I am having these issues whether I use a plan or not. As Rao mentioned, Composer-1 used to be good at handling context without detailed prompting; now it not only ignores context like Cursor rules, it ignores my explicit instructions too. The quality of code it produces, and the amount of tokens I burn while it tries and retries or edits the same file 4 times in a row with a different solution, is crazy. Not only is Cursor charging more, they are giving less.
I agree Composer-1 has seemed a bit iffy recently and really not that much better than the Auto feature. As usual with these tools it's hard to definitively say or prove.
With the planning stuff I agree - Composer-1 used to do a pretty good job without having to explain things to it a lot.
I am also not much of a plan mode guy - I don't really do anything that complicated, to be honest, and it takes 2 prompts instead of 1. I don't like being plansplained either!
I loved Composer-1 for a bit; you might see me recommending it on some forum threads.
But now I’m through with it. I have no specific examples, but the overall experience has changed while using it daily for months.
It’s less snappy, doesn’t understand context well, and uses less friendly language. It has turned from a fast horse to a donkey-like model like GPT-5 (not 5.2 - 5.2 is great).
It's déjà vu: the same thing happened with Gemini 2.5 Pro and Sonnet 4. They seemed so smart when they came out, but lost their lustre as time went by, got slower and dumber. Composer-1 is facing the same fate.
My guess is that the model precision has been lowered, and the resources allocated to the models have been drastically reduced.
Way dumber. It used to one-shot everything; now it needs a lot of correction. It doesn't follow existing patterns and misses a lot of things it used to catch that other LLMs didn't; now it's worse. It looks like it doesn't explore the codebase as much as before, and makes more assumptions instead of looking for existing patterns. I am losing trust…
This! It instantly starts coding and implementing.