Composer 1.5 seems to go above and beyond in a bad way

I explicitly asked composer 1.5 to add a behavior to two out of three branches of code, but it gladly added the behavior to all three, even when it didn’t make sense to add it to the third.
I reverted the code, switched to GPT 5.3 Codex, copy-pasted my exact prompt, and it coded it exactly and completely the way I intended.

I'll admit I know how to talk to ChatGPT better than other models, so there's some hidden bias in how I structure my questions and prompts, but it still seems like a logic issue if it's messing up like that.

Does anyone have any advice on how to write a better prompt?
I can post an example if anyone is interested.


Interesting—what’s your main use case?
I’m currently working with both Codex and Composer at the same time, and honestly, I don’t notice much difference in raw intelligence between them.

For my workflow, Composer feels noticeably more productive. Its generation speed is blazing fast, which makes token output and code appear almost instantly—huge boost when you’re iterating quickly.

For better prompting in my setup, I always structure it like this:

  • User Story — the goal/outcome I want to achieve

  • Success / Edge Cases — what good looks like + potential gotchas
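As a concrete illustration of that structure (the feature, file name, and details here are made up, not from my actual project), a prompt might look like:

```text
User Story:
As a user, I want failed uploads to retry automatically,
so that transient network errors don't lose my files.

Success / Edge Cases:
- Success: a failed upload retries up to 3 times with backoff
- Edge: do not retry on 4xx errors (the request itself is bad)
- Edge: after the final retry, surface a clear error to the user

Scope: only modify the upload service; do not touch unrelated code.
```

Spelling out success and edge cases up front seems to keep the model from inventing behavior you didn't ask for.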

Adding a Superpower Skill also enhances the models.

I second this. 1.5 is so much worse that I am going back to 1.0 (given you guys reduced the API usage on the non-Composer model). It almost behaves like GPT-4o: always trying to patch problematic outcomes instead of actually understanding the logic.


the “touches things you didn’t ask it to” problem is real. i’ve hit this with composer 1.5 too, especially when multiple branches or files are involved.

one thing that helped me: be explicit about scope boundaries in the prompt. instead of “add X behavior to branches A and B” try “add X behavior to branches A and B only. do not modify branch C.” sounds redundant but the negative constraint seems to help.

also, if you’re working with files that have related but distinct logic, @ referencing only the specific files you want changed (instead of letting it discover the whole codebase) reduces the chance of it “helpfully” editing things nearby.

curious if anyone else has noticed this being worse with 1.5 specifically vs other models, or if it’s more of a general agent-mode thing. i’ve seen similar over-eager behavior with claude too when context gets large.


In my use case, e.g. explaining how something in a project works or why something does or doesn't happen, Composer 1.5 is very bad.
It just reads the code and repeats it back to me.
I mainly use Cursor for research in my codebase and online research about certain behaviors. Opus/Sonnet/Codex summarize web links I provide and explain them in relation to my codebase. Composer 1.5 just takes text from the websites and repeats it back to me. Great, but I can read it myself. Very unhappy with it.


I have to agree with some of these comments. Composer 1.5 often will not comply with mandatory rules or follow the rules of a model. Even simple tasks will suddenly go off the rails. Given an instruction as simple as "pull the latest version of the code from GitHub," it does the pull and then suddenly starts making code changes without asking. I've defined very clear rules that Opus has no problem following every time; Composer 1.5 seems to ignore them consistently. So to save tokens, I've tried to write plans with Composer and then have Opus audit the plan against the rules. 100% of the time, Composer's plan is filled with violations of basic MVP principles and other defined mandatory rules. I think its logic for following defined rules is broken. It's a rebel. The Auto option should be limited to models that are trustworthy.