Over the last couple of weeks, AI codegen has crossed the line from “mostly useful” to “sometimes useful, if you undo 90% of what it generates.” I’m used to asking it to retry or refine its answers, but it has gotten so enthusiastic about making changes far out of scope for the request that I’ve taken to only asking for things that can be implemented in about ten lines of code, so that it’s easy to revert everything else it does.
E.g. just now I asked it to create a Python script for a task. It did, and while the result wasn’t great, it was generally what I wanted. But then it noticed there was a pyproject.toml and decided it would be helpful to modify it, in ways that broke my project (and had no impact on the script it had just written). Then I made some modifications to the code and asked it to pull everything together. It did, and in the process also restored a bunch of unrelated comments I’d edited, making them incorrect again.
I’ve added instructions to my Cursor rules (and similarly for Claude, since I’m seeing the same enthusiastic creation of non-working code that solves a different problem than the one asked about): only make the modifications that are requested, don’t make modifications if there are question marks in my prompt, and so on. These “config” files, for lack of a better term, now appear to be completely ignored.
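For reference, the kind of rules file I mean looks roughly like this. (Cursor reads a `.cursorrules` file at the repo root; Claude Code reads `CLAUDE.md`. The wording below is a sketch of my instructions, not a canonical format.)

```
# Scope constraints
- Only make the modifications explicitly requested in the prompt.
- If the prompt contains question marks, answer the question; do not edit code.
- Do not modify files unrelated to the request (e.g. pyproject.toml, configs,
  existing comments) even if they seem improvable.
- Do not add speculative features, refactors, or "architecture" in
  anticipation of future needs.
```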
It really does feel like we’ve tipped over, and that the ongoing enshittification of AI codegen is only going to get worse. It’s more apparent every day that what it does is amazing if you don’t really care what the results are. And now it not only gives you basically-working code that you can fix, if you can find what you need in the mess it leaves; it also seems to go out of its way to do other work it believes you’ll want in the future, cluttering up your repo and making it harder to actually add that future functionality, because it has “architected” something completely useless but complicated nonetheless.
“You’re absolutely right, I should have only implemented what was asked and not tried to anticipate future needs,” if only it actually learned from these sorts of mis-bulldozering.