Right now, as I build out a project, Claude builds test cases that I run to validate our checkpoints. Claude writes the code, I run it, I paste the errors back into Composer, and we iterate.
These iterations require very little input from me and are mostly cleanup. If we could set parameters for small-scale run-edit-run-edit iteration, it would help get through them quickly, since right now I'm the bottleneck.
Yes!
Our AI use is like a fusion experiment: a few-second runs, because the other shoe hasn't dropped on how to incorporate self-correction.
This should be possible with Agent mode! You can set up a workflow where Agent runs the tests, analyzes the output, and makes corrections automatically. Try using Agent with a prompt like "Run these tests and fix any failures you find, then rerun the tests; keep going until they pass." It can handle multiple iterations without needing input each time.
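For what it's worth, the loop Agent is performing here can be sketched in a few lines. This is a minimal illustrative simulation, not Cursor's actual implementation: `run_tests` and `apply_fix` are hypothetical stand-ins for "shell out to the test runner" and "agent edits the code", and the iteration cap mirrors the idea of setting parameters on how long it may run unattended.

```python
# Minimal sketch of the run-fix-rerun loop that Agent automates.
# run_tests and apply_fix are illustrative stand-ins, not real APIs:
# in practice the agent would invoke your test runner and edit files.

MAX_ITERATIONS = 5  # cap so the loop can't run away unattended

def run_tests(failures):
    """Stand-in: would normally run the suite and collect failures."""
    return failures

def apply_fix(failures):
    """Stand-in: pretend each round of edits fixes one failure."""
    return failures[1:]

failures = ["test_parse", "test_checkpoint"]
for iteration in range(1, MAX_ITERATIONS + 1):
    remaining = run_tests(failures)
    if not remaining:
        print(f"All tests green after {iteration - 1} fix round(s)")
        break
    failures = apply_fix(remaining)
else:
    print("Hit iteration cap; handing back to the human")
```

The cap is the key design choice: it keeps the self-correction loop bounded, so the human only gets pulled back in when the agent genuinely stalls.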