Currently, AI-powered tools can help junior developers with code explanations and completions, but they fall short on more complex problems.
As a senior developer, I expect much more from AI-powered tools. Most real software has a complex codebase and a test suite that ensures it is production-ready.
My dev workflow is to write my tests first. These tests describe the behavior of my application and define its features. A killer feature for me would be the ability to write a test, and then have the IDE/AI tool run my test and generate the files and code needed to make it pass. Inside that TDD feedback loop, the AI would evaluate its generated code and improve it until the test passes.
I went into more detail about it in one of the replies below.
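To make the loop concrete, here's a rough sketch of what I have in mind, assuming pytest and a hypothetical `generate_implementation()` helper that wraps whatever model or agent you use. None of this is from an existing tool; it only illustrates the mechanics of the loop.

```python
import subprocess
from pathlib import Path


def run_test(test_file: str) -> tuple[bool, str]:
    """Run one test file with pytest; return (passed, captured output)."""
    result = subprocess.run(
        ["pytest", test_file, "-x", "--tb=short"],
        capture_output=True, text=True,
    )
    return result.returncode == 0, result.stdout + result.stderr


def generate_implementation(test_source: str, failure_output: str) -> str:
    """Hypothetical agent call: given the test and the latest failure output,
    return new source code for the module under test."""
    raise NotImplementedError("wire this to your model/agent of choice")


def tdd_loop(test_file: str, impl_file: str, max_attempts: int = 5) -> bool:
    test_source = Path(test_file).read_text()
    for _ in range(max_attempts):
        passed, output = run_test(test_file)
        if passed:
            return True  # green: stop iterating
        # Feed the failure output back to the agent and rewrite the module.
        Path(impl_file).write_text(generate_implementation(test_source, output))
    return False
```

The point is that the test output is fed back to the model automatically on every attempt, instead of me copying it by hand.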
I really like your suggestions about integrating it better into the workflow and making the iterations more automated. Instead of feeling like you're repeating half a dozen steps and context-switching between them (and manually pasting test results back for the model to read), it would be great to feel like you're repeating one overarching step while the automation takes care of the mechanics. It would be a much better way to stay in 'flow'.
Nice! I didn't know about this project, but it's not exactly what I'm looking for…
Here's what they say in the docs: "This project won't install modules, read and write multiple files, or do anything else that is highly likely to cause havoc when it inevitably fails.
It’s a micro agent. It’s small, focused, and does one thing as well as possible: write a test, then produce code that passes that test.”
I want to write my test, and then the AI agent should be able to create or modify multiple files inside my project to make the test pass.
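To be clear about what I mean by "multiple files": the agent would propose a set of path-to-content edits and the tool would apply them, ideally refusing anything outside the project root so it can't cause the havoc their docs warn about. A rough sketch, where the edit format and the guard are my own assumptions rather than any existing tool's behavior:

```python
from pathlib import Path


def apply_edits(project_root: str, edits: dict[str, str]) -> list[Path]:
    """Apply agent-proposed edits given as {relative_path: new_content},
    refusing any path that would escape the project root."""
    root = Path(project_root).resolve()
    written = []
    for rel_path, content in edits.items():
        target = (root / rel_path).resolve()
        if not target.is_relative_to(root):  # Path.is_relative_to: Python 3.9+
            raise ValueError(f"edit outside project root: {rel_path}")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)
        written.append(target)
    return written
```

After each round of edits, the tool would rerun the test and feed the result back to the agent, exactly like the loop I sketched above.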