> hope that the new vomit is marginally different
The rest of your comment is basically correct, if unnecessarily negative, but this part isn't. Traditional tools like diff make it very easy to see exactly what has changed. In practice, I rely on git: I stage all of the iteration's changes ("git add .") before telling the AI to fix whatever needs fixing, then run "git diff" to see what it did (or use the equivalent git operations in your IDE if you don't like the command line and unified diffs).
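Roughly, each iteration looks like this (a sketch assuming a plain git checkout on the command line; the AI step is whatever chat or agent tool you happen to use):

    # checkpoint the current state before the AI touches anything
    git add .

    # ...tell the AI to fix whatever needs fixing...

    # the unstaged diff is now exactly what the AI changed
    # (new files it created won't show here until they're added)
    git diff

    # keep the iteration, or throw it away and re-prompt
    git add .       # keep
    git restore .   # discard (git checkout -- . on older git)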
I also find it helpful to make the AI keep iterating until the code builds and passes the unit tests before I take a real look at what it has done. I don't even read the compiler errors or test failure messages; I just paste them into the AI chat. Once the AI has something that appears to work, I look at it. Normally the code is functional and correct, though it's often not structured the way I'd like. Eventually it iterates to something I think is good, though LLMs have a tendency to over-comment, so I usually delete a lot of comments manually during the final review pass.
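The build-and-test loop itself is nothing fancy; the commands below are placeholders, substitute whatever your project actually uses:

    # placeholder build/test commands -- use your project's own
    make && make test

    # if either step fails, paste the full output into the AI chat verbatim
    # and ask it to fix the errors; repeat until everything is green,
    # and only then read the resulting diff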
I actually find this mode of operation surprisingly efficient. Not so much because it gets the code written faster, but because I can get other work done in parallel: I don't have to mentally context switch while the AI is working and compiles and tests are running.
This mode is probably easier for people who are experienced and comfortable with code review. Looking at what the AI has done is remarkably similar to reviewing the output of a competent but inexperienced programmer.