Even if we had full-on human-level AGI (which we don't), you'd still need to iterate and correct.
You wouldn't expect to hand a non-trivial programming task to another human without iterating: clarifying requirements and corner cases, making changes after code review, catching bugs with unit and system tests, and so on.
If you hired a graphic artist to create an illustration for an article, a book, or an advertising campaign, you also wouldn't expect the first attempt to be right. You'd iterate and work with the artist until you got something close to what you were hoping for (or maybe something different, but better).
How much iteration and feedback you need with today's "AI" depends on what kind of AI you are talking about (just LLMs, or also things like image generators), what you are using it for, and how skilled you are in using it.
If you are using an LLM to learn about something, then you will have a conversation with it and probably not regard this as "iteration" any more than you would with a human, even though it really is.
If you are using an LLM to write code or find bugs, then a large part of the outcome is going to be how much project-specific context you have provided. If you are just relying on what is baked into the LLM (which is not entire software projects - it's the content of the internet put through a blender and chopped/mixed into training fragments), then all bets are off.
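To make that concrete, here is a minimal sketch of the difference context makes. The file names, the compute_line_totals() bug, and the choice of the OpenAI client and model are all illustrative assumptions, not a prescription; any LLM API works the same way.

```python
from pathlib import Path
from openai import OpenAI  # assumes the openai package; substitute whichever client you use

# Gather the project-specific context: the files the bug actually touches.
# Deciding which files matter is a judgment call the LLM cannot make for you.
relevant_files = ["billing/invoice.py", "billing/tests/test_invoice.py"]  # hypothetical paths
context = "\n\n".join(
    f"# File: {path}\n{Path(path).read_text()}" for path in relevant_files
)

prompt = (
    "The following files are from my project.\n\n"
    f"{context}\n\n"
    "Invoices with a 100% discount raise a ZeroDivisionError in "
    "compute_line_totals(). Find the bug and propose a fix."
)

# Without the file contents above, the model can only guess from its training
# data; with them, it is reasoning about your actual code.
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # model name is illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The point is not the particular client or model; it is that everything inside that context string is information the model does not otherwise have, and leaving it out is what turns "find my bug" into guesswork.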