It is not at all deterministic. That's because all of the tools I am using are under constant development. If everything in my environment stopped changing I suspect it would be deterministic, but that's never going to happen.
Hallucination is somewhat under your control. There are three main sources:

1) You exceed the context window of the model. For that one you just need to learn the limits of your model and not exceed them. Don't ask it to do a refactor that will touch two million lines of code; two million lines won't fit into the context window, so it will hallucinate substitutions for whatever doesn't fit. (A rough way to sanity-check this is sketched below.)

2) It will hallucinate when your prompt is not specific enough and it fills in the blanks on its own. You can control that by writing very specific prompts.

3) Sometimes it just goes off the rails. For that one you have to closely monitor the terminal, track what the agent is doing, and if it wanders into the weeds, stop it and explain how to get back on track. It's similar to what you have to do with junior programmers.
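To make point 1 concrete, here is a rough sketch (my own illustration, not part of any particular tool) of estimating whether a set of files would even fit in a model's context window before asking an agent to refactor them. It assumes the Python `tiktoken` package is installed; the 200k-token budget, the `src/` directory, and the `*.py` glob are placeholders you would adjust for your own model and codebase.

```python
import pathlib
import tiktoken

CONTEXT_BUDGET = 200_000  # hypothetical context window, in tokens -- check your model's actual limit

def estimate_tokens(paths, encoding_name="cl100k_base"):
    """Return the total token count of the given source files."""
    enc = tiktoken.get_encoding(encoding_name)
    total = 0
    for p in paths:
        text = pathlib.Path(p).read_text(errors="ignore")
        total += len(enc.encode(text))
    return total

# Placeholder scope: every Python file under src/
files = list(pathlib.Path("src").rglob("*.py"))
tokens = estimate_tokens(files)

if tokens > CONTEXT_BUDGET:
    print(f"{tokens:,} tokens won't fit in a {CONTEXT_BUDGET:,}-token window; "
          "split the refactor into smaller pieces.")
else:
    print(f"{tokens:,} tokens -- small enough to attempt in one pass.")
```

This is only an estimate (prompts, system instructions, and the agent's own scratch work all consume tokens too), but if the raw source already blows past the budget, you know up front that the model will be working blind on part of the code.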
I would say I am not ending up with any hallucinations (that I am aware of) in the code I am generating, but that's because I am very closely watching everything it is doing and I test and review everything manually before committing. If you're a vibe coder who gives a prompt and then goes off to lunch while it works, you're going to have problems and end up throwing away massive amounts of generated code. The second you stop reviewing what it is doing, you are doomed, because you will accumulate piles of code you don't understand and can't fix.