Re:Fear is the appropriate response.
The hallucination problem _cannot_ be fixed. It is a fundamental part of the mathematical model.
I think it can. I've been working on getting an LLM (Claude 3.7 Sonnet) to add missing type annotations to Python code. When I naively ask it "please add types", then, like you said, it has about a 60% success rate and a 40% hallucination rate, as measured by "would an expert human have come up with the same type annotations, and did they pass the typechecker".
But when I use the LLM much more carefully, micromanaging which sub-tasks it does, it has a 70% success rate and a 30% rate of declining because it wasn't confident enough to come up with an answer. Effectively there were no more hallucinations. (I got these numbers by spot-checking 200 cases.)
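Roughly, the verify-or-decline loop looks like this. This is a minimal sketch, not the real pipeline: ask_llm is just a placeholder for whatever LLM client you use, and the real thing breaks the work into much smaller sub-tasks.

import subprocess
import tempfile
from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM client you use."""
    raise NotImplementedError

def annotate_function(source: str) -> str | None:
    """Return an annotated version of one function, or None if we decline."""
    reply = ask_llm(
        "Add type annotations to this Python function. "
        "If you are not confident, reply with exactly DECLINE.\n\n" + source
    )
    if reply.strip() == "DECLINE":
        return None  # the model declined rather than guessing
    with tempfile.TemporaryDirectory() as tmp:
        candidate = Path(tmp) / "candidate.py"
        candidate.write_text(reply)
        # mypy exits non-zero when the annotations don't typecheck
        check = subprocess.run(["mypy", str(candidate)],
                               capture_output=True, text=True)
    return reply if check.returncode == 0 else None

The point is that a candidate answer only survives if it gets past an external check, and "decline" is always an acceptable output, so a wrong guess gets filtered out instead of shipped as a hallucination.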
So I think hallucination can be solved for some tasks, with the right kind of task-specific micromanagement and feedback loops.