We're taking away people's agency by implying that an LLM will make them do bad things. People need to understand that LLM hallucinations are an issue; if they don't understand that, it doesn't matter which model they use, and a workplace shouldn't provide LLMs if its employees don't understand this. I constantly see stories about lawyers submitting fake LLM-generated court citations; those lawyers should be punished, because they're still accountable for their own work.

If you're talking about how much a model hallucinates, that says something important about the model's quality. But if you're "red teaming" it by getting it to say "naughty" things, that's not relevant; it's just a parlor trick.