Re:Safety from What? (Score 1)
... basically it seems to focus on 'transparency,' which of course is good for any government organization, but why do we need it for private AI models?
Transparency around the training process and sources. Transparency around guardrails, the history of successes and failures thereof, bad outcomes that might otherwise be swept under the rug, and specific details that allow comprehensive testing by third parties. Disclosure of all 'hallucinations' so that independent parties can look for repeated patterns of misbehaviour, etc. I think all of these, and probably more, would be useful from a safety point of view.
Also, unfortunately, the way these models work doesn't allow for much transparency; to vastly oversimplify, they're basically black boxes doing statistical trial and error.
Although they're "private AI models", they can have larger public consequences. So even though they're black boxes, they should still undergo testing and qualification processes equivalent to those for foods, drugs, automobiles, line-powered electrical devices, child seats in cars, etc. We didn't have to know all the specific details and mechanisms of how various drugs and vaccines work and affect the body in order to test them and to give or withhold approval based on the results. AFAICT, the same is true of LLMs.