Policies should specify "whats" not "hows"
Just like project requirements, policies should specify "whats" and not "hows": what outcomes are desirable and what are not. And, as with project requirements, the views of all stakeholders should be reflected in the policies.
Regulatory frameworks that specify the "hows" are more likely to result in meaningless compliance, as game-playing organizations seek to maximize returns under the rules. Regulatory frameworks created mainly with input from the major players (because they are "experienced") are more likely to align with how those players want to do business than with the concerns that really need to be addressed.
One major concern I see with "AI" is the potential for harmful behavior that gets excused because "the AI did it". Fortunately, there have been some legal rulings where that defense didn't hold water. A policy making it clear that organizations can't escape claims of harm just because a computational system judged to be "AI" was involved would establish that the organization is responsible for what it does, whether through its people or its systems.
Another major concern I see with "AI" is the dramatically unequal matchup it creates when human effort is pitted against human-like but computationally driven effort in situations where expectations assume human effort on both sides. An "AI" LLM, for example, can spout vast quantities of human-like output (some percentage of which is bullshit) faster than any real human can understand and respond to in real time. Behavioral norms built around real humans interacting with real humans will break down when real humans are instead interacting with computational systems, unless it is made clear that those norms cannot be upheld in those circumstances.
I'm sure that a group of people could identify more potential harms than just these two. I've cited them here as examples and not an enumeration of all concerns.
Anyone who is serious about developing a policy framework, or even individual policies, needs to do a substantial amount of original thinking from first principles to identify the "whats" of the actual harms. Telling an organization clearly that "if your AI kills someone (or produces outcomes of lesser but still significant harm), you will be held responsible" is much better than telling that organization "you must reduce risk by using red teams to evaluate systems before putting them into production".