> In fact, LLMs present no danger at all, it's only what an LLM can control that presents a danger.
IME, the presence of "in fact ..." or "the truth is ..." in rhetoric is a strong signal that the author wants to bolster their argument with generic symbols of authority.
that aside,
you are crazy if you think AIs pose no danger. it's like saying drunks trying to get home from the bar pose no threat; it's only the cars they pilot that do.
AIs are going to play the role of wardens of life-impacting functionality and decisions: health-care decisions, financial capabilities, insurance claim investigation, hiring, firing, driving, flying, emergency vehicle dispatch, etc.
so vulnerabilities in AIs are definitely dangerous.