The responsibility for the actions of the user is on the user, not on the tool.
Nobody said it was on the tool, but sometimes it is factually also on the provider of the tool. Pretending otherwise doesn't change the law. If the provider is negligent, they can share in the responsibility. This is how things other than LLMs work; why not LLMs too?
Guns have safeties, even though they can get in your way, for safety's sake. Equipment has lockouts. Most things come with warnings. Automobiles are starting to get automated guardrails like automatic braking, and eventually won't allow you to, e.g., steer into another vehicle, because it's feasible to prevent and there is a public safety interest. There's simply zero justification for the multi-billion-dollar corporations producing and selling access to these LLMs not to institute some guardrails of their own.
I can agree with this. As we learn a tool, we can learn how to make it better and safer. And we can also force the manufacturers to implement these measures by finding them negligent, and fining them, if they have not. Yes, this is how it works.
If you think about it, it was the computer that allowed that person to interact with the chatbot...
Really, if you do not understand something, that does not mean it is bad; it means you do not understand it.
Regarding this specific case: even a hammer can be used to kill a person, but that does not mean the person who designed or made the hammer is a bad guy. The decision to attack people was that person's own. Let us stop blaming tools and start holding the bad guys accountable. So simple.
Really, this question is absurd.
"So why don't you make like a tree, and get outta here." -- Biff in "Back to the Future"