I think the problem the two of you are having is that you do not understand how these "AI" algorithms work and are seeing them as "intelligent" when they are really not: they are complex text-prediction engines. All they do is calculate the most likely "next word" in a sentence based on their training data. They have no idea or understanding of what they are saying; hence, if they say they are a doctor, therapist, lawyer, etc., it is merely because, given their training data, that is the "best" set of words with which to respond to the query.
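To make the "next word" point concrete, here is a toy sketch in Python: a bigram counter that, given the previous word, emits whichever word most often followed it in its "training data". This is a deliberately crude stand-in (real chatbots use neural networks over far more context, and the corpus here is made up for illustration), but the principle of picking a statistically likely continuation is the same.

```python
from collections import Counter, defaultdict

# Toy "training data": the model knows nothing except these word sequences.
corpus = "i am a doctor . i am a lawyer . i am not a robot .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the statistically most common continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by repeatedly appending the most likely next word.
sentence = ["i"]
for _ in range(4):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # prints "i am a doctor ." with no understanding involved
```

Note that this model happily "claims" to be a doctor, not because it believes anything, but because those words are the statistically best continuation of "i am" in its corpus. That is exactly the failure mode at issue here, just at a vastly smaller scale.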
Now, if everyone understood that, there would be no issue with people using them, since any time they claimed to be some sort of expert we would all know they were completely bullshitting us, like an actor in a film whom we all know is not a real doctor, lawyer, or whatever the script says they are. Clearly some people, such as yourselves, do not understand that about "AI" chatbots, so a potential solution is to clearly label the chatbot as such, so we are all on the same page when it comes to understanding its output.
If we all know the output cannot be treated as true, then harm is prevented. Nobody should follow the advice of a chatbot claiming to be a doctor, just as we do not follow the advice of an actor in a film claiming to be a doctor. So it is not that I want to protect any "superintelligence"; it is that I know it is no such thing, indeed that it lacks intelligence, although it can pretend very convincingly. Nevertheless, it can be a useful tool for copy-editing documents, summarizing content, etc., as long as you treat the output with care and stick to the things the "AI" is good at, which is basically text manipulation.