Playing the role of a doctor is not impersonating a doctor; sad that this needs to be explained to you.
It's not me it needs to be explained to, but you. The reason you know an actor is not a real doctor, lawyer, etc. is the context in which they make the claim. They may hand out advice or suggest a treatment in the play/film, but you know they are just pretending because of the context... so, and here is the part you seem to have trouble with, if we clearly label AI chatbots as fictional, then we make the context the same for them as for an actor in a film.
If actors give advice on camera as though they are doctors, interestingly, there are disclaimers?
No, there are not - perhaps in the US, but not in the rest of the world, because we understand the difference between reality and films/TV... and that does seem to be your problem here, so perhaps disclaimers really are needed in the US. Regardless, they would be easy to add: just put a popup screen before you access the chatbot indicating that anything the chatbot says may be a complete fabrication and that nothing should be trusted as being correct, and there you go. I really do not understand why you are having such a hard time grasping how this would work.
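For the sake of argument, here is a minimal sketch of that gate in Python - a console stand-in for the popup, with the wording and names invented here, not taken from any real product:

    # Toy stand-in for a "disclaimer before you can chat" popup.
    # Everything here (wording, names) is illustrative only.
    DISCLAIMER = (
        "Anything this chatbot says may be a complete fabrication.\n"
        "Nothing it says should be trusted as being correct."
    )

    def user_acknowledged() -> bool:
        """Show the disclaimer and require an explicit acknowledgement."""
        print(DISCLAIMER)
        reply = input("Type 'I understand' to continue: ")
        return reply.strip().lower() == "i understand"

    if user_acknowledged():
        print("Starting chat session...")  # hand off to the actual chatbot here
    else:
        print("No acknowledgement, no chatbot.")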
There, that was not too hard, was it?
No, it was not - thank you for making the _exact_ point that I made, i.e. that context matters. If we clearly label AI chatbots as fictional, like a film or play, then people's expectations should be the same as for a film or play: if the chatbot says it is a doctor or a lawyer, they know it is not true, just as they would with an actor in a film.
Some people so want to believe that a useful information retrieval system is a superintelligence.
The rest of us aren't surprised that an interesting search engine isn't good at chess.
Liberals / leftists really aren't the ones who want open borders; at least, even if their interests do coincide with other interests, their opinion really does not matter much:
It has long been, and continues to be, big corporate interests, and the billionaire / globalist class who actually own those corporations, who want and benefit from open borders more than anyone else.
Nobody remembers that in the 80s and 90s, and even into the early 2000s, it was the Democrats beating the anti-immigration drums, as it was the labor unions who correctly surmised that illegal immigration artificially suppresses wages, and the Democrats often go where the labor unions lead them. During those times the Democrats blamed the Koch brothers and the rest of their sort, who had influence in the Republican Party, for keeping the borders open.
The reality is they both were responsible, just for different reasons.
Now that the demographic shift caused by those policies is hitting its stride (second- and third-generation immigrants from those times are becoming voters), and they align overwhelmingly with the Democratic party, that party now wants unlimited immigration. It just so happens they are now on the side of the oligarchs on this one issue: the oligarchs want to suppress wages across the board, and bringing in more laborers does just that.
And people are SHOCKED that the labor unions and laborers in general (even Latinos whose families came in the 60s and earlier) are moving away from the Democratic party and cozying up to the Republican party. I am not. It is entirely predictable.
Does not matter. If the machine claims it is a licensed therapist, this either has to stop or the machine has to be turned off.
Yes, it does matter. If you watch a film and an actor in it says they are a medical doctor, does that mean the actor deserves a lengthy prison sentence for claiming to be a doctor when they are not? Your approach would pretty much make the acting profession illegal. The difference between an actor and a scam artist is purely context: in a film or play we know that not everything we see is true, so there is no intent to defraud, only to entertain.
Labelling AI chatbots in a way that makes it clear that their output is not always going to be true is all that is needed. It is then up to the user to decide whether that means they are still useful or not.
You regulate that by punishing the chatbot owners if they do not prevent it.
You can't prevent it: current "AI" technology does not understand what it is saying, so not only can it lie/hallucinate, it has no idea that it has even lied. The correct response is to label it correctly, i.e. make sure that all users know that AI output cannot be trusted as being correct. This would not only solve this therapist issue but would also solve all the other problems related to people trusting AI output, like lawyers submitting AI-written court documents with fabricated references.
Essentially, treat AI output like a work of fiction. It may sound plausible and it may even contain real facts but, just like some "fact" you read in a fiction book, you should not rely on anything it says being true.
I'm not sure you understand what jailbreaking means in the context of AIs. It means prompts: e.g. asking it things and trying to get it to give inappropriate responses. Trying doesn't require any special skills, just an ability to communicate. Yes, I very much DO think most parents will try to see if they can get the doll to say inappropriate things before giving it to their children, to make sure it's not going to be harmful.
(Now, if Mattel has done their job right, *succeeding* will be difficult)
I'm familiar with some organizations that have been feeding their Slack data into a RAG for employee queries.
They're going to be quite pissed if this has been shut down by Slack.
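For anyone curious what that pattern looks like in outline, here is a toy sketch of the retrieval half - all names are made up here, and the embedding model is abstracted into a parameter rather than tied to any real service:

    # Minimal retrieval step of a RAG over exported Slack messages:
    # embed every message once, then answer a query by pulling the
    # k nearest messages and handing them to an LLM as context.
    from typing import Callable

    Embedder = Callable[[str], list[float]]

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def build_index(messages: list[str], embed: Embedder):
        # Precompute one embedding per message.
        return [(msg, embed(msg)) for msg in messages]

    def retrieve(query: str, index, embed: Embedder, k: int = 3) -> list[str]:
        qv = embed(query)
        ranked = sorted(index, key=lambda pair: cosine(qv, pair[1]), reverse=True)
        return [msg for msg, _ in ranked[:k]]  # these go into the LLM prompt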
So the system responded that it was an already claimed serial number and not an invalid serial number? Who the fuck would do that?
Let's say someone put in a claim.
Well, what is to stop them from trying consecutive serial numbers to see if they can get even more?
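That is a classic enumeration oracle. A sketch of the usual fix, with a hypothetical endpoint: return one generic error so "invalid" and "already claimed" are indistinguishable from outside, and rate-limit on top:

    # Hypothetical claim endpoint. The key point: unknown serials and
    # already-claimed serials get the *same* response, so probing
    # consecutive numbers tells the attacker nothing.
    VALID_SERIALS = {"SN-1001", "SN-1002", "SN-1003"}   # issued numbers
    CLAIMED: set[str] = {"SN-1001"}                     # already redeemed

    def claim(serial: str) -> str:
        if serial in VALID_SERIALS and serial not in CLAIMED:
            CLAIMED.add(serial)
            return "claim accepted"
        # Deliberately identical message for both failure modes.
        return "serial number not eligible"

(Issuing non-consecutive, high-entropy serial numbers in the first place helps even more.)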
I implemented portsentry feeding fail2ban on edge servers to deal with the unrelenting scans.
It helps, somewhat.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgithub.com%2Fportsentry%2F...
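For anyone wiring up the same thing, a minimal sketch of the glue, assuming portsentry logs its usual "attackalert" lines via syslog - the filter name and regex here are my assumptions, so adjust them to your actual log format:

    # /etc/fail2ban/filter.d/portsentry.conf  (filter name chosen here)
    [Definition]
    failregex = attackalert: .* from host: \S*/<HOST> to (?:TCP|UDP) port: \d+

    # /etc/fail2ban/jail.local
    [portsentry]
    enabled  = true
    filter   = portsentry
    logpath  = /var/log/syslog
    maxretry = 1
    bantime  = 86400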
Honestly, even if they can't jailbreak it into being age-inappropriate etc., it's still a ripe setup for absurdist humour.
Kid: "Here we are, Barbie, the rural outskirts of Ulaanbaatar! How do you like your yurt?"
Barbie: "It's lovely! Let me just tidy up these furs."
Kid: "Knock, knock! Why it's 13th century philosopher, Henry of Ghent, author of Quodlibeta Theologica!"
Barbie: "Why hello Henry of Ghent, come in! Would you like to discuss esse communissimum over a warm glass of yak's milk?"
Kid, in Henry's voice: "That sounds lovely, but could you first help me by writing a python program to calculate the Navier-Stokes equations for a zero-turbulence boundary condition?"
Barbie: "Sure Henry! #!/usr/bin/env python\nimport..."
"All my life I wanted to be someone; I guess I should have been more specific." -- Jane Wagner