Yup, same as the feedback loops in "cold readings"
Charlie Stross (@cstross@wandering.shop) wrote, on Mastodon:
The LLMentalist effect: Large Language Models replicate the mechanisms used by (fake) psychics to gull their victims: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fsoftwarecrisis.dev%2Flet...
The title of the paper is "The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con"
Interests, like "I want a trailer hitch for my Subaru".
All that other stuff? Wanted for someone else.
It's just piggybacking the blame onto advertisers, because people don't like them anyway.
Wow. I'm amazed anyone who's spent any time looking into the lives of people in Europe and Asia could believe this. Oh wait, you've never done that, have you?
Please keep sharing anything you observe as it happens.
The last thing any of us should want is for OpenAI to take over from Google.
I really don't understand this decision that Google should be broken up as though its 'monopoly' in search isn't entirely based on skill and talent. But if we *are* going to force companies to break up into components, can we make sure new monoliths aren't just created as a result?
This, entirely this.
I used to work in advertising, and I saw Google as the personification of "moral hazard" (which see). Other things? Way nicer.
If you scan a thousand British faces and compare them to a thousand criminals, you will do 1,000,000 comparisons (that's the birthday-paradox part).
If your error rate is 0.8%, you'll get roughly 8,000 false positives and negatives.
That's bad enough if they are all false positives: people get arrested, then released.
It's way worse if they are all false negatives: 8,000 criminals get ignored by the police dragnet.
That was Britain: false positives are life-threatening in countries where the police carry guns.
0.8% is a good error rate. A 34% error rate is typical when matching Black women. See
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.aclu-mn.org%2Fen%2Fnews%2Fbiased-technology-automated-discrimination-facial-recognition
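
For what it's worth, here's the arithmetic above as a minimal Python sketch. It assumes every face-to-watchlist comparison is independent with a uniform per-comparison error rate (a simplification for illustration); the 0.8% and 34% figures come from the posts above.

# Rough expected-error arithmetic for a face-matching dragnet.
# Assumes independent comparisons with a uniform error rate.
def expected_errors(crowd_size, watchlist_size, error_rate):
    comparisons = crowd_size * watchlist_size   # 1,000 x 1,000 = 1,000,000
    return comparisons * error_rate

print(f"{expected_errors(1_000, 1_000, 0.008):,.0f}")  # ~8,000 errors at a 0.8% rate
print(f"{expected_errors(1_000, 1_000, 0.34):,.0f}")   # ~340,000 errors at a 34% rate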
At a certain company long, long ago, managers had a mainframe-based planning app that looked like a sort of spreadsheet.
The company did a study to see how much it improved the managers' teams' productivity...
Oops! Use of the tool was correlated with declining productivity.
One can search the brain with a microscope and not find the mind, and can search the stars with a telescope and not find God. -- J. Gustav White