Yup, same as the feedback loops in "cold readings"
Charlie Stross (@cstross@wandering.shop) wrote, on Mastodon:
The LLMentalist effect: Large Language Models replicate the mechanisms used by (fake) psychics to gull their victims: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fsoftwarecrisis.dev%2Flet...
The title of the essay is "The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con"
Interests are things like "I want a trailer hitch for my Subaru".
All that other stuff? That's wanted for someone else.
It's just piggybacking the blame onto advertisers, because people don't like them anyway.
I used to work in advertising, and I saw Google as the personification of "moral hazard" (which see). Other things? Way nicer.
If you scan a thousand British faces and compare them to a thousand criminals, you will do 1,000,000 comparisons (that's the birthday-paradox part).
If your error rate is 0.8%, you'll get roughly 8,000 errors, false positives plus false negatives (the arithmetic is sketched below).
That's bad enough if they are all false positives: people get arrested, then released.
It's way worse if they are all false negatives: 8,000 criminals get ignored by the police dragnet.
That was Britain: false positives are life-threatening in countries where the police carry guns.
0.8% is a good error rate: error rates around 34% are typical when matching Black women. See
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.aclu-mn.org%2Fen%2Fnews%2Fbiased-technology-automated-discrimination-facial-recognition%23%3A~%3Atext%3DStudies%2520show%2520that%2520facial%2520recognition%2520technology%2520is%2520biased.%2Cpublished%2520by%2520MIT%2520Media%2520Lab.
At a certain company long long ago, managers had a mainframe-based planning app that looked like a sort of spreadsheet.
The company did a study to see how much it improved the managers' teams' productivity...
Oops! Use of the tool was correlated with declining productivity.
Steven Rostedt wrote:
"I played a little with [Rust] in user space, and I just absolutely hate the cargo concept... I hate having to pull down other code that I do not trust. At least with shared libraries, I can trust a third party to have done the build and all that..."
The various crate-like things are a fad. The arguably correct way of using shared libraries was reinvented independently by the GNU libc team and by Solaris, from a first use in Multics (you remember: Unix's papa, and thus Linux's grandpa).
Give it a few years: the hype bubble for importing static libraries will burst, and shared libraries with updaters and downdaters will be re-re-invented.
From the Canadian Government page cited below:
Constructive dismissal is sometimes called "disguised dismissal" or "quitting with cause". This is because it often occurs in situations where the employer offers the employee the alternative of:
- leaving, or
- submitting to a unilateral and substantial alteration of a fundamental term or condition of their employment.
A person given a "quit or return to the office" ultimatum has been fired, and can sue the pants off the employer. The lawyer involved may well offer a good price on a suit to everyone the employer fired, thus increasing the risk to the employer.
See https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.canada.ca%2Fen%2Femplo..., or google for "lawyer constructive dismissal" if you're not in Canada
The company is testing humans for their ability to do something they are inherently bad at.
Filtering programs, such as the one at spamcop.net, do it well:
- I haven't had a false positive for about three years.
- I get a false negative about once a month.
Whenever I get an email at a customer's, I run it through the spamcop filter; that reliably identifies the phishing-test emails.
I prefer to report those on the equivalent of the IT Slack channel, so others aren't caught out by them (;-))
More than 10 years ago, my company tried to do facial recognition in an airport in Europe, for their security service. Alas, they didn't know about the "birthday paradox", and tried to match about 1,000 criminals against several thousand passengers. They shut it down when the system identified someone's grandmother as a male member of the Baader-Meinhof gang.
The (birthday) paradox comes from trying to match each passenger against 1,000 criminals, not just one. Even with only a 1% error rate, that's roughly 10 false positives and negatives per passenger (sketched below). And we don't have anything like a 1% error rate.
We need a 1/infinity error rate (:-)). Otherwise innocent grandmothers will be pulled aside, while actual criminals breeze on through.
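A small Python sketch of that per-passenger arithmetic, assuming (generously) that each of the 1,000 comparisons fails independently at 1%:

    # Each passenger is compared against every entry on the watch list.
    watch_list = 1_000
    error_rate = 0.01   # the optimistic 1% figure above

    errors_per_passenger = watch_list * error_rate        # ~10 per passenger
    p_false_match = 1 - (1 - error_rate) ** watch_list    # at least one error
    print(f"expected errors per passenger: {errors_per_passenger:.0f}")
    print(f"P(at least one false match):   {p_false_match:.4%}")
    # ~10 expected errors, and essentially every passenger trips a false match.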
On a paper submitted by a physicist colleague: "This isn't right. This isn't even wrong." -- Wolfgang Pauli