Comment Re:Could we "pull the plug" on networked computers (Score 1) 68

Good point on the "benevolent dictator fantasy". :-) The EarthCent Ambassador Series by E.M. Foner delves into that big time with the benevolent "Stryx" AIs.

I guess most of these examples from this search fall into some variation of your last point on "scared fool with a gun" (where for "gun" substitute some social process that harms someone, with AI being part of a system):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fduckduckgo.com%2F%3Fq%3Dexam...

Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.techopedia.com%2Ftim...

Or something else I saw the other day:
"'I was misidentified as shoplifter by facial recognition tech'"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.co.uk%2Fnews%2Ftec...

Or: "10 Nightmare Things AI And Robots Have Done To Humans"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.buzzfeed.com%2Fmikes...

Sure, these are not quite the same as "AI-powered robots shooting everyone." The fact that "AI" of some sort is involved is almost incidental; similar harms have been done for decades by plain algorithms, computer-supported or not, such as redlining sections of cities to prevent issuing mortgages.

Of course there are examples of robots killing people with guns, but they are still unusual:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheconversation.com%2Fan...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.npr.org%2F2021%2F06%2F01...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2FFutur...
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/story/07/...

These automated machine guns have the potential to go wrong, but I have not yet heard that one has:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified"."

But a lot of people can still get hurt by AI acting as a dysfunctional part of a dysfunctional system (the first items).

Is there money to be made by fear mongering? Yes, I have to agree you are right on that.

Is *all* the worry about AI profit-driven fear mongering -- especially about concentration of wealth and power by what people using AI do to other people (like Marshall Brain wrote about in "Robotic Nation" etc)?

I think there are legitimate (and increasing) concerns similar to, and worse than, the ones, say, James P. Hogan wrote about. Hogan emphasized accidental issues of a system protecting itself -- generally not issues arising from malice, or from social bias implemented in part intentionally by humans. Although one ending of a "Giants" book (Entoverse, I think; it's been a long time) does involve AI in league with the heroes doing unexpected stuff by providing misleading synthetic information, to humorous effect.

Of course, our lives in the USA have been totally dependent for decades on 1970s-era Soviet "Dead Hand" technology that US intelligence agencies tried to sabotage with counterfeit chips -- so who knows how well it really works. So if you have a nice day today not involving mushroom clouds, you can (in part) thank a 1970s Soviet engineer for safeguarding your life. :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

It's common to think the US Military somehow defends the USA, and while there is some truth to that, it leaves out a bigger part of the picture: much of human survival depends on a multi-party global system working as expected to avoid accidents...

Two other USSR citizens we can thank for our current life in the USA: :-)

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"a senior Soviet Naval officer who prevented a Soviet submarine from launching a nuclear torpedo against ships of the United States Navy at a crucial moment in the Cuban Missile Crisis of October 1962. The course of events that would have followed such an action cannot be known, but speculations have been advanced, up to and including global thermonuclear war."

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"These missile attack warnings were suspected to be false alarms by Stanislav Petrov, an engineer of the Soviet Air Defence Forces on duty at the command center of the early-warning system. He decided to wait for corroborating evidence--of which none arrived--rather than immediately relaying the warning up the chain of command. This decision is seen as having prevented a retaliatory nuclear strike against the United States and its NATO allies, which would likely have resulted in a full-scale nuclear war. Investigation of the satellite warning system later determined that the system had indeed malfunctioned."

There is even a catchy pop tune related to the last item: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The English version retains the spirit of the original narrative, but many of the lyrics are translated poetically rather than being directly translated: red helium balloons are casually released by the civilian singer (narrator) with her unnamed friend into the sky and are mistakenly registered by a faulty early warning system as enemy contacts, resulting in panic and eventually nuclear war, with the end of the song near-identical to the end of the original German version."

If we replaced people like Stanislav Petrov and Vasily Arkhipov with AI, would we as a global society be better off?

Here is a professor (Alain Kornhauser) I worked with on AI, robots, and self-driving cars in the second half of the 1980s, commenting recently on how, based on Tesla data, self-driving cars are already safer than human-operated cars by a factor of 10X in many situations:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

But one difference is that there is a lot of training data based on car accidents and safe driving to make reliable (at least better than human) self-driving cars. We don't have much training data -- thankfully -- on avoiding accidental nuclear wars.

In general, AI is a complex unpredictable thing (especially now) and "simple" seems like a prerequisite for reliability (for all of military, social, and financial systems):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.infoq.com%2Fpresenta...
"Rich Hickey emphasizes simplicity's virtues over easiness', showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."

Given that we as a society are pursuing a path of increasing complexity and related risk (including of global war with nukes and bioweapons, but also other risks), that's one reason (among others) that I have advocated for at least part of our society adopting simpler better-understood locally-focused resilient infrastructures (to little success, sigh).
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fprincet...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fsunrise...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...

Example of related fears from my reading too much sci-fi: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
"The race is on to make the human world a better (and more resilient) place before one of these overwhelms us:
Autonomous military robots out of control
Nanotechnology virus / gray slime
Ethnically targeted virus
Sterility virus
Computer virus
Asteroid impact
Y2K
Other unforeseen computer failure mode
Global warming / climate change / flooding
Nuclear / biological war
Unexpected economic collapse from Chaos effects
Terrorism w/ unforeseen wide effects
Out of control bureaucracy (1984)
Religious / philosophical warfare
Economic imbalance leading to world war
Arms race leading to world war
Zero-point energy tap out of control
Time-space information system spreading failure effect (Chalker's Zinder Nullifier)
Unforeseen consequences of research (energy, weapons, informational, biological)"

So, AI out of control is just one of those concerns...

So, can I point to multiple examples of AI taking over planets to the harm of their biological inhabitants (outside of sci-fi)? I have to admit the answer is no. But then I can't point to realized examples of accidental global nuclear war either (thankfully, so far).

Comment That doesn't matter. (Score 5, Insightful) 69

We all know that working from the office was once the norm. That fact by itself tells us nothing about how much the workers liked it. Nor is it relevant to the modern day which includes excellent technological solutions for remote work and widespread evidence that it does not harm productivity.

So, the suffering that some people face today in dealing with a work-from-office mandate is not in any way addressed by saying "well, people used to have to work from the office regardless." We don't live in the past, and the tribulations of the past aren't relevant to the present.

Of course, I don't expect Amazon to show any compassion. Why would they? They succeed, in part, by exploiting workers, so they don't care if there is some suffering involved. They believe (right or wrong doesn't matter) that their bottom line benefits from this policy, so they will push it. Workers who don't like it can push back if they choose, risks and all.

Personally, I approve of worker pushback and wish we had more of it, because power is not in balance and being a worker sucks in general.

Comment Re:Fund education, too. (Score 0) 104

Russia easily had 10:1 kill ratios over your Nazi pals, and that's when the AFU was a real military, you fascist fuck. Now it's easily 20:1, as Ukraine is reduced to kidnapping grandpas off the street to force them to the front with little ammo or training. You lost this war a long time ago, sucker of McCarthy's rotting cock. All you're doing is costing Ukraine more lives and land.

Comment Captain obvious called... (Score 0) 55

...and asked what the media is smoking.

OBVIOUSLY his machine costs one fifth of the SELLING price of a commercial unit... because the commercial unit doesn't cost its selling price to make either.

Why would anyone care? MIT especially could easily do the same, probably better, and the DoD could as well. But the point is the DoD is not in the manufacturing business, is it? It doesn't want to be, and if they had this dude manufacture his drone for them, well, guess what: he'd have to raise the price to five times as well, because then he'd have to warranty the damn thing.

Comment Re:ChatGPT is not a chess engine (Score 1) 117

A lot of the 'headline' announcements, pro and con, are basically useless; but this sort of thing does seem like a useful cautionary tale in the current environment, where we've got hype-driven ramming of largely unspecialized LLMs as 'AI features' into basically everything with a sales team, along with a steady drumbeat of reports of things like legal filings with hallucinated references. That last failure keeps happening despite the fix seeming pretty trivial: a post-processing layer that just slams your references into a conventional legal search engine to see if they return a result is an easy step to automate, or to make the intern do.

Having a computer system that can do an at least mediocre job, a decent percentage of the time, when you throw whatever unhelpfully structured inputs at it is something of an interesting departure from what most classically designed systems can do. But for an actually useful implementation, one of the vital elements is ensuring that the right tool is actually being used for the job (which, at least in principle, you can often do, since you have full control of which system will process the inputs and, if you are building the system for a specific purpose, often at least some control over the inputs).

Even if LLMs were good at chess they'd be stupidly expensive compared to ordinary chess engines. I'm sure that someone is interested in making LLMs good at chess to vindicate some 'AGI' benchmark; but, from an actual system implementation perspective, the preferred behavior would be "Oh, you're trying to play chess; would you like me to set 'uci_elo' or just have Stockfish kick your ass?" followed by a handoff to the tool that's actually good at the job.
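The handoff described above is basically a routing decision made before any model is invoked. A minimal sketch of the idea, in Python: the regexes and the `route` function are hypothetical illustrations (a real system would do proper intent classification, and the engine handoff would use something like python-chess's UCI support with `UCI_LimitStrength`/`UCI_Elo`, stubbed out here as a comment):

```python
import re

# Crude, illustrative patterns for "this looks like chess":
# a FEN position string, or standard algebraic notation moves.
FEN_RE = re.compile(r"\b([rnbqkpRNBQKP1-8]+/){7}[rnbqkpRNBQKP1-8]+ [wb] ")
SAN_RE = re.compile(r"\b([NBRQK]?[a-h]?[1-8]?x?[a-h][1-8](=[NBRQ])?[+#]?|O-O(-O)?)\b")

def route(query: str) -> str:
    """Return 'engine' if the query looks like chess, else 'llm'.

    In a real implementation, the 'engine' branch would hand off to a
    dedicated tool, e.g. (with python-chess, not run here):
      engine = chess.engine.SimpleEngine.popen_uci("stockfish")
      engine.configure({"UCI_LimitStrength": True, "UCI_Elo": 1500})
    """
    if FEN_RE.search(query) or SAN_RE.search(query):
        return "engine"
    return "llm"
```

The point of the sketch is only that the router is cheap and deterministic, so the expensive general-purpose model never sees the queries a specialized tool should own.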

Comment Why is dueling CEO quotes a story? (Score 5, Insightful) 32

Why do we even consider it a story when there are a couple of CEO quotes to mash together?

Even leaving aside the nontrivial odds that what a CEO says is flat-out wrong, and the near certainty that it is less well informed than what someone a layer or two closer to the technology or the product (rather than to vague, abstract 'management') would say: unless a C-level is being cleverly ambushed away from their PR handlers with a few drinks in them, or actively going off script in the throes of some personal upset, why would you expect their pronouncements to be anything but their company's perceived interests restated as personal insights?

Surprise, surprise, the AI-company guy is here to tell us that the very large, high barrier to entry, models are like spooky scary and revolutionary real soon now; even if you wouldn't know it from the quality of the product they can actually offer at the present time; while the AI-hardware guy is here to tell you that AI is friendly and doesn't bite but everyone needs even more than they thought they did, ideally deployed yesterday; because the AI-company people need to hype up the future value of throwing more cash and more patience at money-losing LLMs; and the AI-hardware people need to juice the total addressable market by any means necessary.
