Comment Echoing your complex systems & adaptations poi (Score 1) 45

Good points on complex systems and adaptations. Thanks! I hope you are right overall on that in practice -- especially about AI-co-designed biowarfare agents not being a huge issue, despite Eric Schmidt's point on the "Offense Dominant" nature of such agents.

And advanced computing can otherwise indeed do a lot of good in health care. For example:
"Taking the bite out of Lyme disease: New studies offer insight into disease's treatment, lingering symptoms"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fnews.northwestern.edu%2F...
"Northwestern scientists identified that piperacillin, an antibiotic in the same class as penicillin, effectively cured mice of Lyme disease at 100-times less than the effective dose of doxycycline. At such a low dose, piperacillin also had the added benefit of "having virtually no impact on resident gut microbes," according to the study.
        The team screened nearly 500 medicines in a drug library, using a molecular framework to understand potential interactions between antibiotics and the Borrelia bacteria. Once the group had a short list of potentials, they performed additional physiological, cellular and molecular tests to identify compounds that did not impact other bacteria.
        The authors argue that piperacillin, which has already been FDA-approved as a safe treatment for pneumonia, could also be a candidate for preemptive interventions for those potentially exposed to Lyme (with a known deer tick bite).
        They found that piperacillin exclusively interfered with the unusual cell wall synthesis pattern common to Lyme bacteria, preventing the bacteria from growing or dividing and ultimately leading to its death."

Tangentially, on dealing with antibiotic resistance: phages are an area people have explored in the past and are re-exploring now in response to current issues: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

And rising CO2 levels and related climate change might be limited compared to worst-case predictions if we continue to switch over to solar power and so on.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheroundup.org%2Fsolar-p...
"The world's current solar energy capacity is 850.2 GW (gigawatts). This is the maximum amount of energy that all global solar installations combined can produce at any one time. This figure has increased every year for the last decade and is more than ten times higher than it was in 2011, according to the latest data from IRENA and Ember."

If we get just one more decade or so of 10X growth, then much of our power will come from renewables instead of fossil fuels. And part of that growth is no doubt a response to earlier worries like peak oil and global climate change -- to echo your point on ongoing responsive adaptations in complex systems.
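
As rough back-of-the-envelope arithmetic (assuming, optimistically, that the roughly 10X-per-decade growth of the last decade simply continues -- my illustrative assumption, not a forecast):

```python
# Rough sketch: project global solar capacity assuming the historical
# ~10X-per-decade growth continues (an optimistic assumption, not a forecast).
base_year = 2021
base_capacity_gw = 850.2          # IRENA/Ember figure cited above
growth_per_decade = 10.0

def projected_capacity_gw(year):
    """Capacity under sustained exponential growth of 10X per decade."""
    decades = (year - base_year) / 10.0
    return base_capacity_gw * growth_per_decade ** decades

for year in (2021, 2026, 2031):
    print(year, round(projected_capacity_gw(year)), "GW")
```

Under that toy assumption, capacity would reach ten times the 2021 figure around 2031; the point is just how fast sustained exponential growth compounds, not any precise prediction.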

So, a bunch of hopeful news is out there too when we look for it, reflecting your point on responsiveness. Thanks for sharing your optimism about humanity's resilience.

Comment Re:It almost writes itself. (Score 1) 32

This is obviously much harder to do under controlled experimental conditions; probably more of a cohort study; but I'd be curious if the result is more of a 'you learn significantly less' or 'your existing skill degrades'.

Either way it will at least be a problem. The current reliability of bots basically requires knowledgeable and experienced people to supervise them: to know when to just give their output a look and pass it along, when to prod them on errors to try to get them fixed, and when to just do it themselves. And you only get knowledgeable and experienced people through learning and experience, which are going to be done few favors by enhanced cheating tools and by automation of the entry-level jobs where people historically gained experience under the supervision of senior people. But it will be a shade uglier if it turns out that using senior people to herd bots actually degrades them over time, rather than just causing them to not learn nearly as much as they otherwise might.

It's not like every task is a learning experience; some are already pretty well inside your skillset and that's fine at least in moderation; but if the impact of bots is to make something like writing or programming an exercise that does to your brain what heavy construction work does to your knees and spine the future of the 40+ 'knowledge worker' looks brighter than ever!

Comment Re:I'm sure... (Score 1) 94

That's why I was proposing it as one of the embarrassing failure modes. If someone at the State Department gets the wrong idea about the sincerity and consistency of the policy, there will hardly be anybody for Turning Point USA to invite across the Atlantic to tell us about European race suicide without getting flagged. Awkward.

Obviously a solvable problem if you've got someone who knows how to carry out the quiet part without saying it and can do some cross-referencing; but even if your social-media text-munging/sentiment-analysis bot is actually fit for purpose, and that's an if, it's going to be a lot of fiddly corrections, both for Jews who aren't frothing hard-right lunatics and for sufficiently pale non-Jews who are.

Comment Re:I'm sure... (Score 1) 94

Depending on how many competent people they've got left post-purges, and how ill-explained the criteria are, I suspect that there will be some room for embarrassing mistakes. Ethnicy-looking Muslims are a nope by design, of course; but not being suitably careful about Jewish Yesh Atid voters risks making it obvious that it's about being so far up Netanyahu's ass you are asking Mike Huckabee to make room, not about Jews particularly; while being too sincere about looking for antisemitism could really complicate our beautiful friendship with Reform UK and the AfD, some members of which may have made enthusiastic and somewhat intemperate observations about international Jewry -- but in the good, honest, Anglo-Saxon and/or Teutonic fashion that certainly doesn't suggest backing the wrong semites.

Comment Was SARS-CoV-2 an example of "harms"? (Score 1) 45

What do you think of the idea that SARS-CoV-2 was perhaps engineered in the USA as a DARPA-funded self-spreading vaccine intended for bats to prevent zoonotic outbreaks in China but then accidentally leaked out when being tested by a partner in Wuhan who had a colony of the bats the vaccine was intended for? More details on the possible "who what when how and why" of all that:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdailysceptic.org%2F2024%2F...

If true, it provides an example of how dangerous this sort of gain-of-function bio-engineering of viruses can be (even if perhaps well-intended by the people involved). Also on that theme from 2014:
"Threatened pandemics and laboratory escapes: Self-fulfilling prophecies"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fthebulletin.org%2F2014%2F0...
"Looking at the problem pragmatically, the question is not if such escapes will result in a major civilian outbreak, but rather what the pathogen will be and how such an escape may be contained, if indeed it can be contained at all."

My worry with SARS-CoV-2 from the start (working in the biotech industry at the time, including by chance helping track the evolution of SARS-CoV-2) was that so much effort would go into researching the virus and understanding why it was so transmissible and dangerous (mainly to older people) that such knowledge could be misused by humans to make worse viruses. Sadly, AI now accelerates that risk (as in the video I linked to).

Here is Eric Schmidt recently saying essentially the same thing about the risk of AI being used to create pathogens for nefarious purposes, and how he and others are very worried about it:
"Dr. Eric Schmidt: Special Competitive Studies Project"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Comment AI and job replacement (Score 1) 75

I don't know if "enthusiastic about the tech" completely describes my feelings about a short-term employment threat and a longer-term existential threat, but, yeah, neat stuff. Kind of maybe like respecting the cleverness and potential danger of a nuclear bomb or a black widow spider?

A related story I submitted: "The Workers Who Lost Their Jobs To AI"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fnews.slashdot.org%2Fstor...

Sure, maybe there is some braggadocio in Hinton somewhere, which we all have. But he just does not come across much that way to me. He seems genuinely somewhat penitent in how he talks. Hinton at age 77 sounds to me more like someone who had an enjoyable career in academia building neat things no one thought he could build, who just wants to retire from the limelight (as he says in the video) but feels compelled to spread a message of concern. I could be wrong about his motivation, perhaps.

On whether AI exists, I've seen about four decades of things that were once "AI" no longer being considered AI once computers could do the thing (as others before me have commented). That has been everything from answering text questions about moon rocks, to playing chess, to composing music, to reading printed characters in books, to recognizing the shape of 3D objects, to driving a car, to now generating videos, and more.

An example of the last: a video which soon will probably no longer be thought of as involving "AI":
""it's over, we're cooked!" -- says girl that literally does not exist..."
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2Fsingu...

On robots vs. chimps, robots have come a long way since, say, I saw one of the first hopping robots in Marc Raibert's lab at CMU in 1986 (and then later saw it again when visiting the MIT Museum with my kid). An example of what Marc Raibert and associates (now Boston Dynamics) have since achieved after forty years of development:
"Boston Dynamics New Atlas Robot Feels TOO Real and It's Terrifying!"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Some examples from this search:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fduckduckgo.com%2F%3Fq%3Drobo...

"I Witnessed the MOST ADVANCED Robotic Hand at CES 2025"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

"Newest Robotic Hand is Sensitive as Fingertips"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

"ORCA: An Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic Hand - IROS 2025 Paper Video"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

What's going on in China:
"China's First Robot With Human Brain SHOCKED The World at FAIR Plus Exhibition"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Lots more stuff out there. So if by "long time" on achieving fine motor control you mean by last year, well, maybe. :-) I agree with you though that the self-replicating part is still quite a ways off. Inspired by James P. Hogan's "The Two Faces of Tomorrow" (1978) and NASA's Advanced Automation for Space Missions (1980), I tried (grandiosely) to help with that self-replicating part -- to little success so far:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fprincet...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fsunrise...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...

On who will buy stuff, it is perhaps a capitalist version of the "tragedy of the commons". Every company thinks it will get the first-mover advantage by firing most of its workers and replacing them with AI and robots. They don't think past the next quarter, or at best the next year. Who will pay for products, or who will pay unemployed workers to survive for decades, is someone else's problem.

See the 1950s sci-fi story "The Midas Plague" for some related humor on dealing with the resulting economic imbalance. :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
""The Midas Plague" (originally published in Galaxy in 1954). In a world of cheap energy, robots are overproducing the commodities enjoyed by humankind. The lower-class "poor" must spend their lives in frantic consumption, trying to keep up with the robots' extravagant production, while the upper-class "rich" can live lives of simplicity. Property crime is nonexistent, and the government Ration Board enforces the use of ration stamps to ensure that everyone consumes their quotas. ..."

Related on how in the past the Commons were surprisingly well-managed anyway:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"In Governing the Commons, Ostrom summarized eight design principles that were present in the sustainable common pool resource institutions she studied ... Ostrom and her many co-researchers have developed a comprehensive "Social-Ecological Systems (SES) framework", within which much of the still-evolving theory of common-pool resources and collective self-governance is now located."

Marc Andreessen might disagree with some of those principles and have his own?
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The "Techno-Optimist Manifesto" is a 2023 self-published essay by venture capitalist Marc Andreessen. The essay argues that many significant problems of humanity have been solved with the development of technology, particularly technology without any constraints, and that we should do everything possible to accelerate technology development and advancement. Technology, according to Andreessen, is what drives wealth and happiness.[1] The essay is considered a manifesto for effective accelerationism."

I actually like most of what Marc has to say -- except he probably fundamentally misses "The Case Against Competition" and why AI produced through capitalist competition will likely doom us all:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.alfiekohn.org%2Farti...
"Children succeed in spite of competition, not because of it. Most of us were raised to believe that we do our best work when weâ(TM)re in a race -- that without competition we would all become fat, lazy, and mediocre. Itâ(TM)s a belief that our society takes on faith. Itâ(TM)s also false."

I agree AI can be overhyped. But then I read somewhere that so were the early industrial power looms -- which served more as a threat to keep wages down and working conditions poor, since factory owners could always bring in the (expensive-to-install) looms if workers pushed back.

Good luck and have fun with your project! Pessimistically, perhaps it has already "succeeded" if just knowing about it has made company workers nervous enough that they are afraid to ask for raises or more benefits. Optimistically though, it may instead mean the company will be more successful and can afford to pay more to retain skilled workers who work well with AI. I hope for you it is more the latter than the former.

Something I posted a while back on how AI and robotics can provide an illusion of increasing employment by helping one company grow while its competitors shrink: https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments....

Or from 2021:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fj...
"According to a new academic research study, automation technology has been the primary driver in U.S. income inequality over the past 40 years. The report, published by the National Bureau of Economic Research, claims that 50% to 70% of changes in U.S. wages, since 1980, can be attributed to wage declines among blue-collar workers who were replaced or degraded by automation."

But in an (unregulated, mostly-non-unionized) capitalist system, what choice do most owners or employees (e.g. you) really have but to embrace AI and robotics -- to the extent it is not hype -- and race ahead?

Well, owners and employees could also expand their participation in subsistence, gift, and planned transactions as fallbacks -- but that is a big conceptual leap and still does not change the exchange-economy imperative. A video and an essay I made on that:
"Five Interwoven Economies: Subsistence, Gift, Exchange, Planned, and Theft"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Beyond a Jobless Recovery: A heterodox perspective on 21st century economics"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-...

Anyway, I'm probably just starting to repeat myself here.

Comment Re:Could we "pull the plug" on networked computers (Score 1) 75

Thanks for the conversation. On your point "And why want it in the first place?" -- that is an insightful point, and I wish more people would think about it. Frankly, I don't think we need AI of any great sort right now, even if it is hard to argue with the value of some current AI systems like machine vision for parts inspection. Most of the "benefits" AI advocates trot out (e.g. solving world hunger, or global climate change, or cancer, or whatever) are generally issues of politics and economics. There is enough food to go around, but poor people can't pay for its broad distribution; renewable energy can power all our needs and is cheaper if fossil fuels have to pay true costs up front, including defense costs; most cancer is from diet, lifestyle, and toxins, which are problems because of mis-incentives like subsidizing ultraprocessed foods; and so on. I am hard-pressed to think of any significant benefits from AI that could not be had instead by just having better social policies (including ones that fund more human-involved R&D).

=== Some other random thoughts on all this

I just finished watching this interview of Geoffrey Hinton which touches on some of the points discussed here:
"Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

It is a fantastic interview that anyone interested in AI should watch.

Some nuances missed there though:

* My sig is a huge piece of his message on AI safety: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." (As a self-professed socialist Hinton at least asks for governments to be responsible to public opinion on AI safety and social safety nets -- but that still does not quite capture the idea in my sig which concerns a more fundamental issue than just prudence or charity)

* That you have to assume finite demand in a finite world by finite beings over a finite time for all goods and services for there to be job loss (which I think is true, but was not stated, as mentioned by me here: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-... ).

* That quality demands on products might go up with AI and absorb much more labor (i.e. doctors who before might reliably cure 99% of patients might be expected to cure 99.9%, where that extra 0.9% might take ten or a hundred times as much work)

* That the niece he mentioned, who used AI to go from answering one medical complaint in 35 minutes to only 5 minutes, could in theory now be paid seven times more but probably isn't -- so who got the benefits (Marshall Brain's point on wealth concentration) -- unless quality was increased.

* That while people like him and the interviewer may thrive on a "work" purpose (and will suffer in that sense if ASI can do everything better), for most people the purposes of raising children, being a good friend and neighbor, having hobbies, and spending time making healthy choices might be purpose enough.

* (Hinton touches on this, but to amplify) That right now there is room for many good enough workers in any business because there is only one of the best worker and that one person can't be everywhere doing everything. But (ignoring value in diversity) if you can just copy the best worker, and not pay the copies, then there is no room for any but the best worker. And worse, there is not room for even the best human worker if you can just employ the copies without employing the original once you have copied them. As Hinton said, digital intelligence means you can make (inexpensive) copies of systems that have already learned what they need to know -- and digital intelligence can share information a billion times faster than humans.
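
To make the quality-demand bullet above concrete, here is a toy model (my own illustrative assumption, not anything Hinton stated) in which effort scales inversely with the fraction of patients still not cured:

```python
# Toy model for the quality-demand point above: if effort scales roughly
# with 1 / (fraction of patients not yet cured), then pushing cure rates
# from 99% to 99.9% costs about 10X the effort. Purely illustrative.
def relative_effort(cure_rate):
    """Effort relative to a do-nothing baseline, under the 1/(1-p) toy assumption."""
    return 1.0 / (1.0 - cure_rate)

# Ratio of effort at 99.9% vs 99% cure rates:
print(relative_effort(0.999) / relative_effort(0.99))
```

Under that assumption, each extra "nine" of reliability multiplies the labor required by ten, which is one way higher quality expectations could absorb much of the labor AI frees up.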

My undergrad advisor in cognitive psychology (George A. Miller) passed around one of Hinton's early papers circa 1984. And George (who liked puns) joked "What are you hintin' at?" when I used "hinton" as part of a password for a shared computer account. Hinton and I must have overlapped when I was visiting CMU circa 1985-1986, but I don't recall offhand talking with him then. I think I would have enjoyed talking to him though (more so, in a way, than Herbert Simon, who as a Nobel Prize winner was then hard to get even a few short meetings with -- one reason winning the Nobel Prize tends to destroy productivity). Hinton seems like a very nice guy -- even if he worries his work might (unintentionally) spell doom for us all. Although he does say how raising two kids as a single parent changed him and made him essentially more compassionate and so on -- so maybe he is a nicer guy now than back then? In any case, I can be glad my AI career (such as it was) took a different path than his, with me spending more time thinking about the social implications than the technical implementations -- in part out of concerns about robots replacing humans that arose from talking with people in Hans Moravec's lab, where I could see that, say, self-replicating robotic cockroaches deployed for military purposes could potentially wipe out humanity and then perhaps collapse themselves, instead of our successors being Hans' idealized "mind children" exploring the universe in our stead, as in his writings mentioned below.

While Hinton does not go into it in detail in that interview, there is a reason why his intuition about neural networks was ultimately productive -- a Moore's Law increase of capacity that made statistical approaches to AI more feasible:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.datasciencecentral...
" Recently I came across an explanation by John Launchbury, the Director of DARPA's Information Innovation Office who has a broader and longer term view. He divides the history and the future of AI into three ages:
1. The Age of Handcrafted Knowledge
2. The Age of Statistical Learning
3. The Age of Contextual Adaptation."

Also related to that, by Hans Moravec from 1999 (at whose CMU lab I was a visitor over a decade earlier):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffaculty.umb.edu%2Fgary_z...
"By 2050 robot "brains" based on computers that execute 100 trillion instructions per second will start rivaling human intelligence"
"In light of what I have just described as a history of largely unfulfilled goals in robotics, why do I believe that rapid progress and stunning accomplishments are in the offing? My confidence is based on recent developments in electronics and software, as well as on my own observations of robots, computers and even insects, reptiles and other living things over the past 30 years.
        The single best reason for optimism is the soaring performance in recent years of mass-produced computers. Through the 1970s and 1980s, the computers readily available to robotics researchers were capable of executing about one million instructions per second (MIPS). Each of these instructions represented a very basic task, like adding two 10-digit numbers or storing the result in a specified location in memory. In the 1990s computer power suitable for controlling a research robot shot through 10 MIPS, 100 MIPS and has lately reached 1,000 in high-end desktop machines. Apple's new iBook laptop computer, with a retail price at the time of this writing of $1,600, achieves more than 500 MIPS. Thus, functions far beyond the capabilities of robots in the 1970s and 1980s are now coming close to commercial viability. ...
      One thousand MIPS is only now appearing in high-end desktop PCs. In a few years it will be found in laptops and similar smaller, cheaper computers fit for robots. To prepare for that day, we recently began an intensive [DARPA-funded] three-year project to develop a prototype for commercial products based on such a computer. We plan to automate learning processes to optimize hundreds of evidence-weighing parameters and to write programs to find clear paths, locations, floors, walls, doors and other objects in the three-dimensional maps. We will also test programs that orchestrate the basic capabilities into larger tasks, such as delivery, floor cleaning and security patrol. ...
        Fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize. They will result from melding powerful reasoning programs to third-generation machines. These reasoning programs will be the far more sophisticated descendants of today's theorem provers and expert systems, which mimic human reasoning to make medical diagnoses, schedule routes, make financial decisions, configure computer systems, analyze seismic data to locate oil deposits and so on.
        Properly educated, the resulting robots will become quite formidable. In fact, I am sure they will outperform us in any conceivable area of endeavor, intellectual or physical. Inevitably, such a development will lead to a fundamental restructuring of our society. Entire corporations will exist without any human employees or investors at all. Humans will play a pivotal role in formulating the intricate complex of laws that will govern corporate behavior. Ultimately, though, it is likely that our descendants will cease to work in the sense that we do now. They will probably occupy their days with a variety of social, recreational and artistic pursuits, not unlike today's comfortable retirees or the wealthy leisure classes.
        The path I've outlined roughly recapitulates the evolution of human intelligence -- but 10 million times more rapidly. It suggests that robot intelligence will surpass our own well before 2050. In that case, mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!"
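
As a quick sanity check on the numbers in that excerpt (assuming, as a rough stand-in for Moravec's own more detailed projections, one compute doubling every ~18 months):

```python
# Quick check on Moravec's figures above: how many doublings separate
# 1999's ~1,000 MIPS desktops from his "humanlike" 100 million MIPS,
# and roughly when would that arrive at one doubling per ~18 months?
# (The 18-month doubling rate is my assumption, not Moravec's.)
import math

start_mips = 1_000          # high-end desktop, 1999 (from the quote)
target_mips = 100_000_000   # Moravec's fourth-generation robot figure

doublings = math.log2(target_mips / start_mips)
years_at_18_months = doublings * 1.5

print(round(doublings, 1), "doublings, arriving around",
      round(1999 + years_at_18_months))
```

Under that assumed rate, the raw compute would arrive decades before Moravec's 2050 estimate -- which is consistent with his point that hardware capacity, not ideas, was the pacing factor.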

Comment Re:when do we get co-pilot for co-pilots (Score 2) 48

I'm not sure that their bean counters trust LLMs quite enough to let them issue quotes; but they could honestly use an expert system of some kind to cut through their SKU nonsense.

I had just the worst meeting some time back where, despite there being a total of 6 'licensing people' between MS and the VAR, there were a number of points where they were unable to determine (or came to different determinations about) what license you needed to do certain things and how much it would cost (and not 'different' in the 'MS thinks we can do X% off list, VAR thinks we can do Y% off list' sense: totally different alleged list prices, different SKUs in different quantities, and different alleged discounts).

For a company that sells both ERP and CRM software it seems like a bad look to not be able to, y'know, tell a customer who is asking about one of your product lines which model he needs and how much it will run him; and from a bean-counting perspective it seemed wild that burning at least tens of man-hours on confusion was somehow considered cost effective.

Maybe I just don't understand the psychology; and some 80k/yr sales person is totally worth it if the customer is in 'fuck it, I want this to be over' mode rather than 'hard nosed negotiator' mode when a premier licensing deal is signed; but it's always kind of a weird experience how the guys who sell consumer widgets can just give me a spec sheet and a price; but 'enterprise' means a couple of chirpy reps, a mandatory reseller, and a huge amount of manual attention.

Comment Re:Could we "pull the plug" on networked computers (Score 1) 75

Good point on the "benevolent dictator fantasy". :-) The EarthCent Ambassador Series by E.M. Foner delves into that big time with the benevolent "Stryx" AIs.

I guess most of these examples from this search fall into some variation of your last point on "scared fool with a gun" (where for "gun" substitute some social process that harms someone, with AI being part of a system):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fduckduckgo.com%2F%3Fq%3Dexam...

Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.techopedia.com%2Ftim...

Or something else I saw the other day:
"'I was misidentified as shoplifter by facial recognition tech'"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.co.uk%2Fnews%2Ftec...

Or: "10 Nightmare Things AI And Robots Have Done To Humans"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.buzzfeed.com%2Fmikes...

Sure, these are not quite the same as "AI-powered robots shooting everyone". The fact that "AI" of some sort is involved is incidental compared to plain algorithms, computer-supported or not, as have been in use for decades -- like those used to redline sections of cities to prevent issuing mortgages.

Of course there are examples of robots killing people with guns, but they are still unusual:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheconversation.com%2Fan...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.npr.org%2F2021%2F06%2F01...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2FFutur...
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/story/07/...

These automated machine guns have the potential to go wrong, but I have not yet heard that one has:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified"."

But a lot of people can still get hurt by AI acting as a dysfunctional part of a dysfunctional system (per the first items above).

Is there money to be made by fear mongering? Yes, I have to agree you are right on that.

Is *all* the worry about AI profit-driven fear mongering -- especially about concentration of wealth and power by what people using AI do to other people (like Marshall Brain wrote about in "Robotic Nation" etc)?

I think there are legitimate (and increasing) concerns similar to and worse than the ones, say, James P. Hogan wrote about. Hogan emphasized accidental issues of a system protecting itself -- and generally not issues from malice or social-bias things implemented in part intentionally by humans. Although one ending of a "Giants" book (Entoverse, I think; it's been a long time) does involve AI in league with the heroes doing unexpected stuff by providing misleading synthetic information to humorous effect.

Of course, our lives in the USA have been totally dependent for decades on 1970s era Soviet "Dead Hand" technology that the US intelligence agencies tried to sabotage with counterfeit chips -- so who knows how well it really works. So if you have a nice day today not involving mushroom clouds, you can (in part) thank a 1970s Soviet engineer for safeguarding your life. :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

It's common to think the US Military somehow defends the USA, and while there is some truth to that, it leaves out a bigger part of the picture: much of human survival depends on a multi-party global system working as expected to avoid accidents...

Two other USSR citizens we can thank for our current life in the USA: :-)

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"a senior Soviet Naval officer who prevented a Soviet submarine from launching a nuclear torpedo against ships of the United States Navy at a crucial moment in the Cuban Missile Crisis of October 1962. The course of events that would have followed such an action cannot be known, but speculations have been advanced, up to and including global thermonuclear war."

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"These missile attack warnings were suspected to be false alarms by Stanislav Petrov, an engineer of the Soviet Air Defence Forces on duty at the command center of the early-warning system. He decided to wait for corroborating evidence--of which none arrived--rather than immediately relaying the warning up the chain of command. This decision is seen as having prevented a retaliatory nuclear strike against the United States and its NATO allies, which would likely have resulted in a full-scale nuclear war. Investigation of the satellite warning system later determined that the system had indeed malfunctioned."

There is even a catchy pop tune related to the last item: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The English version retains the spirit of the original narrative, but many of the lyrics are translated poetically rather than being directly translated: red helium balloons are casually released by the civilian singer (narrator) with her unnamed friend into the sky and are mistakenly registered by a faulty early warning system as enemy contacts, resulting in panic and eventually nuclear war, with the end of the song near-identical to the end of the original German version."

If we replaced people like Stanislav Petrov and Vasily Arkhipov with AI, would we as a global society be better off?
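Petrov's judgment call can be read as a corroboration rule: don't escalate on a single sensor. A toy sketch of that rule (the function name and inputs here are hypothetical simplifications; real early-warning logic is vastly more complex):

```python
def should_escalate(satellite_alarm: bool, radar_confirmation: bool) -> bool:
    """Escalate only when independent sources corroborate the warning."""
    return satellite_alarm and radar_confirmation

# The 1983 incident in these terms: a satellite alarm arrived,
# but no corroborating ground-radar evidence ever did.
print(should_escalate(satellite_alarm=True, radar_confirmation=False))  # False: wait
```

The open question is whether an AI in that loop would be trained (or allowed) to wait for corroboration the way Petrov did.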

Here is a professor (Alain Kornhauser) I worked with on AI, robots, and self-driving cars in the second half of the 1980s, commenting recently on how self-driving cars are already safer than human-operated cars by a factor of 10X in many situations, based on Tesla data:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

But one difference is that there is a lot of training data based on car accidents and safe driving to make reliable (at least better than human) self-driving cars. We don't have much training data -- thankfully -- on avoiding accidental nuclear wars.

In general, AI is a complex, unpredictable thing (especially now), and "simple" seems like a prerequisite for reliability (for all of military, social, and financial systems):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.infoq.com%2Fpresenta...
"Rich Hickey emphasizes simplicity's virtues over easiness', showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."

Given that we as a society are pursuing a path of increasing complexity and related risk (including of global war with nukes and bioweapons, but also other risks), that's one reason (among others) that I have advocated for at least part of our society adopting simpler better-understood locally-focused resilient infrastructures (to little success, sigh).
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fprincet...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fsunrise...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...

Example of related fears from my reading too much sci-fi: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
"The race is on to make the human world a better (and more resilient) place before one of these overwhelms us:
Autonomous military robots out of control
Nanotechnology virus / gray slime
Ethnically targeted virus
Sterility virus
Computer virus
Asteroid impact
Y2K
Other unforeseen computer failure mode
Global warming / climate change / flooding
Nuclear / biological war
Unexpected economic collapse from Chaos effects
Terrorism w/ unforeseen wide effects
Out of control bureaucracy (1984)
Religious / philosophical warfare
Economic imbalance leading to world war
Arms race leading to world war
Zero-point energy tap out of control
Time-space information system spreading failure effect (Chalker's Zinder Nullifier)
Unforeseen consequences of research (energy, weapons, informational, biological)"

So, AI out of control is just one of those concerns...

So, can I point to multiple examples of AI taking over planets to the harm of their biological inhabitants (outside of sci-fi)? I have to admit the answer is no. But then I can't point to realized examples of accidental global nuclear war either (thankfully, so far).

Comment Re:ChatGPT is not a chess engine (Score 1) 138

A lot of the 'headline' announcements, pro and con, are basically useless; but this sort of thing does seem like a useful cautionary tale in the current environment, where hype is driving the ramming of largely unspecialized LLMs as 'AI features' into basically everything with a sales team. Meanwhile there's a steady drumbeat of reports of things like legal filings with hallucinated references, despite the fact that a post-processing layer that just runs your references through a conventional legal search engine to see if they return a result seems like a pretty trivial step to either automate or make the intern do.

Having a computer system that can do an at least mediocre job, a decent percentage of the time, when you throw whatever unhelpfully structured inputs at it is something of an interesting departure from what most classically designed systems can do. But for an actually useful implementation, one of the vital elements is ensuring that the right tool is actually being used for the job (which, at least in principle, you can often do, since you have full control of which system will process the inputs; and, if you are building the system for a specific purpose, often at least some control over the inputs).

Even if LLMs were good at chess, they'd be stupidly expensive compared to ordinary chess engines. I'm sure that someone is interested in making LLMs good at chess to vindicate some 'AGI' benchmark; but, from an actual system-implementation perspective, the preferred behavior would be "Oh, you're trying to play chess; would you like me to set 'uci_elo', or just have Stockfish kick your ass?" followed by a handoff to the tool that's actually good at the job.
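That handoff amounts to routing by intent. A toy sketch of such a dispatcher (the detection heuristics and names here are hypothetical illustrations, not any real product's routing logic):

```python
import re

# Rough heuristics for spotting a chess request: a FEN position string,
# or multiple moves in standard algebraic notation (SAN).
FEN_RE = re.compile(r"^([pnbrqkPNBRQK1-8]+/){7}[pnbrqkPNBRQK1-8]+ [wb] ")
SAN_RE = re.compile(r"\b([NBRQK]?[a-h]?[1-8]?x?[a-h][1-8](=[NBRQ])?[+#]?|O-O(-O)?)\b")

def route(prompt: str) -> str:
    """Pick the tool: a dedicated chess engine for chess, the LLM otherwise."""
    if FEN_RE.match(prompt) or len(SAN_RE.findall(prompt)) >= 2:
        return "chess_engine"  # hand off to e.g. Stockfish over UCI
    return "llm"               # general-purpose model handles everything else

print(route("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"))
print(route("After 1. e4 e5 2. Nf3, what should Black play?"))
print(route("Summarize this legal filing."))
```

The point isn't the regexes (a production router would classify intent more robustly); it's that the dispatch decision is cheap compared to running every query through the most expensive, least specialized tool.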
