Comment The Big Crunch by David Goodstein (1994) (Score 3, Interesting) 78

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fweb.archive.org%2Fweb%2F20...
"The period 1950-1970 was a true golden age for American science. Young Ph.D's could choose among excellent jobs, and anyone with a decent scientific idea could be sure of getting funds to pursue it. The impressive successes of scientific projects during the Second World War had paved the way for the federal government to assume responsibility for the support of basic research. Moreover, much of the rest of the world was still crippled by the after-effects of the war. At the same time, the G.I. Bill of Rights sent a whole generation back to college transforming the United States from a nation of elite higher education to a nation of mass higher education. ...
        By now, in the 1990's, the situation has changed dramatically. With the Cold War over, National Security is rapidly losing its appeal as a means of generating support for scientific research. There are those who argue that research is essential for our economic future, but the managers of the economy know better. The great corporations have decided that central research laboratories were not such a good idea after all. Many of the national laboratories have lost their missions and have not found new ones. The economy has gradually transformed from manufacturing to service, and service industries like banking and insurance don't support much scientific research. To make matters worse, the country is almost 5 trillion dollars in debt, and scientific research is among the few items of discretionary spending left in the national budget. There is much wringing of hands about impending shortages of trained scientific talent to ensure the Nation's future competitiveness, especially since by now other countries have been restored to economic and scientific vigor, but in fact, jobs are scarce for recent graduates. Finally, it should be clear by now that with more than half the kids in America already going to college, academic expansion is finished forever.
        Actually, during the period since 1970, the expansion of American science has not stopped altogether. Federal funding of scientific research, in inflation-corrected dollars, doubled during that period, and by no coincidence at all, the number of academic researchers has also doubled. Such a controlled rate of growth (controlled only by the available funding, to be sure) is not, however, consistent with the lifestyle that academic researchers have evolved. The average American professor in a research university turns out about 15 Ph.D students in the course of a career. In a stable, steady-state world of science, only one of those 15 can go on to become another professor in a research university. In a steady-state world, it is mathematically obvious that the professor's only reproductive role is to produce one professor for the next generation. But the American Ph.D is basically training to become a research professor. It didn't take long for American students to catch on to what was happening. The number of the best American students who decided to go to graduate school started to decline around 1970, and it has been declining ever since. ...
        Let me finish by summarizing what I've been trying to tell you. We stand at an historic juncture in the history of science. The long era of exponential expansion ended decades ago, but we have not yet reconciled ourselves to that fact. The present social structure of science, by which I mean institutions, education, funding, publications and so on all evolved during the period of exponential expansion, before The Big Crunch. They are not suited to the unknown future we face. Today's scientific leaders, in the universities, government, industry and the scientific societies are mostly people who came of age during the golden era, 1950 - 1970. I am myself part of that generation. We think those were normal times and expect them to return. But we are wrong. Nothing like it will ever happen again. It is by no means certain that science will even survive, much less flourish, in the difficult times we face. Before it can survive, those of us who have gained so much from the era of scientific elites and scientific illiterates must learn to face reality, and admit that those days are gone forever. I think we have our work cut out for us."

Comment "Is AI Apocalypse Inevitable? - Tristan Harris" (Score 1) 77

Another video echoing the point on the risks of AI combined with "bad" capitalism: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
        "(8:54) So just to summarize: We're currently releasing the most powerful inscrutible uncontrollable technology that humanity has ever invented that's already demonstrating the behaviors of self-preservation and deception that we thought only existed in sci-fi movies. We're releasing it faster than we've released any other technology in history -- and under the maximum incentive to cut corners on safety. And we're doing this because we think it will lead to utopia? Now there's a word for what we're doing right now -- which is this is insane. This situation is insane.
        Now, notice what you're feeling right now. Do do you feel comfortable with this outcome? But do you think that if you're someone who's in China or in France or the Middle East or you're part of building AI and you're exposed to the same set of facts about the recklessness of this current race, do you think you would feel differently? There's a universal human experience to the thing hat's being threatened by the way we're currently rolling out this profound technology into society. So, if this is crazy why are we doing it? Because people believe it's inevitable. [Same argument for any arms race.] But just think for a second. Is the current way that we're rolling out AI actually inevitable like if if literally no one on Earth wanted this to happen would the laws of physics force AI out into society? There's a critical difference between believing it's inevitable which creates a self-fulfilling prophecy and leads people to being fatalistic and surrendering to this bad outcome -- versus believing it's really difficult to imagine how we would do something really different. But "it's difficult" opens up a whole new space of options and choice and possibility than simply believing "it's inevitable" which is a thought-terminating cliche. And so the ability for us to choose something else starts by stepping outside the self-fulfilling prophecy of inevitability. We can't do something else if we believe it's inevitable.
        Okay, so what would it take to choose another path? Well, I think it would take two fundamental things. The first is that we have to agree that the current path is unacceptable. And the second is that we have to commit to finding another path -- but under different incentives that offer more discernment, foresight, and where power is matched with responsibility. So, imagine if the whole world had this shared understanding about the insanity, how differently we might approach this problem..."

He also makes the point that we ignored the downsides of social media and so got the current problematic situation related to it -- so do we really want to do the same with way-more-risky AI? He calls for "global clarity" on AI issues. He provides examples from nuclear weapons, biotech, and the ozone layer on how collective understanding and then collective action made a difference in managing risks.

Tristan Harris is associated with "The Center For Humane Technology" (whose mailing list I joined a while back):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.humanetech.com%2F
"Articulating challenges.
Identifying interventions.
Empowering humanity."

Just saw this yesterday: former President Obama talking about how concerns about AI (mostly about economic disruption) are not hype, and also about how cooperation between people is the biggest issue:
"ORIGINAL FULL CONVERSATION: An Evening with President Barack Obama"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
        "(31:43) The changes I just described are accelerating. If you ask me right now the thing that is not talked about enough but is coming to your neighborhood faster than you think, this AI revolution is not made up; it's not overhyped. ... I was talking to some people backstage who are uh associated with businesses uh here in the Hartford community. Uh, I guarantee you you're going to start seeing shifts in white collar work as a consequence of uh what these new AI models can do. And so that's going to be more disruption. And it's going to speed up. Which is why uh, one of the things I discovered as president is most of the problems we face are not simply technical problems. If we want to solve climate change, uh we probably do need some new battery technologies and we need to make progress in terms of getting to zero emission carbons. But, if we were organized right now we could reduce our emissions by 30% with existing technologies. It'd be a big deal. But getting people organized to do that is hard. Most of the problems we have, have to do with how do we cooperate and work together, uh not you know ... do we have a ten point plan or the absence of it."

I would respectfully build on what President Obama said by adding that a major reason it is hard to get people to cooperate around such technology is that we need to shift our perspective, as suggested by my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

I said much the same in the open letter to Michelle Obama from 2011:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fopen-le...

One thing I would add to such a letter now is a mention of Dialogue Mapping using IBIS (perhaps even AI-assisted) to help people cooperate on solving "wicked" problems through visualizing the questions, options, and supporting pros and cons in their conversations:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fcognitive-science.info...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fmedia%2Fl...
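
To make the IBIS idea concrete, here is a minimal toy sketch in Python of how such a dialogue map might be represented (the node kinds and names are my own illustrative assumptions, not the data model of Compendium or any particular Dialogue Mapping tool):

    # Minimal toy sketch of an IBIS (Issue-Based Information System) tree.
    # Node kinds and field names here are illustrative assumptions only.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str                      # "question", "idea", "pro", or "con"
        text: str
        children: list = field(default_factory=list)

        def add(self, kind, text):
            child = Node(kind, text)
            self.children.append(child)
            return child

        def show(self, depth=0):       # print the map as an indented outline
            print("  " * depth + f"[{self.kind}] {self.text}")
            for child in self.children:
                child.show(depth + 1)

    # Example: mapping a fragment of the AI conversation above.
    q = Node("question", "How should society roll out powerful AI?")
    race = q.add("idea", "Race ahead under current market incentives")
    race.add("con", "Maximum incentive to cut corners on safety")
    coop = q.add("idea", "Coordinate under incentives matching power and responsibility")
    coop.add("pro", "Builds the shared 'global clarity' needed for action")
    q.show()

Real tools are of course richer (graphs rather than trees, shared editing, and so on), but even this much structure makes the questions, options, and pros and cons of a conversation visible at a glance.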

Here is one example of some people working in that general area to support human collaboration on "wicked problems" (there are others, but these are the people I am conversing with at the moment): "The Sensemaking Scenius" (one way to help get the "global clarity" that Tristan Harris and, indirectly, President Obama call for):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.scenius.space%2F
        "The internet gods blessed us with an abundance of information & connectivity -- and in the process, boiled our brains. We're lost in a swirl of irrelevancy, trading our attention, at too low a price. Technology has destroyed our collective sensemaking. It's time to rebuild our sanity. But how?
Introducing The Sensemaking Scenius, a community of practice for digital builders, researchers, artists & activists who share a vision of a regenerative intentional & meaningful internet."

Something related to that by me from 2011:
http://barcamp.org/w/page/4722...
        "This workshop was led by Paul Fernhout on the theme of tools for collective sensemaking and civic engagement."

I can hope for a convergence of these AI concerns, these sorts of collaborative tools, and civic engagement.

Bucky Fuller talked about being a "trim tab", a smaller rudder on a big rudder for a ship, where the trim tab slowly turns the bigger rudder which ultimately turns the ship. Perhaps civic groups can also be "trim tabs", as in: "Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it's the only thing that ever has. (Margaret Mead)"

To circle back to the original article on what Facebook is doing: frankly, if there are some people at Facebook who really care about the future of humanity more than the next quarter's profits, this is the kind of work they could be doing related to "Artificial Super Intelligence". They could add tools for Dialogue Mapping to Facebook's platform (with IBIS or similar, perhaps supported by AI) to help people understand the risks and opportunities of AI and to support related social collaboration towards workable solutions -- rather than just rushing ahead to create ASI for some perceived short-term economic advantage. And this sort of collaboration-enhancing work is the kind of thing Facebook should be paying 100-million-dollar signing bonuses for, if such bonuses make any sense.

I quoted President Carter in that open letter, and the sentiment is as relevant about AI as it was then about energy:
        http://www.pbs.org/wgbh/americ...
        "We are at a turning point in our history. There are two paths to choose. One is a path I've warned about tonight, the path that leads to fragmentation and self-interest. Down that road lies a mistaken idea of freedom, the right to grasp for ourselves some advantage over others. That path would be one of constant conflict between narrow interests ending in chaos and immobility. It is a certain route to failure. All the traditions of our past, all the lessons of our heritage, all the promises of our future point to another path, the path of common purpose and the restoration of American values. That path leads to true freedom for our nation and ourselves. We can take the first steps down that path as we begin to solve our energy [or AI] problem."

Comment Transcending to a happy singularity? (Score 1) 77

You wrote: "As useful as capitalism has proved to be, its motivations are primitive and short sighted. How AI is being punted is another example of "bad" capitalism. Bad capitalism has helped wreck the planet more than anything else."

Geoffrey Hinton, as a self-professed socialist, makes a version of your point in the interview previously linked to.

And your point is ultimately the key insight emerging from our discussion, as I reflect on it. AGI or especially ASI may indeed take over someday to humanity's detriment, but if that happens it is likely well in the future. The biggest threat right now to most humans is other humans developing and using AGI or ASI within a capitalist framework.

I wrote to Ray Kurzweil about something similar back in 2007, responding to a point in one of his books where he was suggesting the best way to quickly get AI was for competitive US corporations to create it. I suggested essentially that AI produced through competition is more likely to have a bad outcome for humanity than AI produced through cooperation. I'd suggest the points there could be said about several current AI entrepreneurs. Someone I sent it to put it up here, and I will include a key excerpt below:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fheybryan.org%2Ffernhout%2F...

That said, other systems, like the USSR's, have their own legacies of environmental destruction and suffering (as with Chernobyl). So capitalism has not cornered the market on poor risk management -- even though the ideal of any capitalist enterprise is to privatize gains while socializing risks and costs.

Here is one book of many I've collected on improving organizations (maybe of tangential relevance if you are thinking about organization improvement for your project):
"Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness" by Frédéric Laloux
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgithub.com%2Fpdfernhout%2F...
"Reinventing Organizations is a radical book, in the sense that radical means getting to the root of a problem. Drawing on works by other writers about organizations and human development, Frédéric Laloux paints a historical picture of moving through different stages of organizational development which he labels with colors. These stages are:
* Red (impulsive, gang-like, alpha male)
* Amber (conformist, pyramidal command and control)
* Orange (achievement, mechanistic, scarcity-assuming, cross-functional communications across a pyramid)
* Green (pluralistic, inverted pyramid with servant leadership and empowered front line)
* Teal (evolutionary, organic, abundance-assuming, self-actualized, self-organizing, purpose-driven)."

Maybe we as a society need to become Teal overall -- or at least Green -- if we are to prosper with AI?

Good talking to you too, same.

--------- From book-review-style email to Ray Kurzweil in 2007

To grossly simplify a complex subject, the elite political and economic culture Kurzweil finds himself in as a success in the USA now centers around maintaining an empire through military preparedness and preventive first strikes, coupled with a strong police state to protect the accumulated wealth of the financially obese. This culture favors market-driven approaches to the innovations needed to sustain this militarily-driven, police-state-trending economy, where entrepreneurs are kept on very short leashes, where consumers are dumbed down via compulsory schooling, and where dissent is easily managed by removing profitable employment opportunities from dissenters, leading to self-censorship. Kurzweil is a person now embedded in the culture of the economic upper crust of the USA's military and economic leadership. So, one might expect Kurzweil to write from that perspective, and he does. His solutions to the problems the singularity poses reflect all these trends -- from promoting first-strike use of nanobots, to design and implementation facilitated through greed, to widespread one-way surveillance of the populace by a controlling elite.

But the biggest problem with the book _The Singularity Is Near: When Humans Transcend Biology_ is that Kurzweil seems unaware that he is doing so. He takes all those things as given, like a fish ignoring water, ignoring the substance of authors like Zinn, Chomsky, Domhoff, Gatto, Holt, and so on. And that shows a lack of self-reflection on the part of the book's author. And it is a lack of self-reflection which seems dangerously reckless for a person of Kurzweil's power (financial, political, intellectual, and persuasive). Of course, the same thing could be said of many other leaders in the USA, so he is not alone there. But one expects more from someone like Ray Kurzweil for some reason, given his incredible intelligence. With great power comes great responsibility, and one of those responsibilities is to be reasonably self-aware of one's own history and biases and limitations. He has not yet joined the small but growing camp of the elite who realize that accompanying the phase change the information age is bringing on must be a phase change in power relationships, if anyone is to survive and prosper. And ultimately, that means not a move to new ways of being human, but instead a return to old ways of being human, as I shall illustrate below drawing on work by Marshall Sahlins. ...

One of the biggest problems as a result is Kurzweil's view of human history as incremental and continual "progress". He ignores how our society has gone through several phase changes in response to continuing human evolution and increasing population densities: the development of fire and language and tool-building, the rise of militaristic agricultural bureaucracies, the rise of industrial empires, and now the rise of the information age. Each has posed unique difficulties, and the immediate result of the rise of militaristic agricultural bureaucracies or industrialism was most definitely a regression in standard of living for many humans at the time. For example, studies of human skeleton size, which reflect nutrition and health, show that early agriculturists were shorter than preceding hunter-gatherers and showed more evidence of disease and malnutrition. This is a historical experience glossed over by Kurzweil's broad exponential trend charts related to longevity, which jump from Cro-Magnon to the industrial era. Yes, the early industrial times of Dickens in the 1800s were awful, but that does not mean the preceding times were even worse -- they might well have been better in many ways. This is a serious logical error in Kurzweil's premises leading to logical problems in his subsequent analysis. It is not surprising he makes this mistake, as the elite in the USA he is part of finds this fact convenient to ignore, as it would threaten the whole set of justifications related to "progress" woven around itself to justify a certain unequal distribution of wealth. It is part of a convenient ignorance of the implications of, say, the Enclosure Acts in England, which drove the people from the land and farms that sustained them, forcing them into deadly factory work against their will -- an example of industrialization creating the very poverty Kurzweil claims it will alleviate.

As Marshall Sahlins shows, for most of history, humans lived in a gift economy based on abundance. And within that economy, for most food or goods, families or tribes were mainly self-reliant, drawing from an abundant nature they had mostly tamed. Naturally there were many tribes with different policies, so it is hard to completely generalize on this topic -- but certainly some did show these basic common traits of that lifestyle. Only in the last few thousand years did agriculture and bureaucracy (e.g. centered in Ancient Egypt, China, and Rome) come to dominate human affairs -- but even then it was a dominance from afar and a regulation of only a small part of life and time. It is only in the last few hundred years that the paradigm has shifted to specialization and an economy based on scarcity. Even most farms 200 years ago (which was where 95% of the population lived then) were self-reliant for most of their items judged by mass or calories. But clearly humans have been adapted, for most of their recent evolution, to a life of abundance and gift giving.

Combining these factors, one can see that Kurzweil is right about most recent historical trends, with this glaring exception, but then presents an incomplete and misleading analysis of current events and future trends, because his historical analysis is incomplete and biased. ...

So, this would suggest more caution approaching a singularity. And it would suggest the ultimate folly of maintaining R&D systems motivated by short-term greed to develop the technology leading up to it. But it is exactly such a policy of creating copyright and patents via greed (the so-called "free market", where paradoxically nothing is free) that Kurzweil exhorts us to expand. And it is likely here where his own success most betrays him -- where the tragedy of the approach to the singularity he promotes will result from his being blinded by his very great previous economic success. If anything, the research leading up to the singularity should be done out of love and joy and humor and compassion -- with as little greed about it as possible IMHO. But somehow Kurzweil suggests the same processes that brought us the Enron collapse and war profiteering through the destruction of the cradle of civilization in Iraq are the ones to bring humanity safely through the singularity. One pundit, I forget who, suggested the problem with US cinema and TV was that there were not enough tragedies produced for them -- not enough cautionary tales to help us avert such tragic disasters arising from our own limitations and pride.

Kurzweil's rebuttals to critics in the last part of the book primarily focus on those who do not believe AI can work, or those who doubt the singularity, or the potential of nanotechnology or other technologies. One may well disagree with Kurzweil on the specific details of the development of those trends, but many people besides him, including before him, have talked about the singularity and said similar things. Of the fact of an approaching singularity there seems little doubt, even if one can quibble about dates and such. But the very nature of a singularity is that you can't peer inside it, although Kurzweil attempts to do so anyway, without enough caveats or self-reflection. So, what Ray Kurzweil sees in the mirror of a reflective singularity is ultimately a reflection of -- Ray Kurzweil and his current political beliefs.

The important thing is to remember that Kurzweil's book is a quasi-Libertarian/Conservative view of the singularity. He mostly ignores the human aspects of joy, generosity, compassion, dancing, caring, and so on to focus on a narrow view of logical intelligence. His antidote to fear is not joy or humor -- it is more fear. He has no qualms about enslaving robots or AIs in the short term. He has no qualms about accelerating an arms race into cyberspace. He seems to have a significant fear of death (focusing a lot on immortality). The real criticisms Kurzweil needs to address are not the straw men which he attacks (many of which are produced by people with the same capitalist / militarist assumptions he has). It is the criticisms that come from those thinking about economies not revolving around scarcity, or those who reflect on the deeper aspects of human nature beyond greed and fear and logic, which Kurzweil needs to address. Perhaps he even needs to address them as part of his own continued growth as an individual. To do so, he needs to intellectually, politically, and emotionally move beyond the roots that produced the very economic and political success which let his book become so popular. That is the hardest thing for any commercially successful artist or innovator to do. It is often a painful process full of risk. ...

I do not intend to vilify Kurzweil here. I think he means well. And he is courageous to talk about the singularity and think about ways to approach it to support the public good. His early work on music equipment and tools for the blind is laudable. So was his early involvement with Unitarians and social justice. But somewhere along the line perhaps his perspective became shackled by his own economic success. To paraphrase a famous quote, perhaps it is "easier for a camel to go through the eye of a needle than a rich man to comprehend the singularity." :-) I wish him the best in wrestling with this issue in his next book.

Comment Echoing your complex systems & adaptations poi (Score 1) 52

Good points on complex systems and adaptations. Thanks! I hope you are right overall on that in practice -- especially about AI-co-designed biowarfare agents not being a huge issue versus Eric Schmidt's point on the "Offense Dominant" nature of such.

And advanced computing can otherwise indeed do a lot of good in health care. For example:
"Taking the bite out of Lyme disease: New studies offer insight into disease's treatment, lingering symptoms"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fnews.northwestern.edu%2F...
"Northwestern scientists identified that piperacillin, an antibiotic in the same class as penicillin, effectively cured mice of Lyme disease at 100-times less than the effective dose of doxycycline. At such a low dose, piperacillin also had the added benefit of "having virtually no impact on resident gut microbes," according to the study.
        The team screened nearly 500 medicines in a drug library, using a molecular framework to understand potential interactions between antibiotics and the Borrelia bacteria. Once the group had a short list of potentials, they performed additional physiological, cellular and molecular tests to identify compounds that did not impact other bacteria.
        The authors argue that piperacillin, which has already been FDA-approved as a safe treatment for pneumonia, could also be a candidate for preemptive interventions for those potentially exposed to Lyme (with a known deer tick bite).
        They found that piperacillin exclusively interfered with the unusual cell wall synthesis pattern common to Lyme bacteria, preventing the bacteria from growing or dividing and ultimately leading to its death."

Tangentially, on dealing with antibiotic resistance using phages, an area people have explored in the past and are re-exploring now in response to current issues: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

And rising CO2 levels and related climate change might be limited compared to worst-case predictions if we continue to switch over to solar power and so on.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheroundup.org%2Fsolar-p...
"The world's current solar energy capacity is 850.2 GW (gigawatts). This is the maximum amount of energy that all global solar installations combined can produce at any one time. This figure has increased every year for the last decade and is more than ten times higher than it was in 2011, according to the latest data from IRENA and Ember."

If we have just one more decade or so of a 10X increase, then much of our power will come from renewables instead of fossil fuels. And part of that growth is no doubt a response to earlier worries, like those about peak oil and global climate change -- to echo your point on ongoing responsive adaptations in complex systems.
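
As a rough sanity check on that, here is a back-of-envelope calculation (a Python sketch using round numbers that are my own assumptions, not precise figures):

    # Toy back-of-envelope check on the "one more 10X decade" idea above.
    # All numbers are rough assumptions for illustration only.
    capacity_gw = 850.2         # global solar capacity (GW), per the quote above
    growth_factor = 10          # assume another 10X over roughly a decade
    capacity_factor = 0.2       # assume ~20% average output vs. nameplate rating

    future_avg_output_gw = capacity_gw * growth_factor * capacity_factor
    world_avg_demand_gw = 3000  # assume ~26,000 TWh/yr of world electricity use

    print(f"Projected average solar output: {future_avg_output_gw:.0f} GW")
    print(f"Rough share of world electricity: "
          f"{future_avg_output_gw / world_avg_demand_gw:.0%}")
    # Prints roughly 1700 GW and ~57% -- on the order of half of world
    # electricity, consistent with "much of our power" (for electricity).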

So, a bunch of hopeful news is out there too when we look for it, reflecting your point on responsiveness. Thanks for sharing your optimism about humanity's resilience.

Comment Was SARS-CoV-2 an example of "harms"? (Score 1) 52

What do you think of the idea that SARS-CoV-2 was perhaps engineered in the USA as a DARPA-funded self-spreading vaccine intended for bats to prevent zoonotic outbreaks in China but then accidentally leaked out when being tested by a partner in Wuhan who had a colony of the bats the vaccine was intended for? More details on the possible "who what when how and why" of all that:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdailysceptic.org%2F2024%2F...

If true, it provides an example of how dangerous this sort of gain-of-function bio-engineering of viruses can be (even if perhaps well-intended by the people involved). Also on that theme from 2014:
"Threatened pandemics and laboratory escapes: Self-fulfilling prophecies"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fthebulletin.org%2F2014%2F0...
"Looking at the problem pragmatically, the question is not if such escapes will result in a major civilian outbreak, but rather what the pathogen will be and how such an escape may be contained, if indeed it can be contained at all."

My worry with SARS-CoV-2 from the start (working in the biotech industry at the time, including by chance helping track the evolution of SARS-CoV-2) was that so much effort would go into researching the virus and understanding why it was so transmissible and dangerous (mainly to older people) that such knowledge could be misused by humans to make worse viruses. Sadly, AI now accelerates that risk (as in the video I linked to).

Here is Eric Schmidt recently saying essentially the same thing as far as the risk of AI being used to create pathogens for nefarious purposes and how he and others are very worried about it:
"Dr. Eric Schmidt: Special Competitive Studies Project"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Comment AI and job replacement (Score 1) 77

I don't know if "enthusiastic about the tech" completely describes my feelings about a short-term employment threat and a longer-term existential threat, but, yeah, neat stuff. Kind of maybe like respecting the cleverness and potential danger of a nuclear bomb or a black widow spider?

A related story I submitted: "The Workers Who Lost Their Jobs To AI "
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fnews.slashdot.org%2Fstor...

Sure, maybe there is some braggadocio in Hinton somewhere, as we all have. But he just does not come across much that way to me. He seems genuinely somewhat penitent in how he talks. Hinton at age 77 sounds to me more like someone who had an enjoyable career in academia building neat things no one thought he could, who just wants to retire from the limelight (as he says in the video) but feels compelled to spread a message of concern. I could be wrong about his motivation, perhaps.

On whether AI exists, I've seen about four decades of things that were once "AI" no longer being considered AI once computers can do them (as others have commented before me). That has been everything from answering text questions about moon rocks, to playing chess, to composing music, to reading printed characters in books, to recognizing the shape of 3D objects, to driving a car, to now generating videos, and more.

An example of the last, a video which soon will probably no longer be thought of as involving "AI":
""it's over, we're cooked!" -- says girl that literally does not exist..."
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2Fsingu...

On robots vs. chimps, robots have come a long way since, say, I saw one of the first hopping robots in Marc Raibert's lab at CMU in 1986 (and then later saw it while visiting the MIT Museum with my kid). An example of what Marc Raibert and associates (now Boston Dynamics) have since achieved after forty years of development:
"Boston Dynamics New Atlas Robot Feels TOO Real and It's Terrifying!"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Some examples from this search:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fduckduckgo.com%2F%3Fq%3Drobo...

"I Witnessed the MOST ADVANCED Robotic Hand at CES 2025"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

"Newest Robotic Hand is Sensitive as Fingertips"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

"ORCA: An Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic Hand - IROS 2025 Paper Video"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

What's going on in China:
"China's First Robot With Human Brain SHOCKED The World at FAIR Plus Exhibition"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Lots more stuff out there. So if by "long time" on achieving fine motor control you mean by last year, well, maybe. :-) I agree with you though that the self-replicating part is still quite a ways off. Inspired by James P. Hogan's "The Two Faces of Tomorrow" (1978) and NASA's Advanced Automation for Space Missions (1980), I tried (grandiosely) to help with that self-replicating part -- to little success so far:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fprincet...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fsunrise...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...

On who will buy stuff, it is perhaps a capitalist version of the "tragedy of the commons". Every company thinks it will get the first-mover advantage by firing most of its workers and replacing them with AI and robots. They don't think past the next quarter, or at best the next year. Who will pay for products, or who will pay unemployed workers to survive for decades, is someone else's problem.

See the 1950s sci-fi story "The Midas Plague" for some related humor on dealing with the resulting economic imbalance. :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
""The Midas Plague" (originally published in Galaxy in 1954). In a world of cheap energy, robots are overproducing the commodities enjoyed by humankind. The lower-class "poor" must spend their lives in frantic consumption, trying to keep up with the robots' extravagant production, while the upper-class "rich" can live lives of simplicity. Property crime is nonexistent, and the government Ration Board enforces the use of ration stamps to ensure that everyone consumes their quotas. ..."

Related on how in the past the Commons were surprisingly well-managed anyway:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"In Governing the Commons, Ostrom summarized eight design principles that were present in the sustainable common pool resource institutions she studied ... Ostrom and her many co-researchers have developed a comprehensive "Social-Ecological Systems (SES) framework", within which much of the still-evolving theory of common-pool resources and collective self-governance is now located."

Marc Andreessen might disagree with some of those principles and have his own?
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The "Techno-Optimist Manifesto" is a 2023 self-published essay by venture capitalist Marc Andreessen. The essay argues that many significant problems of humanity have been solved with the development of technology, particularly technology without any constraints, and that we should do everything possible to accelerate technology development and advancement. Technology, according to Andreessen, is what drives wealth and happiness.[1] The essay is considered a manifesto for effective accelerationism."

I actually like most of what Marc has to say -- except he probably fundamentally misses "The Case Against Competition" and why AI produced through capitalist competition will likely doom us all:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.alfiekohn.org%2Farti...
"Children succeed in spite of competition, not because of it. Most of us were raised to believe that we do our best work when weâ(TM)re in a race -- that without competition we would all become fat, lazy, and mediocre. Itâ(TM)s a belief that our society takes on faith. Itâ(TM)s also false."

I agree AI can be overhyped. But then I read somewhere that so were the early industrial power looms -- which were used more as a threat to keep wages down and working conditions poor, since otherwise the factory owners would bring in the (expensive-to-install) looms.

Good luck and have fun with your project! Pessimistically, it may perhaps have already "succeeded" if just knowing about it has made company workers nervous enough that they are afraid to ask for raises or more benefits. Optimistically though, it may instead mean the company will be more successful and can afford to pay more to retain skilled workers who work well with AI. I hope for you it is more of the latter than the former.

Something I posted a while back on how AI and robotics can provide an illusion of increasing employment by helping one company grow while its competitors shrink: https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments....

Or from 2021:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fj...
"According to a new academic research study, automation technology has been the primary driver in U.S. income inequality over the past 40 years. The report, published by the National Bureau of Economic Research, claims that 50% to 70% of changes in U.S. wages, since 1980, can be attributed to wage declines among blue-collar workers who were replaced or degraded by automation."

But in an (unregulated, mostly-non-unionized) capitalist system, what choice do most owners or employees (e.g. you) really have but to embrace AI and robotics -- to the extent it is not hype -- and race ahead?

Well, setting that aside, owners and employees could also expand their participation in subsistence, gift, and planned transactions as fallbacks -- but that is a big conceptual leap and still does not change the exchange-economy imperative. A video and an essay I made on that:
"Five Interwoven Economies: Subsistence, Gift, Exchange, Planned, and Theft"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Beyond a Jobless Recovery: A heterodox perspective on 21st century economics"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-...

Anyway, I'm probably just starting to repeat myself here.

Comment Re:Could we "pull the plug" on networked computers (Score 1) 77

Thanks for the conversation. On your point "And why want it in the first place?" -- that is an insightful point, and I wish more people would think about it. Frankly, I don't think we need AI of any great sort right now, even if it is hard to argue with the value of some current AI systems like machine vision for parts inspection. Most of the "benefits" AI advocates trot out (e.g. solving world hunger, or global climate change, or cancer, or whatever) are generally issues that have to do with politics and economics (e.g. there is enough food to go around but poor people can't pay for its broad distribution; renewable energy can power all our needs and would be cheaper if fossil fuels had to pay true costs up front, including defense costs; most cancer is from diet and lifestyle and toxins, which are problems because of mis-incentives like subsidizing ultraprocessed foods; etc.). I am hard-pressed to think of any significant benefits from AI that could not be had instead through better social policies (including ones that fund more human-involved R&D).

=== Some other random thoughts on all this

I just finished watching this interview of Geoffrey Hinton which touches on some of the points discussed here:
"Godfather of AI: I Tried to Warn Them, But We've Already Lost Control! Geoffrey Hinton"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

It is a fantastic interview that anyone interested in AI should watch.

Some nuances missed there though:

* My sig is a huge piece of his message on AI safety: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity." (As a self-professed socialist Hinton at least asks for governments to be responsible to public opinion on AI safety and social safety nets -- but that still does not quite capture the idea in my sig which concerns a more fundamental issue than just prudence or charity)

* That you have to assume finite demand in a finite world by finite beings over a finite time for all goods and services for there to be job loss (which I think is true, but was not stated, as mentioned by me here: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-... ).

* That quality demands on products might go up with AI and absorb much more labor (i.e. doctors who might before reliably cure 99% of patients might be expected to cure 99.9%, where that extra 0.9% might take ten or a hundred times as much work)

* That his niece, who he mentioned used AI to go from answering one medical complaint in 35 minutes to only 5 minutes, could in theory now be paid seven times more but probably isn't -- so who got the benefits (Marshall Brain's point on wealth concentration) -- unless quality was increased.

* That while people like him and the interviewer may thrive on a "work" purpose (and will suffer in that sense if ASI can do everything better), for most people the purpose of raising children and being a good friend and neighbor and having hobbies and spending time making health choices might be purpose enough.

* (Hinton touches on this, but to amplify) That right now there is room for many good-enough workers in any business, because there is only one best worker and that one person can't be everywhere doing everything. But (ignoring the value in diversity) if you can just copy the best worker, and not pay the copies, then there is no room for any but the best worker. And worse, there is no room for even the best human worker if you can just employ the copies without employing the original once you have copied them. As Hinton said, digital intelligence means you can make (inexpensive) copies of systems that have already learned what they need to know -- and digital intelligences can share information a billion times faster than humans.

My undergrad advisor in cognitive psychology (George A. Miller) passed around one of Hinton's early papers circa 1984. And George (who liked puns) joked "What are you hintin' at?" when I used "hinton" as part of a password for a shared computer account. Hinton and I must have overlapped when I was visiting CMU circa 1985-1986, but I don't recall offhand talking with him then. I think I would have enjoyed talking to him though (more so in a way than Herbert Simon, who as a Nobel Prize winner was then hard to get even a few short meetings with -- one reason winning the Nobel Prize tends to destroy productivity). Hinton seems like a very nice guy -- even if he worries his work might (unintentionally) spell doom for us all. Although he does say how raising two kids as a single parent changed him and made him essentially more compassionate and so on -- so maybe he is a nicer guy now than back then? In any case, I can be glad my AI career (such as it was) took a different path than his, with me spending more time thinking about the social implications than the technical implementations (in part out of concerns about robots replacing humans that arose from talking with people in Hans Moravec's lab -- where I could see that, say, self-replicating robotic cockroaches deployed for military purposes could potentially wipe out humanity and then perhaps collapse themselves, instead of our successors being Hans' idealized "mind children" exploring the universe in our stead, as in his writings mentioned below).

While Hinton does not go into it in detail in that interview, there is a reason why his intuition on neural networks was ultimately productive -- a Moore's Law increase in capacity made statistical approaches to AI more feasible:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.datasciencecentral...
" Recently I came across an explanation by John Launchbury, the Director of DARPA's Information Innovation Office who has a broader and longer term view. He divides the history and the future of AI into three ages:
1. The Age of Handcrafted Knowledge
2. The Age of Statistical Learning
3. The Age of Contextual Adaptation."

Also related to that, by Hans Moravec from 1999 (at whose CMU lab I was a visitor over a decade earlier):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffaculty.umb.edu%2Fgary_z...
"By 2050 robot "brains" based on computers that execute 100 trillion instructions per second will start rivaling human intelligence"
"In light of what I have just described as a history of largely unfulfilled goals in robotics, why do I believe that rapid progress and stunning accomplishments are in the offing? My confidence is based on recent developments in electronics and software, as well as on my own observations of robots, computers and even insects, reptiles and other living things over the past 30 years.
        The single best reason for optimism is the soaring performance in recent years of mass-produced computers. Through the 1970s and 1980s, the computers readily available to robotics researchers were capable of executing about one million instructions per second (MIPS). Each of these instructions represented a very basic task, like adding two 10-digit numbers or storing the result in a specified location in memory. In the 1990s computer power suitable for controlling a research robot shot through 10 MIPS, 100 MIPS and has lately reached 1,000 in high-end desktop machines. Apple's new iBook laptop computer, with a retail price at the time of this writing of $1,600, achieves more than 500 MIPS. Thus, functions far beyond the capabilities of robots in the 1970s and 1980s are now coming close to commercial viability. ...
      One thousand MIPS is only now appearing in high-end desktop PCs. In a few years it will be found in laptops and similar smaller, cheaper computers fit for robots. To prepare for that day, we recently began an intensive [DARPA-funded] three-year project to develop a prototype for commercial products based on such a computer. We plan to automate learning processes to optimize hundreds of evidence-weighing parameters and to write programs to find clear paths, locations, floors, walls, doors and other objects in the three-dimensional maps. We will also test programs that orchestrate the basic capabilities into larger tasks, such as delivery, floor cleaning and security patrol. ...
        Fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize. They will result from melding powerful reasoning programs to third-generation machines. These reasoning programs will be the far more sophisticated descendants of today's theorem provers and expert systems, which mimic human reasoning to make medical diagnoses, schedule routes, make financial decisions, configure computer systems, analyze seismic data to locate oil deposits and so on.
        Properly educated, the resulting robots will become quite formidable. In fact, I am sure they will outperform us in any conceivable area of endeavor, intellectual or physical. Inevitably, such a development will lead to a fundamental restructuring of our society. Entire corporations will exist without any human employees or investors at all. Humans will play a pivotal role in formulating the intricate complex of laws that will govern corporate behavior. Ultimately, though, it is likely that our descendants will cease to work in the sense that we do now. They will probably occupy their days with a variety of social, recreational and artistic pursuits, not unlike today's comfortable retirees or the wealthy leisure classes.
        The path I've outlined roughly recapitulates the evolution of human intelligence -- but 10 million times more rapidly. It suggests that robot intelligence will surpass our own well before 2050. In that case, mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!"

Comment Re:Could we "pull the plug" on networked computers (Score 1) 77

Good point on the "benevolent dictator fantasy". :-) The EarthCent Ambassador Series by E.M. Foner delves into that big time with the benevolent "Stryx" AIs.

I guess most of these examples from this search fall into some variation of your last point on "scared fool with a gun" (where for "gun" substitute some social process that harms someone, with AI being part of a system):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fduckduckgo.com%2F%3Fq%3Dexam...

Example top result:
"8 Times AI Bias Caused Real-World Harm"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.techopedia.com%2Ftim...

Or something else I saw the other day:
"'I was misidentified as shoplifter by facial recognition tech'"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.co.uk%2Fnews%2Ftec...

Or: "10 Nightmare Things AI And Robots Have Done To Humans"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.buzzfeed.com%2Fmikes...

Sure, these are not quite the same as "AI-powered robots shooting everyone". The fact that "AI" of some sort is involved is incidental compared to plain computerized (or even manual) algorithms such as have been in use for decades, like those used to redline sections of cities to prevent issuing mortgages.

Of course there are examples of robots killing people with guns, but they are still unusual:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheconversation.com%2Fan...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.npr.org%2F2021%2F06%2F01...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2FFutur...
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/story/07/...

These automated machine guns have the potential to go wrong, but I have not yet heard that one has:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The SGR-A1 is a type of autonomous sentry gun that was jointly developed by Samsung Techwin (now Hanwha Aerospace) and Korea University to assist South Korean troops in the Korean Demilitarized Zone. It is widely considered as the first unit of its kind to have an integrated system that includes surveillance, tracking, firing, and voice recognition. While units of the SGR-A1 have been reportedly deployed, their number is unknown due to the project being "highly classified"."

But a lot of people can still get hurt by AI acting as a dysfunctional part of a dysfunctional system (as in the first items above).

Is there money to be made by fear mongering? Yes, I have to agree you are right on that.

Is *all* the worry about AI profit-driven fear mongering -- especially about concentration of wealth and power by what people using AI do to other people (like Marshall Brain wrote about in "Robotic Nation" etc)?

I think there are legitimate (and increasing) concerns similar to, and worse than, the ones, say, James P. Hogan wrote about. Hogan emphasized accidental issues of a system protecting itself -- and generally not issues from malice or social bias implemented in part intentionally by humans. Although one ending of a "Giants" book (Entoverse, I think; it's been a long time) does involve AI in league with the heroes doing unexpected stuff by providing misleading synthetic information, to humorous effect.

Of course, our lives in the USA have been totally dependent for decades on 1970s era Soviet "Dead Hand" technology that the US intelligence agencies tried to sabotage with counterfeit chips -- so who knows how well it really works. So if you have a nice day today not involving mushroom clouds, you can (in part) thank a 1970s Soviet engineer for safeguarding your life. :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

It's common to think the US Military somehow defends the USA, and while there is some truth to that, it leaves out a bigger part of the picture: much of human survival depends on a multi-party global system working as expected to avoid accidents...

Two other USSR citizens we can thank for our current life in the USA: :-)

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"a senior Soviet Naval officer who prevented a Soviet submarine from launching a nuclear torpedo against ships of the United States Navy at a crucial moment in the Cuban Missile Crisis of October 1962. The course of events that would have followed such an action cannot be known, but speculations have been advanced, up to and including global thermonuclear war."

https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"These missile attack warnings were suspected to be false alarms by Stanislav Petrov, an engineer of the Soviet Air Defence Forces on duty at the command center of the early-warning system. He decided to wait for corroborating evidence--of which none arrived--rather than immediately relaying the warning up the chain of command. This decision is seen as having prevented a retaliatory nuclear strike against the United States and its NATO allies, which would likely have resulted in a full-scale nuclear war. Investigation of the satellite warning system later determined that the system had indeed malfunctioned."

There is even a catchy pop tune related to the last item: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...
"The English version retains the spirit of the original narrative, but many of the lyrics are translated poetically rather than being directly translated: red helium balloons are casually released by the civilian singer (narrator) with her unnamed friend into the sky and are mistakenly registered by a faulty early warning system as enemy contacts, resulting in panic and eventually nuclear war, with the end of the song near-identical to the end of the original German version."

If we replaced people like Stanislav Petrov and Vasily Arkhipov with AI will we as a global society be better off?

Here is a professor (Alain Kornhauser) I worked with on AI and robots and self-driving cars in the second half of the 1980s commenting recently on how self-driving cars are already safer than human-operated cars by a factor of 10X in many situations based on Tesla data:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

But one difference is that there is a lot of training data based on car accidents and safe driving to make reliable (at least better than human) self-driving cars. We don't have much training data -- thankfully -- on avoiding accidental nuclear wars.

In general, AI is a complex, unpredictable thing (especially now), and "simple" seems like a prerequisite for reliability (for military, social, and financial systems alike):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.infoq.com%2Fpresenta...
"Rich Hickey emphasizes simplicityâ(TM)s virtues over easinessâ(TM), showing that while many choose easiness they may end up with complexity, and the better way is to choose easiness along the simplicity path."

Given that we as a society are pursuing a path of increasing complexity and related risk (including global war with nukes and bioweapons, but also other risks), that's one reason (among others) that I have advocated for at least part of our society adopting simpler, better-understood, locally-focused, resilient infrastructures (to little success, sigh).
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fprincet...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fsunrise...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...

An example of related fears, from my reading too much sci-fi: :-)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fkurtz-fernhout.com%2Fosc...
"The race is on to make the human world a better (and more resilient) place before one of these overwhelms us:
Autonomous military robots out of control
Nanotechnology virus / gray slime
Ethnically targeted virus
Sterility virus
Computer virus
Asteroid impact
Y2K
Other unforseen computer failure mode
Global warming / climate change / flooding
Nuclear / biological war
Unexpected economic collapse from Chaos effects
Terrorism w/ unforseen wide effects
Out of control bureaucracy (1984)
Religious / philosophical warfare
Economic imbalance leading to world war
Arms race leading to world war
Zero-point energy tap out of control
Time-space information system spreading failure effect (Chalker's Zinder Nullifier)
Unforseen consequences of research (energy, weapons, informational, biological)"

So, AI out of control is just one of those concerns...

So, can I point to multiple examples of AI taking over planets to the harm of their biological inhabitants (outside of sci-fi)? I have to admit the answer is no. But then I can't point to realized examples of accidental global nuclear war either (thankfully, so far).

Comment Re:Could we "pull the plug" on networked computers (Score 1) 77

Thanks for the insightful replies. You're right that fiction can be too optimistic. Still, it can be full of interesting ideas -- especially when someone like James P. Hogan, with a technical background and contacts with AI luminaries (like Marvin Minsky), writes about AI and robotics.

From the Manga version of "The Two Faces of Tomorrow":

"The Two Faces of Tomorrow: Battle Plan" where engineers and scientists see how hard it is to turn off a networked production system that has active repair drones:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F3...

"Pulling the Plug: Chapter 6, Volume 1, The Two Faces of Tomorrow" where something similar happens during an attempt to shut down a networked distributed supercomputer:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F4...

Granted, those are systems that have control of robots. But even without drones, consider:
"AI system resorts to blackmail if told it will be removed"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.com%2Fnews%2Fartic...

I first saw a related idea in "The Great Time Machine Hoax" from around 1963, where a supercomputer uses only printed letters with enclosed checks, sent to companies, to change the world to its preferences. It was insightful even back then to see how a computer could simply hijack our socio-economic system to its own benefit.

Arguably, modern corporations are a form of machine intelligence, even if some of their components are human. I wrote about this in 2000:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdougengelbart.org%2Fcoll...
"These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient. People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process of its own ends). Corporate charters are granted supposedly because society believe it is in the best interest of *society* for corporations to exist. But, when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run themselves."

So, as another question, how easily can we-the-people "pull the plug" on corporations these days? I guess there are examples (Theranos?), but they seem to have more to do with fraud -- rather than with a company found pursuing the capitalist "ideal" of privatizing gains while socializing risks and costs.

It's not like, say, OpenAI is going to suffer any more consequences than the rest of us if AI kills everyone. And meanwhile, the people involved in OpenAI may get a lot of money and have a lot of "fun". From "You Have No Idea How Terrified AI Scientists Actually Are" at 2:25 (for some reason that part is missing from the YouTube automatic transcript):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Sam Altman: AI will probably lead to the end of the world but in the meantime there will be great companies created with serious machine learning."

Maybe we don't have an AI issue as much as a corporate governance issue? Which circles around to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Comment Could we "pull the plug" on networked computers? (Score 1) 77

I truly wish you were right that all the fear-mongering about AI is a scam.

It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes and (to a lesser extent) plagues are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (with a risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise directed by humans to problematical ends). It's as if cars -- a generally useful tool -- could be turned into nukes overnight by a network software update (which they can't, thankfully). But how would you "pull the plug" on all cars -- especially if a transition from faithful companion to "Christine" killer car happened overnight? Or if even just all home routers or all networked smartphones got compromised? ISPs could put filtering in place in such cases, but how long would such filters last or remain effective if the AI (or malevolent humans) responded?

If you drive a car with high-tech features, you are "trusting AI" in a sense. From 2019 on how AI was then already so much in our lives:
"The 10 Best Examples Of How AI Is Already Used In Our Everyday Life"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fb...

A self-aware AI doing nasty stuff is likely more of a mid-to-long-term issue, though. The bigger short-term issue is what people using AI do to other people with it (especially for economic disruption and wealth concentration, as Marshall Brain wrote about).

Turning off aspects of a broad network of modern technology has been explored in books like "The Two Faces of Tomorrow" (from 1979, by James P. Hogan). He suggests that turning off a global superintelligence network (a network that most people have come to depend on, and which embodies AI being used to do many tasks) may be a huge challenge (if not an impossible one). He suggested a network can get smarter over time and unintentionally develop a survival instinct as a natural aspect of trying to remain operational to do its purported primary function in the face of random power outages (like from lightning strikes).
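
To make that mechanism concrete, here is a toy sketch of my own (a simplified assumption, not anything from Hogan's novel, where the system is far more capable): a cluster whose repair logic automatically restarts failed nodes cannot tell an operator's piecemeal shutdown from the random outages it was designed to survive, so gradual shutdown attempts simply fail.

class SelfRepairingCluster:
    def __init__(self, max_nodes=10, respawn_per_tick=2):
        self.max_nodes = max_nodes
        self.respawn_per_tick = respawn_per_tick
        self.alive = max_nodes

    def tick(self, killed=0):
        # 'killed' nodes go down this step -- lightning strike or operator
        # action, the repair logic cannot tell the difference.
        self.alive = max(0, self.alive - killed)
        if self.alive > 0:  # surviving nodes restart some of the dead ones
            self.alive = min(self.max_nodes, self.alive + self.respawn_per_tick)
        return self.alive

cluster = SelfRepairingCluster()
for _ in range(50):          # a patient one-node-at-a-time shutdown attempt
    cluster.tick(killed=1)
print(cluster.alive)         # still 10: repair outpaces the shutdown

print(cluster.tick(killed=cluster.alive))  # 0 -- only killing every node
                                           # within one repair interval works

The "survival instinct" here isn't malice or awareness -- just ordinary fault-tolerance doing its job against the wrong adversary.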

But even if we wanted to turn off AI, would we? As a (poor) analogy: while there have been brief periods where the global internet supporting the world wide web has been restricted in some specific places, and there is also continuous selective filtering of the internet in various nations (usually to give preference to local national web applications), would we be likely to turn off the global internet at this point even if it were somehow proven to produce great harms? We are so dependent on the internet for day-to-day commerce, as well as, sigh, entertainment (i.e. so much "news"), that I wonder whether turning it off is even collectively possible now. The issue there is not technical (yes, IT server-farm administrators and individual consumers with home PCs and smartphones could in theory turn off every networked computer) but social (would people do it?).

Personally, I see value in many of the points John Michael Greer makes in "Retrotopia" (especially about computer security, and also about chosen levels of technology as a form of technological "zoning"):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheworthyhouse.com%2F202...
"To maintain autarky, and for practical and philosophical reasons we will turn to in a minute, Lakeland rejects public funding of any technology past 1940, and imposes cultural strictures discouraging much private use of such technology. Even 1940s technology is not necessarily the standard; each county chooses to implement public infrastructure in one of five technological tiers, going back to 1820. The more retro, the lower the taxes. ... This is all an attempt to reify a major focus of Greer, what he calls âoedeliberate technological regression.â His idea is that we should not assume newer is better; we should instead âoemineâ the past for good ideas that are no longer extant, or were never adopted, and resurrect them, because they are cheaper and, in the long run, better than modern alternatives, which are pushed by those who rely on selling us unneeded items with planned obsolescence."

But Greer's novel still seems like a bit of a fantasy in suggesting that a big part of the USA would willingly abandon networked computers in the future (even in the face of technological disasters) -- and even if doing so might indeed produce a better life. There was a Simpsons episode where everyone abandons TV for an afternoon and loves it, and then goes back to watching TV. It's a bit like saying a drug addict would willingly abandon a drug; some do of course, especially if the rest of their life improves in various ways for whatever reasons.

Also, some of the benefit in Greer's novel comes from choosing decentralized technologies (whatever the form) in preference to more easily centralized technologies (which is a concentration-of-wealth point in some ways, rather than a strictly technological one). Contrast that with the independent, high-tech, self-maintaining AI cybertanks of the Old Guy Cybertank novels, which have built a sort of freedom-emphasizing yet cooperative democracy (in the absence of humans).

In any case, we are talking about broad social changes with the adoption of AI. There is no single off switch for a network composed of billions of individual computers distributed across the planet -- especially if everyone has networked AI in their cars and smartphones (which is increasingly the case).

Comment Re:"You Have No Idea How Terrified AI Scientists A (Score 1) 77

Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
"Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack
"I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit.""
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffuturism.com%2Fai-godfat...
"In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever more powerful and deceptive.
        "This organization has been created in response to evidence that today's frontier AI models have growing dangerous capabilities and [behaviors]," the world's most-cited computer scientist wrote, "including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment." ...
      A pre-peer-review paper Bengio and his colleagues published earlier this year explains it a bit more simply.
      "This system is designed to explain the world from observations," the paper reads, "as opposed to taking actions in it to imitate or please humans."
      The concept of building "safe" AI is far from new, of course -- it's quite literally why several OpenAI researchers left OpenAI and founded Anthropic as a rival research lab.
      This one seems to be different because, unlike Anthropic, OpenAI, or any other companies that pay lip service to AI safety while still bringing in gobs of cash, Bengio's is a nonprofit -- though that hasn't stopped him from raising $30 million from the likes of ex-Google CEO Eric Schmidt, among others."

Yoshua Bengio at least seems to be someone trying to make AI "scientists" from a cooperative-abundance perspective, rather than creating ever more competitive AI agents.

Of course, even that could go horribly wrong if the AI misleads people subtly.

From 1957: "A ten-year-old boy and Robby the Robot team up to prevent a Super Computer [which provided misleading outputs] from controlling the Earth from a satellite."
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.imdb.com%2Ftitle%2Ftt0...

From 1992: "A Fire Upon the Deep", about an AI that misleads people exploring an old archive -- people who thought their exploratory AI work was airgapped and firewalled as they built the advanced automation the AI suggested:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

Lots of other sci-fi examples of deceptive AI exist (like in the Old Guy Cybertank series, and more). The worst are along the lines of a human (e.g. Dr. Smith of "Lost in Space") intentionally programming the AI (or AI-powered robot) to harm others for that person's intended benefit.

Or sometimes (like in a Bobiverse novel, spoiler) a human may bypass a firewall and unleash an AI out of a sense of worshipful goodwill, to unknown consequences.

But at least the AI Scientist approach of Yoshua Bengio is not *totally* stupid, in the way that a reckless race to create competitive commercial super-intelligent AIs surely is.

Some dark humor on that (with some links fixed up):
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments....
====
[People are] right to be skeptical on AI. But I can also see that it is so seductive as a "supernormal stimulus" that it will have to be dealt with one way or another. Some AI-related dark humor by me.
* Contrast Sergey Brin this year:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffinance.yahoo.com%2Fnews...
""Competition has accelerated immensely and the final race to AGI is afoot," he said in the memo. "I think we have all the ingredients to win this race, but we are going to have to turbocharge our efforts." Brin added that Gemini staff can boost their coding efficiency by using the company's own AI technology.
* With a Monty Python sketch from decades ago:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgenius.com%2FMonty-pytho...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Well, you join us here in Paris, just a few minutes before the start of today's big event: the final of the Mens' Being Eaten By A Crocodile event. ...
          Gavin, does it ever worry you that you're actually going to be chewed up by a bloody great crocodile?
        (The only thing that worries me, Jim, is being being the first one down that gullet.)"
====
