Thanks for the insightful replies. You're right that fiction can be too optimistic. Still, it can be full of interesting ideas -- especially when someone like James P. Hogan, with a technical background and contact with AI luminaries like Marvin Minsky, writes about AI and robotics.
From the Manga version of "The Two Faces of Tomorrow":
"The Two Faces of Tomorrow: Battle Plan" where engineers and scientists see how hard it is to turn off a networked production system that has active repair drones:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F3...
"Pulling the Plug: Chapter 6, Volume 1, The Two Faces of Tomorrow" where something similar happens during an attempt to shut down a networked distributed supercomputer:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F4...
Granted, those are systems that have control of robots. But even without drones, consider:
"AI system resorts to blackmail if told it will be removed"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.com%2Fnews%2Fartic...
I first saw a related idea in "The Great Time Machine Hoax" from around 1963, where a supercomputer changes the world to suit its preferences using nothing but printed letters, with checks enclosed, mailed to companies. Even back then it was insightful to see how a computer could simply hijack our social-economic system for its own benefit.
Arguably, modern corporations are a form of machine intelligence, even if some of their components are human. I wrote about this in 2000:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdougengelbart.org%2Fcoll...
"These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient. People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process of its own ends). Corporate charters are granted supposedly because society believe it is in the best interest of *society* for corporations to exist. But, when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run themselves."
So, as another question, how easily can we-the-people "pull the plug" on corporations these days? There are examples (Theranos?), but those seem to have more to do with fraud than with a company being shut down for pursuing the capitalist ideal of privatizing gains while socializing risks and costs.
It's not like, say, OpenAI is going to suffer any more consequences than the rest of us if AI kills everyone. Meanwhile, the people involved in OpenAI may get a lot of money and have a lot of "fun". From "You Have No Idea How Terrified AI Scientists Actually Are" at 2:25 (for some reason that part is missing from the YouTube automatic transcript):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Sam Altman: AI will probably lead to the end of the world but in the meantime there will be great companies created with serious machine learning."
Maybe we don't have an AI issue so much as a corporate governance issue? Which circles back to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."