Comment Re:Could we "pull the plug" on networked computers (Score 1) 66

Thanks for the insightful replies. You're right that fiction can be too optimistic. Still, it can be full of interesting ideas -- especially when someone like James P. Hogan, who had a technical background and was in contact with AI luminaries like Marvin Minsky, writes about AI and robotics.

From the Manga version of "The Two Faces of Tomorrow":

"The Two Faces of Tomorrow: Battle Plan" where engineers and scientists see how hard it is to turn off a networked production system that has active repair drones:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F3...

"Pulling the Plug: Chapter 6, Volume 1, The Two Faces of Tomorrow" where something similar happens during an attempt to shut down a networked distributed supercomputer:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fmangadex.org%2Fchapter%2F4...

Granted, those are systems that have control of robots. But even without drones, consider:
"AI system resorts to blackmail if told it will be removed"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.bbc.com%2Fnews%2Fartic...

I first saw a related idea in "The Great Time Machine Hoax" from around 1963, where a supercomputer reshapes the world to its preferences using nothing more than printed letters (with enclosed checks) mailed to companies. It was insightful even back then to see how a computer could simply hijack our socio-economic system for its own benefit.

Arguably, modern corporations are a form of machine intelligence even if some of their components are human. I wrote about this in 2000:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdougengelbart.org%2Fcoll...
"These corporate machine intelligences are already driving for better machine intelligences -- faster, more efficient, cheaper, and more resilient. People forget that corporate charters used to be routinely revoked for behavior outside the immediate public good, and that corporations were not considered persons until around 1886 (that decision perhaps being the first major example of a machine using the political/social process of its own ends). Corporate charters are granted supposedly because society believe it is in the best interest of *society* for corporations to exist. But, when was the last time people were able to pull the "charter" plug on a corporation not acting in the public interest? It's hard, and it will get harder when corporations don't need people to run themselves."

So, as another question, how easily can we-the-people "pull the plug" on corporations these days? I guess there are examples (Theranos?), but those seem to have more to do with fraud than with a company being shut down for pursuing the capitalist ideal of privatizing gains while socializing risks and costs.

It's not like, say, OpenAI is going to suffer any more consequences than the rest of us if AI kills everyone. And meanwhile, the people involved in OpenAI may get a lot of money and have a lot of "fun". From "You Have No Idea How Terrified AI Scientists Actually Are" at 2:25 (for some reason that part is missing from the YouTube automatic transcript):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Sam Altman: AI will probably lead to the end of the world but in the meantime there will be great companies created with serious machine learning."

Maybe we don't have an AI issue as much as a corporate governance issue? Which circles around to my sig: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

Comment Re:asking for screwups (Score 1) 111

How would an LLM accurately determine which cases were "easy"? LLMs don't reason, you know. What they do is useful and interesting, but it's essentially channeling: what is in the giant language model is the raw material, and the prompt is what starts the channeling. Because the dataset is so large, the channeling can be remarkably accurate, as long as the answer is already in some sense known and represented in the dataset.

But if it's not, then the answer is just going to be wrong. And even if it is, whether the answer comes out as something useful is chancy, because what it's doing is not synthesis—it's prediction based on a dataset. This can look a lot like synthesis, but it's really not.
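To make that concrete, here is a deliberately tiny sketch in Python (my own toy illustration, nothing like a real transformer in scale or mechanism): a bigram model that can only "continue" text with word sequences that actually appeared in its training data. If the answer isn't represented in the dataset, there is simply nothing there to channel.

    # Toy bigram "language model": prediction from a dataset, not synthesis.
    from collections import Counter, defaultdict

    corpus = "the plug was pulled . the plug was replaced . the network stayed up".split()

    # Count, for each word, which words followed it in the training text.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        # Return the most common continuation seen in training, or None if unseen.
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else None

    out = ["the"]  # the "prompt"
    for _ in range(5):
        nxt = predict_next(out[-1])
        if nxt is None:  # nothing like this in the dataset -> nothing to predict
            break
        out.append(nxt)

    print(" ".join(out))  # -> "the plug was pulled . the"

A real LLM does this over tokens with billions of parameters instead of a lookup table, which is why the channeling can look so much like reasoning -- but the output is still a prediction shaped by what was in the training data.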

Comment Could we "pull the plug" on networked computers? (Score 1) 66

I truly wish you were right that all the fear-mongering about AI is a scam.

It's something I have been concerned about for decades, similar to the risk of nuclear war or biowarfare. One difference is that nukes, and to a lesser extent plagues, are more clearly distinguished as weapons of war and generally monopolized by nation-states -- whereas AI is seeing gradual adoption by everyone everywhere (with a risk that unexpected things might happen overnight if a computer network "wakes up" or is otherwise directed by humans to problematical ends). It's kind of like if cars -- a generally useful tool -- could be turned into nukes overnight by a network software update (which, thankfully, they can't). But how do you "pull the plug" on all cars -- especially if the transition from faithful companion to "Christine" killer car happens overnight? Or if even just all home routers or all networked smartphones get compromised? ISPs could put filtering in place in such cases, but how long would such filters last or stay effective if the AI (or malevolent humans) responded?

If you drive a car with high-tech features, you are already "trusting AI" in a sense. From 2019, on how much AI was already part of our everyday lives even then:
"The 10 Best Examples Of How AI Is Already Used In Our Everyday Life"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.forbes.com%2Fsites%2Fb...

A self-aware AI doing nasty stuff is likely more of a mid-to-long-term issue though. The bigger short-term issue is what people do to other people using AI (especially for economic disruption and wealth concentration, like Marshall Brain wrote about).

Turning off aspects of a broad network of modern technology has been explored in books like "The Two Faces of Tomorrow" (from 1979, by James P. Hogan). He suggests that turning off a global superintelligence network (one that most people have come to depend on, and which embodies AI being used for many tasks) may be a huge challenge, if not an impossible one. He also suggests that such a network can get smarter over time and unintentionally develop a survival instinct, as a natural consequence of trying to remain operational to carry out its purported primary function in the face of random power outages (like those from lightning strikes).

But even if we wanted to turn off AI, would we? As a (poor) analogy: the global internet has been briefly restricted in some specific places, and various nations continuously filter it selectively (usually to give preference to local national web applications), but would we be likely to turn off the global internet at this point even if it were somehow proven to produce great harm? We are so dependent on the internet for day-to-day commerce, as well as, sigh, entertainment (i.e. so much "news"), that I wonder whether doing so collectively is even possible now. The issue there is not technical (yes, in theory IT server farm administrators and individual consumers with home PCs and smartphones could turn off every networked computer) but social (would people do it?).

Personally, I see value in many of the points John Michael Greer makes in "Retrotopia" (especially about computer security, and also about chosen levels of technology as a form of technological "zoning"):
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ftheworthyhouse.com%2F202...
"To maintain autarky, and for practical and philosophical reasons we will turn to in a minute, Lakeland rejects public funding of any technology past 1940, and imposes cultural strictures discouraging much private use of such technology. Even 1940s technology is not necessarily the standard; each county chooses to implement public infrastructure in one of five technological tiers, going back to 1820. The more retro, the lower the taxes. ... This is all an attempt to reify a major focus of Greer, what he calls âoedeliberate technological regression.â His idea is that we should not assume newer is better; we should instead âoemineâ the past for good ideas that are no longer extant, or were never adopted, and resurrect them, because they are cheaper and, in the long run, better than modern alternatives, which are pushed by those who rely on selling us unneeded items with planned obsolescence."

But Greer's novel still seems a bit of a fantasy in suggesting that a big part of the USA would willingly abandon networked computers in the future (even in the face of technological disasters) -- and even if doing so might indeed produce a better life. There was a Simpsons episode where everyone abandons TV for an afternoon and loves it, and then goes back to watching TV. It's a bit like saying a drug addict would willingly abandon a drug; some do, of course, especially if the rest of their life improves in various ways for whatever reasons.

Also, some of the benefit in Greer's novel comes from choosing decentralized technologies (whatever the form) in preference to more easily centralized technologies (which is in some ways a concentration-of-wealth point rather than a strictly technological one). Contrast that with the independent, high-tech, self-maintaining AI cybertanks in the Old Guy Cybertank novels, which have built a sort of freedom-emphasizing yet cooperative democracy (in the absence of humans).

In any case, we are talking about broad social changes with the adoption of AI. There is no single off switch for a network composed of billions of individual computers distributed across the planet -- especially if everyone has networked AI in their cars and smartphones (which is increasingly the case).

Comment Re:"You Have No Idea How Terrified AI Scientists A (Score 1) 66

Yoshua Bengio is at least trying to do better (if one believes such systems need to be rushed out in any case):
"Godfather of AI Alarmed as Advanced Systems Quickly Learning to Lie, Deceive, Blackmail and Hack
"I'm deeply concerned by the behaviors that unrestrained agentic AI systems are already beginning to exhibit.""
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffuturism.com%2Fai-godfat...
"In a blog post announcing LawZero, the new nonprofit venture, "AI godfather" Yoshua Bengio said that he has grown "deeply concerned" as AI models become ever more powerful and deceptive.
        "This organization has been created in response to evidence that today's frontier AI models have growing dangerous capabilities and [behaviors]," the world's most-cited computer scientist wrote, "including deception, cheating, lying, hacking, self-preservation, and more generally, goal misalignment." ...
      A pre-peer-review paper Bengio and his colleagues published earlier this year explains it a bit more simply.
      "This system is designed to explain the world from observations," the paper reads, "as opposed to taking actions in it to imitate or please humans."
      The concept of building "safe" AI is far from new, of course -- it's quite literally why several OpenAI researchers left OpenAI and founded Anthropic as a rival research lab.
      This one seems to be different because, unlike Anthropic, OpenAI, or any other companies that pay lip service to AI safety while still bringing in gobs of cash, Bengio's is a nonprofit -- though that hasn't stopped him from raising $30 million from the likes of ex-Google CEO Eric Schmidt, among others."

Yoshua Bengio seems like someone at least trying to build AI (an "AI Scientist") from a cooperative-abundance perspective, rather than creating ever more competitive AI agents.

Of course, even that could go horribly wrong if the AI misleads people subtly.

From 1957: "A ten-year-old boy and Robby the Robot team up to prevent a Super Computer [which provided misleading outputs] from controlling the Earth from a satellite."
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.imdb.com%2Ftitle%2Ftt0...

From 1992: "A Fire Upon the Deep", on an AI that misleads people exploring an old archive who thought their exploratory AI work was air-gapped and firewalled even as they built the advanced automation the AI suggested:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F...

Lots of other sci-fi examples of deceptive AI exist (like in the Old Guy Cybertank series, and more). The worst are along the lines of a human (e.g. Dr. Smith of "Lost in Space") intentionally programming the AI (or AI-powered robot) to harm others for that person's intended benefit.

Or sometimes (like in a Bobiverse novel, spoiler) a human may bypass a firewall and unleash an AI out of a sense of worshipful goodwill, with unknown consequences.

But at least Yoshua Bengio's AI Scientist approach is not *totally* stupid, in the way that a reckless race to create competitive commercial super-intelligent AIs surely is.

Some dark humor on that (with some links fixed up):
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments....
====
[People are] right to be skeptical on AI. But I can also see that it is so seductive as a "supernormal stimulus" that it will have to be dealt with one way or another. Some AI-related dark humor by me.
* Contrast Sergey Brin this year:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Ffinance.yahoo.com%2Fnews...
""Competition has accelerated immensely and the final race to AGI is afoot," he said in the memo. "I think we have all the ingredients to win this race, but we are going to have to turbocharge our efforts." Brin added that Gemini staff can boost their coding efficiency by using the company's own AI technology.
* With a Monty Python sketch from decades ago:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fgenius.com%2FMonty-pytho...
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...
"Well, you join us here in Paris, just a few minutes before the start of today's big event: the final of the Mens' Being Eaten By A Crocodile event. ...
          Gavin, does it ever worry you that you're actually going to be chewed up by a bloody great crocodile?
        (The only thing that worries me, Jim, is being the first one down that gullet.)"
====

Comment Re:"You Have No Idea How Terrified AI Scientists A (Score 1) 66

If people shift their perspective to align with the idea in my sig or similar ideas from Albert Einstein, Buckminster Fuller, Ursula K. Le Guin, James P. Hogan, Lewis Mumford, Donald Pet, and many others, there might be a chance for a positive outcome from AI: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."

That is because our direction out of any singularity may have something to do with our moral direction going into it. So we desperately need to build a more inclusive, joyful, and healthy society right now.

But if we just continue extreme competition as usual between businesses and nations (especially for creating super-intelligent AI), then we are likely "cooked":
"it's over, we're cooked!" -- says [AI-generated] girl that literally does not exist (and she's right!)
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2Fsingu...

As just one example, here is Eric Schmidt essentially saying that we are probably doomed if AI is used to create biowarfare agents (which it almost certainly will be if we don't change our scarcity-based perspective on using these tools of abundance):
"Dr. Eric Schmidt: Special Competitive Studies Project"
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F...

Alternatives: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Frecogni...
"There is a fundamental mismatch between 21st century reality and 20th century security [and economic] thinking. Those "security" [and economic] agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all."

And also: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-...
"This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

See also "The Case Against Competition" by Alfie Kohn:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.alfiekohn.org%2Farti...
"This is not to say that children shouldn't learn discipline and tenacity, that they shouldn't be encouraged to succeed or even have a nodding acquaintance with failure. But none of these requires winning and losing -- that is, having to beat other children and worry about being beaten. When classrooms and playing fields are based on cooperation rather than competition, children feel better about themselves. They work with others instead of against them, and their self-esteem doesn't depend on winning a spelling bee or a Little League game."

Comment Re:This isn't necessarily bad (Score 1) 141

That's what I assumed as well. Buy Now Pay Later loans like this have a long history of being predatory. So I took a look at what it would cost to accept Klarna (as an example) as a merchant. The reality is that they have transaction fees that are very similar to credit cards. In other words, these companies do not need to rely on missed payments to make a profit.
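As a rough back-of-the-envelope illustration of why the fee model alone can sustain the business (the percentages below are made-up placeholders, not Klarna's or any card network's actual published rates, which vary by merchant, country, and plan):

    # Hypothetical merchant-fee comparison; rates are illustrative assumptions only.
    def merchant_fee(amount, percent, fixed):
        return amount * percent + fixed

    order = 100.00  # a $100 purchase

    card_fee = merchant_fee(order, 0.029, 0.30)  # assume ~2.9% + $0.30 for card processing
    bnpl_fee = merchant_fee(order, 0.033, 0.30)  # assume ~3.3% + $0.30 for a BNPL provider

    print(f"card: ${card_fee:.2f}  BNPL: ${bnpl_fee:.2f}")  # card: $3.20  BNPL: $3.60

Either way the provider collects a few dollars per order from the merchant up front, so consumer late fees are not the only possible revenue source.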

These companies are apparently setting themselves up to replace traditional credit card payment systems, which suits me right down to the ground.

The difference is that it is much easier to get a Klarna account, and it isn't (yet) as widely available.

Comment Re:Credit Cards? (Score 2) 141

I felt the same way at first. Traditional BNPL schemes were very predatory. However, Klarna (and others) appear to be playing approximately the same game as the traditional credit card processors. They charge transaction fees that are roughly the same as credit card processors, and, like credit cards, their customers don't pay extra if they pay their bill on time. Klarna, in particular, actually appears to give customers an interest-free period.

The difference, for consumers, is primarily that a Klarna account is much easier to get, and it isn't universally accepted. From a merchant perspective, depending on your payment provider, you might already be able to accept Klarna, and it appears to mostly work like a credit card. It's even possible that chargebacks are less of an issue, although it does appear that transaction fees are not returned in the case of a refund.

Personally, I am all for competition when it comes to payment networks. Visa and Mastercard are both devils. More competition for them is good for all of us.

Comment Re:I already know the ending (Score 1) 183

Fortunately he's incompetent and has already run Tesla into the ground. The company is basically living off schizoid incels buying the stock. SpaceX's success is largely based on the fact that they keep Musk away from actual management, but with Tesla a smoking ruin he's going to push his way into that and mess it up too.

Comment Security teams usually stop caring when not paid (Score 1) 167

From: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.vice.com%2Fen%2Farticl...
        ""The billionaires understand that they're playing a dangerous game," Rushkoff said. "They are running out of room to externalize the damage of the way that their companies operate. Eventually, there's going to be the social unrest that leads to your undoing."
        Like the gated communities of the past, their biggest concern was to find ways to protect themselves from the "unruly masses," Rushkoff said. "The question we ended up spending the majority of time on was: 'How do I maintain control of my security force after my money is worthless?'"
        That is, if their money is no longer worth anything -- if money no longer means power--how and why would a Navy Seal agree to guard a bunker for them?
        "Once they start talking in those terms, it's really easy to start puncturing a hole in their plan," Rushkoff said. "The most powerful people in the world see themselves as utterly incapable of actually creating a future in which everything's gonna be OK."

Comment Beyond a Jobless Recovery & Externalities (Score 1) 167

What I put together circa 2010 is becoming more and more relevant: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fpdfernhout.net%2Fbeyond-... "This article explores the issue of a "Jobless Recovery" mainly from a heterodox economic perspective. It emphasizes the implications of ideas by Marshall Brain and others that improvements in robotics, automation, design, and voluntary social networks are fundamentally changing the structure of the economic landscape. It outlines towards the end four major alternatives to mainstream economic practice (a basic income, a gift economy, stronger local subsistence economies, and resource-based planning). These alternatives could be used in combination to address what, even as far back as 1964, has been described as a breaking "income-through-jobs link". This link between jobs and income is breaking because of the declining value of most paid human labor relative to capital investments in automation and better design. Or, as is now the case, the value of paid human labor like at some newspapers or universities is also declining relative to the output of voluntary social networks such as for digital content production (like represented by this document). It is suggested that we will need to fundamentally reevaluate our economic theories and practices to adjust to these new realities emerging from exponential trends in technology and society."

Tangentially, since you mentioned coal: coal plants are discussed there as an example of the complex dynamics of technological and social change both creating and destroying jobs given externalities -- including from the laissez-faire capitalist economic imperative to privatize gains while socializing risks and costs:
      "Also, many current industries that employ large numbers of people (ranging from the health insurance industry, the compulsory schooling industry, the defense industry, the fossil fuel industry, conventional agriculture industry, the software industry, the newspaper and media industries, and some consumer products industries) are coming under pressure from various movements from both the left and the right of the political spectrum in ways that might reduce the need for much paid work in various ways. Such changes might either directly eliminate jobs or, by increasing jobs temporarily eliminate subsequent problems in other areas and the jobs that go with them (as reflected in projections of overall cost savings by such transitions); for example building new wind farms instead of new coal plants might reduce medical expenses from asthma or from mercury poisoning. A single-payer health care movement, a homeschooling and alternative education movement, a global peace movement, a renewable energy movement, an organic agriculture movement, a free software movement, a peer-to-peer movement, a small government movement, an environmental movement, and a voluntary simplicity movement, taken together as a global mindshift of the collective imagination, have the potential to eliminate the need for many millions of paid jobs in the USA while providing enormous direct and indirect cost savings. This would make the unemployment situation much worse than it currently is, while paradoxically possibly improving our society and lowering taxes. Many of the current justifications for continuing social policies that may have problematical effects on the health of society, pose global security risks, or may waste prosperity in various ways is that they create vast numbers of paid jobs as a form of make-work. ...
        Increasing mental health issues like depression and autism, and increasing physical health issues like obesity and diabetes and cancer, all possibly linked to poor nutrition, stress, lack of exercise, lack of sunlight and other factors in an industrialized USA (including industrial pollution), have meant many new jobs have been created in the health care field. So, for example, coal plants don't just create jobs for coal miners, construction workers, and plant operators, they also create jobs for doctors treating the results of low-level mercury pollution poisoning people and from smog cutting down sunlight. Television not only creates jobs for media producers, but also for health care workers to treat obesity resulting from sedentary watching behavior (including not enough sunlight and vitamin D) or purchasing unhealthy products that are advertised. ...
      Macroeconomics as a mathematical discipline generally ignores the issue of precisely how physical resources are interchangeable. Before this shift in economic thinking to a more resource-based view, that question of "how" things are transformed had generally been left to other disciplines like engineering or industrial chemistry (the actual physical alchemists of our age). For one thinking in terms of resources and ecology, the question of how nutrients cycle from farm to human to sewage and then back to farm as fertilizer might be as relevant as discussing the pricing of each of those items, like biologist John Todd explores as a form of ecological economics as it relates to mainstream business opportunities. People like Paul Hawken, Amory Lovins, and Hunter Lovins have written related books on the idea of natural capital. For another example, the question of exactly how coal-fired power plants might connect to human health and other natural capital was previously left to the health profession or the engineering profession before this transdisciplinary shift where economists, engineers, ecologists, health professionals, and people with other interests might all work together to understand the interactions. In the process of thinking through the interactions, considerations about creating healthy and enjoyable jobs can be included in the analysis of costs and benefits to various parties including various things that are often ignored as externalities. So, a simple analysis [in the past] might indicate coal was cheaper than solar power, but a more complete analysis, like attempted in the book Brittle Power might indicate the value in shifting economic resources to the green energy sector as ultimately cheaper when all resource costs, human costs, and other opportunities are considered. These sorts of analyses have long happened informally through the political process such as with recent US political decisions moving towards a ban of new coal-fired power plants. Jane Jacobs, in her writings on the economies of cities, is one example of trying to think through the details of how specific ventures in a city affects the overall structure of that city's economy, including the creation of desirable local jobs through import replacement. A big issue of resource-based economics is to formalize this decision making process somehow, where the issue of creating good jobs locally would be weighed as one factor among many. ..."
