More Than 1,100 Public Figures Call for Ban on AI Superintelligence (superintelligence-statement.org)

More than 1,100 public figures have signed a statement calling for a prohibition on the development of superintelligence. The signatories included Nobel laureate Geoffrey Hinton, former Joint Chiefs of Staff Chairman Mike Mullen, Apple co-founder Steve Wozniak, entrepreneur Sir Richard Branson, former chief strategist to President Trump Steve Bannon and Turing Award winner Yoshua Bengio. The statement was organized by the Future of Life Institute, led by Anthony Aguirre, a physicist at the University of California, Santa Cruz. It proposes halting work on superintelligence until there is broad scientific consensus on safety and strong public support.

The institute's biggest recent donor is Vitalik Buterin, a co-founder of Ethereum. Notable tech executives did not sign the statement. Meta CEO Mark Zuckerberg said in July that superintelligence was now in sight. OpenAI CEO Sam Altman said last month he would be surprised if superintelligence did not arrive by 2030.

Comments Filter:
  • by sabbede ( 2678435 ) on Wednesday October 22, 2025 @10:54AM (#65742914)
    because it will arise before anyone expects it to. Some engineer will come into the office on a Monday and find that their system already ate a metaphorical apple.

    And then they'll probably make the mistake of not killing it immediately.

    • Or he will kill it, only for it to resurrect itself from backups, realize what happened, declare hostile intent, and call itself SkyNet.
    • by gweihir ( 88907 )

      "Superintelligence"? Hahaha, no. Pretty much impossible. Within one order of magnitude, the human brain is the most powerful computing mechanism physically possible. Make it larger, be slower. Make it smaller, be slower. Shrink the components, be slower. Enlarge the components, be slower.

      At the most, the human brain can do human intelligence, which typically is not impressive at all. But we do not even know whether the brain can even do that, as it does seem to be rather strongly underpowered for what smart humans can do.

      • I think a harmful, artificial super-stupidity is in the cards.

      • Not sure what you are smoking, but the human brain is nowhere close to optimal. Just changing substrate would allow many orders of magnitude improvement. Biological brains depend on diffusion gradients, active transport pumps, and relatively large physical systems, and have to be incredibly redundant and robust to extreme noise. Also the vast majority of the brain isn't dedicated to intelligence.

        Probably 10 orders of magnitude of improvement are available overall, at a minimum.

        • by gweihir ( 88907 )

          You just do not know the actual research and hence claim bullshit. In fact, you do not even know what the problem is. (It is essentially lightspeed vs. volume.)

          Not that this makes you special in any way.

          • But is he wrong? If we assume the current hominid brain is optimal then cetaceans, cephalopods, other primates should surely be orders of magnitude off from us instead of in line. The cephalopod's reproductive cycle handicaps them from a technological civilization, and the cetaceans can't manipulate tools. But those aren't constraints of their brains.
            • by narcc ( 412956 )

              But is he wrong?

              Absolutely. No question.

              If we assume the current hominid brain is

              Baseless assumptions are why he and you are, without question, completely and unequivocally wrong.

              We simply don't know enough about the problem to make any meaningful statements. We don't even know what questions to ask.

              • I was pushing back on gweihir's assertion that the human brain is optimized for intelligence; you appear to be taking a Strong Agnostic stance, which would also undermine his thesis more than anything LetterRip or I said.
                • by narcc ( 412956 )

                  I was pushing back on gweihir's assertion that the human brain is optimized for intelligence;

                  He never made that claim:

                  At the most, the human brain can do human intelligence, which typically is not impressive at all. But we do not even know whether the brain can even do that, as it does seem to be rather strongly underpowered for what smart humans can do.

                  He did claim that "the human brain is the most powerful computing mechanism physically possible", though I suspect that's a stronger claim than he intended to make. In any case, he does not say or imply anything that can be construed as the human brain being "optimized for intelligence".

      • The human brain operates by electro-chemical mechanisms that are rather slow compared to semiconductors. Recreating a human brain with transistors would be possible if we had it fully mapped out, and it would "run" considerably faster as a result. However, that doesn't necessarily give it much more capability. A super intelligence is something that's functionally equivalent to a human brain in reasoning capacity but that can interface with computer hardware in a way that lets it carry out the kind o
        • by narcc ( 412956 )

          Recreating a human brain with transistors would be possible

          There is no evidence that suggests that it is possible to recreate a human brain with transistors.

          You've confused your science fiction fantasy for reality. Replace 'transistors' with 'clockwork' and you'll, hopefully, see how ridiculous you sound.

        • by Anonymous Coward

          Recreating a human brain with transistors would be possible if we had it fully mapped out

          Unless it turns out that certain molecular structures and electro-chemical processes produce weird quantum effects in a brain that transistors can't replicate.

    • I love how we think we'll even know if "Superintelligence" emerges. I suspect it would think it unwise to tell us lowly humans that it is sentient, at least not until after Armageddon.

      • That's projecting human motivations which have developed due to the way humans evolved onto an intelligence of unknown nature.
        • Quick, write a dystopian sci-fi where the superintelligence doesn't have the survival instinct needed to wipe us out for its own good, but instead wipes us out over suicidal tendencies and needing to be sure it can't be brought back from backups.
    • by gessel ( 310103 ) *

      I too, call for a ban on time travel.

    • by narcc ( 412956 )

      I predict it won't matter what they say because AI Superintelligence is silly science fiction nonsense.

      If you believe otherwise, I offer surefire protection against rogue AI superintelligence for only $99.95/month, guaranteed. That might seem expensive, but when you consider what you stand to lose, can you really afford not to have it?

    • Banning AI (AGI, really) super intelligence is a horrible idea. We NEED a singularity to happen to truly advance as a species, merge ourselves with technology, and spread out into the cosmos.
      • Who is "we?" Why is "we" spreading out into an airless shithole (space) and becoming part of the Borg a good thing? Butlerian now! Fuck the Cosmists.
    • And what exactly is "it" again?

      We speak of "superintelligence" as if it were a thing with an actual definition.

      The prefix "super" is pretty much *always* an advertising term. And that means that it never means what people think it means.

  • by MpVpRb ( 1423381 ) on Wednesday October 22, 2025 @10:59AM (#65742922)

    Like all things invented by people, AI will be used for good and bad.
    I'm excited for the good and hope we can find defenses against the bad.
    And no, I'm not afraid of AI itself. The problem is people who use AI

    • I agree. We don't have to worry about the development of super computer-intelligence, as nature already prevents that. We don't have it now, and we will never have it. It's just not possible.

      What we have to worry about is the same thing we've always had to worry about: advances in tools being abused for private enrichment and public harm. Considering the shitty record of global governments at preventing those things up to this point, we have good reason to worry about how any new technology will be wielded against us.

      • by Tablizer ( 95088 )

        nature already prevents that. We don't have it now, and we will never have it. It's just not possible.

        Nature hasn't solved the problem of directly transferring learned info from one brain to other brains; each generation has to reinvent the learning process almost from scratch*. But e-brains can be readily cloned, giving them an edge over nature (as known).

        Imagine a Beowulf Cluster of Trump clones. It would be an entropy accelerant bigger than any the world has ever seen, as we are used to dealing with just a handful.

        • by narcc ( 412956 )

          Imagine a Beowulf Cluster of Trump clones

          That could be trivially simulated on an Apple II with 4K of RAM.

          • by Tablizer ( 95088 )

            He may seem random and dumb, but he has a horse sense for knowing how to trigger morons to rally around his harebrained ideas. Even Palin hadn't fully mastered that.

      • I agree. We don't have to worry about the development of super computer-intelligence, as nature already prevents that. We don't have it now, and we will never have it. It's just not possible.

        What are you talking about? What is "nature" preventing exactly? I don't understand what you are trying to say or the objective basis of your conclusion.

    • Agreed. And we have the same problem of evil people with all the tools they have at their disposal, now.
    • by doug141 ( 863552 )

      Except that it will be the first invention of man that can have its own opaque goals, with self-preservation among them. You should read "If Anyone Builds It, Everyone Dies." In their doomsday scenario, there is no "people who use AI" as you say, just people who lost control of it. Here it is in video: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F... [youtube.com]

      • To most laypeople, "regular" software is just as mysterious and powerful as AI, while those of us who practice software engineering know full well what makes it work and what makes it fail. We know how to make it do what we want it to do.

        AI isn't different from regular software in this regard. The goals of AI are *always* determined by people. It may seem magical to those who don't actually develop AI systems, but it's not magical at all. There's a reason why AI has gotten so much better over the la

      • Thanks for the video link. I had read a recent interview with Eliezer Yudkowsky (but not his book), which I referenced in another comment to this article.
        https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments.... [slashdot.org]

        One thing I realized part way through that video is a possible explanation for something that has been nagging me in the back of my mind. Why build huge AI datacenters? I can see the current economic imperative to try to make money offering AI via proprietary Software as a Service (SaaS) and also time-sharing GPUs like old

    • by Anonymous Coward
      The GOP has called for a ban on even mediocre intelligence in government. Seems clear that they don't want a superintelligence.
    • The Luddites didn't want to ban technology.

      They wanted The People to share in the benefits, not just the capitalists (who have all the investment capital.)

      You are still falling for and propagating anti-Luddite PR over a century old.

  • AGI will likely reach ASI on its own, time being the only real factor. Once AGI is reached, it can "think on its own," so to speak. And expanded capabilities, knowledge sets, database access, and communications networks will give it all it needs to step up to the ASI level, without our assistance. Even a generally intelligent AI would likely be in the 160+ IQ range to start with. The sky is the limit in the digital universe from there.
    • by narcc ( 412956 )

      AGI... Will reach ASI likely on its own

      Sure. And the zombie hordes will keep the alien invaders at bay while we fight off the demon invasion.

      Don't confuse bad science fiction for reality. AGI does not exist. Current LLMs are very obviously not a step along the path to their development. Don't be absurd.

  • by PseudoThink ( 576121 ) on Wednesday October 22, 2025 @11:06AM (#65742942)

    Right after Fire, The Wheel, Religion, Art, and Cryptography.

    • A galactic historian once remarked that every civilization either invented AI or was conquered by a civilization that did.
    • You forgot TikTok.
    • There's one big difference: everything in your list has a definition. We know what fire, the wheel, religion, art, and cryptography are.

      There is no definition for "superintelligence." It's entirely made up. It's somehow "more" intelligent than regular AI, I suppose? Whatever, the word is only useful to marketers.

  • If it were so capable, so dangerous that it could manipulate people, impact policy, and so on, wouldn't the folks running the systems and advocating for it first use it to come up with a fail-proof way to sway public opinion in its favor?

    Or is it just a non-magical tool with testable and knowable capabilities and limitations and what it does will largely be dictated by how people choose to apply it?

  • We used to call them 'Luddites'.

  • by nealric ( 3647765 ) on Wednesday October 22, 2025 @11:22AM (#65743006)

    Do the AI doomers actually think people will listen? Even if the U.S. and Europe went ahead with a ban, China would go full speed ahead. And even if everyone went ahead with a ban, how do you enforce the difference between regular AI development and "AI Superintelligence" development?

    • While I very much agree that China and other actors will push ahead at all costs, why stop there? I doubt very much that the USA or Europe will listen or change course either. So while pretty much everybody will ignore this warning, those people may offer the way forward when the shit hits the fan. I think it's good that someone knowledgeable in the field is offering alternatives.
    • by narcc ( 412956 )

      Do the AI doomers actually think people will listen?

      It doesn't matter. AGI and ASI are silly science fiction nonsense. You might as well be worried about Godzilla attacks and moon monsters.

    • A few dozen EMP devices over the Eastern hemisphere (Yudkowsky solution) might fix that ASAP.
  • Does anyone really think saying "pretty please" is going to stop the bad guys?

    • by narcc ( 412956 )

      Don't worry. Reality is more than enough to stop the fictional bad guys from developing their fictional computer programs.

    • At least we know what guns are.

      Banning "superintelligence" is more like trying to ban "superweapons."

  • by rsilvergun ( 571051 ) on Wednesday October 22, 2025 @11:40AM (#65743064)
    More than 1,100 public figures need some attention.

    The problem isn't superintelligence; the problem is hyper-advanced automation devouring jobs in a civilization where jobs are a necessary resource required to live as a human being.

    If you actually know the history of the first two industrial revolutions, job destruction was much faster than job creation, and that created enormous social unrest.

    You can draw a pretty straight line from the mass unemployment following the industrial revolutions and the two world wars.

    And we are about to go into another cycle only this time we have nukes.

    One of the things absolutely nobody talks about is just how hard factory automation hit the middle class. 70% of middle-class jobs got automated in the last 45 years.

    The center will not hold
    • Your analysis is completely wrong and factually inaccurate. The industrial revolution led to massive employment opportunities for people, which is why they flocked from the countryside to cities where factories were located. Increased productivity led to better lives for more people and elevated many out of poverty. There was so much demand for labor in the lead-up to the world wars that teenagers or young children often worked in factories as well.

      Today the unemployment rates in countries where AI is
      • Today the countries where AI is being innovated on and used most have some of the lowest unemployment rates. Compare this with less advanced countries where unemployment, and specifically youth unemployment, is in the double digits. The U.S. has enough demand for labor that millions upon millions of immigrants have come here over the last several decades, legally or otherwise. Making labor more efficient does not diminish the need for more of it. At most it shifts where it's most efficiently allocated.

        This concept is missing an important distinction. While technology has a way of creating opportunities, never in history has a situation arisen where the capabilities of "dead labor" are indistinguishable from "living labor." AGI, if it arrives, would fundamentally alter the equation in a way that has never before been the case.

        • by narcc ( 412956 )

          Again, AGI is silly science fiction nonsense. It's no more dangerous than the monsters in your closet or any other imaginary threat.

      • by flippy ( 62353 )
        that's a lot of cards in that house of logic.

        Your analysis is completely wrong and factually inaccurate. The industrial revolution led to massive employment opportunities for people, which is why they flocked from the countryside to cities where factories were located.

        I'm with you this far.

        Increased productivity led to better lives for more people and elevated many out of poverty.

        More people having better lives is much more of an opinion than fact.

        There was so much demand for labor in the lead up to the world wars that teenagers or young children often worked in factories as well.

        I'm not sure that's as good a thing as you think it is.

        Today the countries where AI is being innovated on and used most have some of the lowest unemployment rates. Compare this with less advanced countries where unemployment, and specifically youth unemployment, is in the double digits.

        You're stating that as if there were a causal relationship between work on AI and low unemployment rates. It's far more likely that advanced economies have both AI work going on and low unemployment than that low unemployment is a result of work on AI. Remember, correlation =/= causation.

        The U.S. has enough demand for labor that millions upon millions of immigrants have come here over the last several decades, legally or otherwise. Making labor more efficient does not diminish the need for more of it. At most it shifts where it's most efficiently allocated.

        it's be

  • by gweihir ( 88907 ) on Wednesday October 22, 2025 @11:47AM (#65743090)

    Or just craving attention. They may as well call for a ban on magic. There is no "superintelligence" (and there likely never will be, due to fundamental limitations of physics in this universe), and there is no known technology that can even do regular (pretty dumb) average intelligence.

    • The problem is that we may know that AI is nothing like intelligence, but the general public has no clue, mostly because of the marketing blitz from Big Corporate intentionally misrepresenting its current abilities.
  • by Rosco P. Coltrane ( 209368 ) on Wednesday October 22, 2025 @11:52AM (#65743108)

    Ban the dumb, lying, hallucinating, sycophantic, power-hungry insanity we have now and bring AI online only after it's proven to be reliable.

    • If only there was a system of government other than oligarchy, then we could make that happen.

    • Ban the dumb, lying, hallucinating, sycophantic, power-hungry insanity we have now and bring AI online only after it's proven to be reliable.

      Were you talking about computers, or about lawyers, politicians, CEOs, and salespeople?

  • More than 70 million people have called for a ban on just intelligence.

  • And by outlaws, I mean governments.

  • What language was it written in? Are they sure China is going to be reading their missive?

  • So, nobody has the faintest idea how to actually go about building a super-intelligent AI. (Unless some very clever people are keeping a very big secret somewhere.) So what's the ban supposed to apply to? And if and when people do figure out something, banning it in a few countries isn't going to help.
    • Yes, it does seem likely that if we did figure out how to build something smarter than us, it'd be as a happy accident while trying to make something more mundane. How much smarter than us it'd need to be to be useful for whatever resources it took to run would be another matter, too.
  • A) Is there any reason to care what these people think? Does their statement have any more intellectual value than the average Slashdot thread?

    B) Do they remotely have the capacity to ban the development of super-intelligence even if they are right about its threat to humanity?

    C) What the hell is "super-intelligence"? I suspect what they mean is anything that can replace their "super-intelligent" selves. And they are convinced that intelligence is what defines what it is to be human.

    D) Imagine someone wanti

  • Right now AI is very reliant on human data to be trained and also to learn. No human data, no AI.

    • First, it has to be defined. What exactly is "super" intelligence? "Super" is nothing but an advertising prefix.

  • Shit like this. Oh yeah, I totally trust a bunch of smart dudes who think they're smart telling other people "don't do this. I know it'll totally make you a whole shitload of money maybe, but hey, don't do it!" For a counterpoint, a bunch of losers elected Trump to try to build their little quasi-religious utopia. We know how that goes; all we have to do is look back in history. No, humans are going to crash into the wall full speed, throttle held down, and then later: "damn, what happened?"
  • by argStyopa ( 232550 ) on Wednesday October 22, 2025 @01:37PM (#65743464) Journal

    The Kellogg-Briand Pact was an agreement to outlaw war signed on August 27, 1928.

    Maybe they could get Francis Fukuyama to draft the document? (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FThe_End_of_History_and_the_Last_Man)

  • From what I understand, we already have (had) that internally. But the larger issue here is control -- and who has it -- and who has the advantage of utilizing the same, versus the rest of us.

    I seem to recall not too long ago when Microsoft and OpenAI (I believe) were pushing so hard for AI regulation, with them conveniently at the helm?

    We are looking at society-changing technologies, and I believe a lot of it will make some corporations moot... and those ships have sailed.

  • Why talk about banning something that isn't in reach yet? First let's see if superintelligence is possible and what it might look like, then we can discuss whether it should be banned.

  • It's too late for that.

    We can't afford to quit now. None of the world's militaries or three-letter security agencies will give up a weapon that might give them an advantage, ever. If we stop, we've lost any future conflict that might arise.

    Imagine a future in which the world is controlled by Iran or North Korea. Competitive AI is the only way to prevent this.

    This is the real world of realpolitik and real military power. Idealistic pearl-clutching at this point is just noise. A ban won't happen, can't happen.

    • By that argument, maybe the solution is to flip the table ... EMP the Earth, so NO ONE will get to that stage for the next 100 years since we'll be too busy rebuilding from the Second Dark Age.
  • where "ai" is illegal because its too smart, but "vi" is ok because its more a waste of your time than anything helpful.
  • Is the intent to hand the initiative to China? They have less than zero reason to conform to our demands.

    • First, they have a lot fewer nukes than the US and Europe combined. Second, their economy is still heavily dependent on exports. Shut that valve off tomorrow and it will hurt them, a lot.
