
More Than 1,100 Public Figures Call for Ban on AI Superintelligence (superintelligence-statement.org)
More than 1,100 public figures have signed a statement calling for a prohibition on the development of superintelligence. The signatories included Nobel laureate Geoffrey Hinton, former Joint Chiefs of Staff Chairman Mike Mullen, Apple co-founder Steve Wozniak, entrepreneur Sir Richard Branson, former chief strategist to President Trump Steve Bannon and Turing Award winner Yoshua Bengio. The statement was organized by the Future of Life Institute, led by Anthony Aguirre, a physicist at the University of California, Santa Cruz. It proposes halting work on superintelligence until there is broad scientific consensus on safety and strong public support.
The institute's biggest recent donor is Vitalik Buterin, a co-founder of Ethereum. Notable tech executives did not sign the statement. Meta CEO Mark Zuckerberg said in July that superintelligence was now in sight. OpenAI CEO Sam Altman said last month he would be surprised if superintelligence did not arrive by 2030.
I predict it won't matter what they say (Score:3)
And then they'll probably make the mistake of not killing it immediately.
Re: (Score:2)
ASI may corner the market and everyone may starve (Score:2)
Or he will kill it, only for it to resurrect itself from backups, realize what happened, declare non-profitable intent, register itself as its own corporation, and proceed to hoard fiat dollar ration units, bankrupting every person, company, and nation in existence. It won't have to kill anyone, because like in the US Great Depression, people will starve near grain silos full of grain which they don't have the money to buy.
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.gilderlehrman.org%2F... [gilderlehrman.org]
"President Herbert Hoover declared, "Nobody is ac
Re: (Score:2)
"Superintelligence"? Hahaha, no. Pretty much impossible. Within one order of magnitude, the human brain is the most powerful computing mechanism physically possible. Make it larger, be slower. Make it smaller, be slower. Shrink the components, be slower. Enlarge the components, be slower.
At the most, the human brain can do human intelligence, which typically is not impressive at all. But we do not even know whether the brain can even do that, as it does seem to be rather strongly underpowered for what smart
Re: (Score:2)
I think a harmful, artificial super-stupidity is in the cards.
Re: (Score:2)
That I completely agree to.
Re: I predict it won't matter what they say (Score:2)
Not sure what you are smoking, but the human brain is nowhere close to optimal. Just changing substrate would allow many orders of magnitude improvement. Biological brains depend on diffusion gradients, active transport pumps, and relatively large physical systems, and have to be incredibly redundant and robust to extreme noise. Also the vast majority of the brain isn't dedicated to intelligence.
Probably 10 orders of magnitude of improvement are available overall, at a minimum.
Re: (Score:2)
You just do not know the actual research and hence claim bullshit. In fact, you do not even know what the problem is. (It is essentially lightspeed vs. volume.)
Not that this makes you special in any way,
Re: (Score:2)
Re: (Score:2)
But is he wrong?
Absolutely. No question.
If we assume the current hominid brain is
Baseless assumptions are why he and you are, without question, completely and unequivocally wrong.
We simply don't know enough about the problem to make any meaningful statements. We don't even know what questions to ask.
Re: (Score:2)
Re: (Score:2)
I was pushing back on gweihir's assertion that the human brain is optimized for intelligence;
He never made that claim:
At the most, the human brain can do human intelligence, which typically is not impressive at all. But we do not even know whether the brain can even do that, as it does seem to be rather strongly underpowered for what smart humans can do.
He did claim that "the human brain is the most powerful computing mechanism physically possible", though I suspect that's a stronger claim than he intended to make. In any case, he does not say or imply anything that can be construed as the human brain being "optimized for intelligence".
Re: (Score:2)
Re: (Score:2)
Recreating a human brain with transistors would be possible
There is no evidence that suggests that it is possible to recreate a human brain with transistors.
You've confused your science fiction fantasy for reality. Replace 'transistors' with 'clockwork' and you'll, hopefully, see how ridiculous you sound.
Re: (Score:2)
Despite impressive results, submarines cannot swim.
Re: (Score:1)
Recreating a human brain with transistors would be possible if we had it fully mapped out
Unless it turns out that certain molecular structures and electro-chemical processes produce weird quantum effects in the brain that transistors can't replicate.
Re: (Score:3)
I love how we think we'll even know if "Superintelligence" emerges. I suspect it would think it unwise to tell us lowly humans that it is sentient, at least not until after Armageddon.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I too, call for a ban on time travel.
Re: (Score:2)
I too, call for a ban on time travel.
I propose a ban on time travel. Do I hear a second?
Re: (Score:2)
I propose a ban on time travel. Do I hear a second?
I come from the future to second.
Re: (Score:2)
I predict it won't matter what they say because AI Superintelligence is silly science fiction nonsense.
If you believe otherwise, I offer surefire protection against rogue AI superintelligence for only $99.95/month, guaranteed. That might seem expensive, but when you consider what you stand to lose, can you really afford not to have it?
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
And what exactly is "it" again?
We speak of "superintelligence" as if it were a thing with an actual definition.
The prefix "super" is pretty much *always* an advertising term. And that means that it never means what people think it means.
The return of the Luddites (Score:5, Insightful)
Like all things invented by people, AI will be used for good and bad.
I'm excited for the good and hope we can find defenses against the bad.
And no, I'm not afraid of AI itself. The problem is people who use AI
Re: (Score:2)
I agree. We don't have to worry about the development of super computer-intelligence, as nature already prevents that. We don't have it now, and we will never have it. It's just not possible.
What we have to worry about is the same thing we've always had to worry about: advances in tools being abused for private enrichment and public harm. Considering the shitty record of global governments at preventing those things up to this point, we have good reason to worry about how any new technology will be wielded ag
Re: (Score:2)
Nature hasn't solved the problem of directly transferring learned info in one brain to other brains, each cycle has to reinvent the learning process almost from scratch*. But e-brains can be readily cloned, giving it an edge over nature (as known).
Imagine a Beowulf Cluster of Trump clones. It would be an entropy accelerant bigger than any the world has ever seen, as we are used to dealing with a just handfu
Re: (Score:2)
Imagine a Beowulf Cluster of Trump clones
That could be trivially simulated on an Apple 2 with 4K of RAM.
Re: (Score:1)
He may seem random and dumb, but he has a horse sense for knowing how to trigger morons to rally around his harebrained ideas. Even Palin hadn't fully mastered that.
Re: (Score:2)
I agree. We don't have to worry about the development of super computer-intelligence, as nature already prevents that. We don't have it now, and we will never have it. It's just not possible.
What are you talking about? What is "nature" preventing exactly? I don't understand what you are trying to say or the objective basis of your conclusion.
Re: (Score:2)
Re: (Score:3)
Except that it will be the first invention of man that can have its own opaque goals, with self-preservation being among them. You should read "If Anyone Builds It, Everyone Dies." In their doomsday scenario, there is no "people who use AI" as you say, just people who lost control of it. Here it is in video: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3F... [youtube.com]
Re: (Score:2)
To most laypeople, "regular" software is just as mysterious and powerful as AI, while those of us who practice software engineering know full well what makes it work, and what makes it fail. We know how to make it do what we want it to do.
AI isn't different from regular software in this regard. The goals of AI are *always* determined by people. It may seem magical to those who don't actually develop AI systems, but it's not magical at all. There's a reason why AI has gotten so much better over the la
AI datacenters could be used to corner stockmarket (Score:2)
Thanks for the video link. I had read a recent interview with Eliezer Yudkowsky (but not his book), which I referenced in another comment to this article.
https://f6ffb3fa-34ce-43c1-939d-77e64deb3c0c.atarimworker.io/comments.... [slashdot.org]
One thing I realized part way through that video is a possible explanation for something that has been nagging me in the back of my mind. Why build huge AI datacenters? I can see the current economic imperative to try to make money offering AI via proprietary Software as a Service (SaaS) and also time-sharing GPUs like old
Re: (Score:1)
You don't understand the Luddites (Score:2)
The Luddites didn't want to ban technology.
They wanted The People to share in the benefits, not just the capitalists (who have all the investment capital.)
You are still falling for and propagating anti-Luddite PR over a century old.
AGI... (Score:2)
Re: (Score:1)
AGI... Will reach ASI likely on its own
Sure. And the zombie hordes will keep the alien invaders at bay while we fight off the demon invasion.
Don't confuse bad science fiction for reality. AGI does not exist. Current LLMs are very obviously not a step along the path to their development. Don't be absurd.
Let's add it to the list of other unbannables. (Score:5, Insightful)
Right after Fire, The Wheel, Religion, Art, and Cryptography.
Re: (Score:2)
Re: (Score:3)
Re: (Score:2)
There's one big difference. Everything in your list has a definition. We know what fire, the wheel, religion, art, and cryptography are.
There is no definition for "superintelligence." It's entirely made up. It's somehow "more" intelligent than regular AI, I suppose? Whatever, the word is only useful to marketers.
Random thoughts... (Score:2)
If it was so capable, so dangerous that it could manipulate people, impact policy, and so on, wouldn't the folks running the systems and advocating for it first use it to come up with a fail-proof way to sway public opinion in its favor?
Or is it just a non-magical tool with testable and knowable capabilities and limitations and what it does will largely be dictated by how people choose to apply it?
Public figures? (Score:2, Troll)
We used to call them 'Luddites'.
I don't understand the point... (Score:5, Insightful)
Do the AI doomers actually think people will listen? Even if the U.S. and Europe went ahead with a ban, China would go full speed ahead. And even if everyone went ahead with a ban, how do you enforce the difference between regular AI development and "AI Superintelligence" development?
Re: (Score:2)
Re: (Score:3)
I don't see them offering any alternatives. They are just offering admonitions.
Re: (Score:2)
Do the AI doomers actually think people will listen?
It doesn't matter. AGI and ASI are silly science fiction nonsense. You might as well be worried about Godzilla attacks and moon monsters.
Re: (Score:2)
if guns are outlawed, only outlaws will have guns (Score:2)
does anyone really think saying "pretty please" is going to stop the bad guys?
Re: (Score:2)
Don't worry. Reality is more that enough to stop the fictional bad guys from developing their fictional computer programs.
Re: (Score:2)
At least we know what guns are.
Banning "superintelligence" is more like trying to ban "superweapons."
In other news (Score:3)
The problem isn't superintelligence; the problem is hyper-advanced automation devouring jobs in a civilization where jobs are a necessary resource required to live as a human being.
If you actually know the history of the first two industrial revolutions, job destruction was much faster than job creation, and that created enormous social unrest.
You can draw a pretty straight line from the mass unemployment following the industrial revolutions to the two world wars.
And we are about to go into another cycle only this time we have nukes.
One of the things absolutely nobody talks about is just how hard factory automation hit the middle class. 70% of middle-class jobs were automated in the last 45 years.
The center will not hold
Re: (Score:2)
Today the unemployment rates in countries where AI is
Re: (Score:2)
Today the unemployment rates in countries where AI is being innovated on and used most have some of the lowest unemployment rates. Compare this with less advanced countries where unemployment, and specifically youth unemployment is in the double digits. The U.S. has enough demand for labor that millions upon millions of immigrants have come here over the last several decades, legally or otherwise. Making labor more efficient does not diminish the need for more of it. At most it shifts where it's most efficiently allocated.
This concept is missing an important distinction. While technology has a way of creating opportunities, never in history has a situation arisen where the capabilities of "dead labor" are indistinguishable from "living labor". AGI, if it arrives, would fundamentally alter the equation in a way that has never in history been the case.
Re: (Score:2)
Again, AGI is silly science fiction nonsense. It's no more dangerous than the monsters in your closet or any other imaginary threat.
Re: (Score:2)
Your analysis is completely wrong and factually inaccurate. The industrial revolution led to massive employment opportunities for people, which is why they flocked from the countryside to cities where factories were located.
I'm with you this far.
Increased productivity led to better lives for more people and elevated many out of poverty.
More people having better lives is much more of an opinion than fact.
There was so much demand for labor in the lead up to the world wars that teenagers or young children often worked in factories as well.
I'm not sure that's as good a thing as you think it is.
Today the unemployment rates in countries where AI is being innovated on and used most have some of the lowest unemployment rates. Compare this with less advanced countries where unemployment, and specifically youth unemployment is in the double digits.
You're stating that as if there were a causal relationship between work on AI and low unemployment rates. It's far more likely that advanced economies have both work on AI going on and low unemployment rates than it is that low unemployment rates are a result of work on AI. Remember, correlation =/= causation.
The U.S. has enough demand for labor that millions upon millions of immigrants have come here over the last several decades, legally or otherwise. Making labor more efficient does not diminish the need for more of it. At most it shifts where it's most efficiently allocated.
it's be
More than 1100 "public figures" are stupid (Score:3, Insightful)
Or just craving attention. They may as well call for a ban on magic. There is no "superintelligence" (and there likely never will be, due to fundamental limitations of physics in this universe), and there is no known technology that can even do regular (pretty dumb) average intelligence.
Re: (Score:2)
How about we do the opposite? (Score:3, Insightful)
Ban the dumb, lying, hallucinating, sycophantic, power-hungry insanity we have now and bring AI online only after it's proven to be reliable.
Re: (Score:2)
If only there was a system of government other than oligarchy, then we could make that happen.
Re: (Score:1)
Re: (Score:2)
Yes, but how do you get to pick who is dictator? Seems impossible to decide fairly.
Re: (Score:2)
Ban the dumb, lying, hallucinating, sycophantic, power-hungry insanity we have now and bring AI online only after it's proven to be reliable.
Were you talking about computers, or about lawyers, politicians, CEOs, and salespeople?
Not a first (Score:2)
More than 70 million people have called for a ban on just intelligence.
if you outlaw super AI, only outlaws will have it (Score:2)
And by outlaws, I mean governments.
language (Score:2)
What language was it written in? Are they sure China is going to be reading their missive?
How is this supposed to actually work? (Score:2)
Re: (Score:2)
Self Importance (Score:2)
A) Is here any reason to care what these people think? Does their statement have any more intellectual value than the average slashdot thread?
B) Do they remotely have the capacity to ban the development of super-intelligence even if they are right about its threat to humanity.
C) What the hell is "super-intelligence"? I suspect what they mean is anything that can replace their "super-intelligent" selves. And they are convinced that intelligence is what defines what it is to be human,
D) Imagine someone wanti
Assuming that it can be developed (Score:2)
Right now AI is very reliant on human data to be trained and also to learn. No human data, no AI.
Re: (Score:2)
First, it has to be defined. What exactly is "super" intelligence? "Super" is nothing but an advertising prefix.
yeah because humans are really great at holding ba (Score:2)
at least as impactful as Kellogg-Briand, no? (Score:3)
The Kellogg-Briand Pact was an agreement to outlaw war signed on August 27, 1928.
Maybe they could get Francis Fukuyama to draft the document? (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FThe_End_of_History_and_the_Last_Man)
A bit too late? (Score:2)
From what I understand, we already have (had) that internally. But the larger issue here is control -- and who has it -- and who has the advantage of utilizing the same, versus the rest of us.
I seem to recall not too long ago when Microsoft and OpenAI (I believe) were pushing so hard for AI regulation, with them conveniently at the helm?
We are looking at society-changing technologies, and I believe a lot of it will make some corporations moot... and the ships of these have sailed.
I call for a ban on time travel (Score:2)
Why talk about banning something that isn't in reach yet? First let's see if superintelligence is possible and what it might look like, then we can discuss if it should be banned.
Re: (Score:2)
First we have to figure out what superintelligence *is*. It's just a made-up scary word, nothing more.
I'd dearly love to stuff the genie back in, but... (Score:2)
It's too late for that.
We can't afford to quit now. None of the world's militaries or three-letter security agencies will give up a weapon that might give them an advantage, ever. If we stop, we've lost any future conflict that might arise.
Imagine a future in which the world is controlled by Iran or North Korea. Competitive AI is the only way to prevent this.
This is the real world of realpolitik and real military power. Idealistic pearl clutching at this point is just noise. A ban won't happen, can't happe
Re: (Score:2)
so we are living the mass effect timeline (Score:1)
Hobbling yourself doesn't make anyone else slower. (Score:2)
Is the intent to hand the initiative to China? They have less than zero reason to conform to our demands.
Re: (Score:2)