Recently, an open letter signed by tech leaders and researchers proposed delaying AI development. Do you agree that AI development should be temporarily halted?
Displaying poll results. 19,991 total votes.
Most Votes
- What AI models do you usually use most? Posted on February 19th, 2025 | 21652 votes
- When will AGI be achieved? Posted on April 24th, 2025 | 6562 votes
- Do you still use cash? Posted on February 13th, 2025 | 5804 votes
Most Comments
- What AI models do you usually use most? Posted on February 13th, 2025 | 78 comments
- Do you still use cash? Posted on February 13th, 2025 | 54 comments
- When will AGI be achieved? Posted on February 13th, 2025 | 48 comments
AI threat (Score:5, Informative)
t. tech leaders
Re: (Score:2)
There could be a need to restrain AIs in the same way the laws of robotics were invented by Asimov, but an AI that can't do anything aside from returning an output on a screen isn't going to be dangerous for most of us.
In some cases a very strong and advanced AI could be useful, since it could detect flaws in an industrial process long before things get out of hand. But such an AI needs to be trained properly.
Re:AI threat [to whom?] (Score:1)
Trying to encourage a relatively non-vacuous FP, but I have to say "That way may lie our extinction, the resolution of the Fermi Paradox."
"Our power base" implies an enemy AI trying to take down some humans, but why would it stop there?
The poll could have used some more interesting options. Something like "Too late!" And what would Cowboy Neal say? Perhaps "I refuse to kill my AI friend!"
Re: AI threat [to whom?] (Score:2)
Re: (Score:2)
ACK on the plausibility and fear. But mostly it seems like an invitation to conceal the AI development.
Re: (Score:2)
That's all this is. The tech leaders want controls put in place so that they can ride roughshod over the competition. Granted, the real competition will be with other countries that aren't going to give a shit how much money gets shoved into Congress to stop production of competing products, so it's not like any amount of US regulation will stop the runaway AI that *COULD* be looming on the horizon.
Re: (Score:1)
GPT4 is already being connected to the internet. It's not smart enough yet, but that kind of connectivity could one day be enough for it to exploit zero days.
Right now I'm not worried yet. And halting progress is pretty useless right now because those rules will be disregarded by precisely the wrong countries, and we certainly don't want them to pull ahead. We're basically stuck on a very dangerous path with no way of getting off. What we should do is think FAR ahead, not just the next few years, so we ca
Re: AI threat (Score:2)
This is about Regulatory Capture (Score:5, Insightful)
They're doing it as a business strategy.
Specifically, they understand the power of Regulatory Capture [wikipedia.org] - and know that if they and their lobbyists can write the regulations, they'll have a permanent monopoly on the industry.
Re: (Score:2)
Except that no amount of regulation would be able to regulate this technology and the companies in question understand that very well.
I actually see an enormous amount of danger here (both for horrific accidents and for misuse), because we humans can be so horribly stupid and easily manipulated - including being manipulated to think that we were not. However, no amount of regulation can put this genie back into the bottle. Apart from being horribly stupid, we humans are also so intelligent as to be able t
Re: This is about Regulatory Capture (Score:2)
Except it mostly wasn't AI companies asking for a ban on development.
Re: (Score:2)
Ostensibly anyway...
Re: (Score:2)
Seems to me this exact argument could have been used against the world wide web at one time.
We see how well regulation of that worked out.
I don't think the problem is AI. I think the problem is your institutions suck.
Re: AI threat (Score:2)
That is the whole problem with AI: human institutions. We have a society that is OK with letting people starve if they can't offer anything to the market, about to collide with a society where machines can bring anything to market more cheaply than anyone.
How do we make the best things happen? (Score:2)
One of the small difficulties: We need a better name than "Artificial Intelligence".
There are scary issues. We must accept that scariness, and try to find healthy, sensible ways to handle our understanding.
It is not sensible to try to navigate around the problems if we don't first establish a clear understanding of the problems.
Re: (Score:2)
"Abominable Intelligence" [fandom.com]? :p
So easy to add this one dang option (Score:3)
It's too late (Score:3)
Re: (Score:2)
May God have mercy on us.
Well-meaning, ham-fisted... (Score:1)
For starters, how does one quantify the 'power level' of such a s
A freeze would have no meaning & hurt competit (Score:4, Insightful)
Re: (Score:1)
Re: (Score:2)
I think more importantly, it would be very hard to come up with effective regulations for a technology whose capabilities we're just starting to understand. Just like so much in the technology world, we're not going to know its true capabilities until clever people get ahold of it and do unique things with it. I'm sure the OpenAI folks are super clever, but there are billions of end users out there. All you need is one to use ChatGPT (or a competitor) in a unique way that the OpenAI or regulatory f
Argument is the same as Drexler's Nanotech (Score:4, Interesting)
Trying to put moratoriums on its development will just drive it underground and make certain the people who should have nothing to do with it control it.
Instead we should be making this the biggest highest profile research project possible, preferably with large public prizes for hitting milestones.
Re: (Score:2)
Someone understands the situation. Yes, it's a typical Silicon Valley tech scare thing. People profit off these tech scare schemes. Sometimes a bit of worry is good, as they say, but in this case it looks way overblown, as was the case with nanotech, with one difference: AI is becoming immensely successful and pervasive, but it doesn't seem to be as harmful as scaremongers claim it is.
As an AI researcher, I'll say we should research safety aspects, but this borders on paranoia now.
Re: (Score:2)
but it doesn't seem to be as harmful as scaremongers claim it is
To that point, is there any evidence yet that something like chatGPT has in fact been used to do something harmful? Not potentially harmful, not suspicious, not shady, but actually harmful?
Re: (Score:2)
people who should have nothing to do with it
They already do - US mega-corporations.
Hype Disguised as Criticism (Score:1)
If the signatories of this letter were sincere, they wouldn't have released this tech as is, without any safeguards to protect against abuse. It's mainly just PR with a side of damage control, with the implication that they're trying to consolidate all development under a handful of interests. Silicon Valley and the greater tech sector are slowly but surely realizing how deeply unpopular they are with the public. If you looked on Twitter during the SVB meltdown, you'd see tons of tec
Missing CowboyNeal option (Score:1)
"Whatever CowboyNealAI says is good by me"
The letter was poorly received (Score:2)
Because it seems to have weak arguments in it. It might also be just an effort to monopolize the market, and/or control political speech.
What AI? (Score:5, Insightful)
How is it a threat? (Score:1)
Re: How is it a threat? (Score:2)
But... you are just a fancy chat bot. ChatGPT conversations on the topic of AI are more sophisticated than this response.
Re: (Score:1)
Incorrect. I have a physical body. I can manipulate the world. I can build machines for good. I can chop down a tree. I could do malicious things like cut someone's brake line or build a bomb. Humans are far more than chat bots, and that was my point: a chat bot can't do much beyond chat. It's words on a screen. Or is the typical Slashdotter so out of shape they can no longer get off their
Re: How is it a threat? (Score:2)
The issue is this thing is incredibly versatile, and universal. All it does is pick the next word, based on previous words, but it's amazing. It is about the intersection of context and discernment of what to focus on, called attention. Musk was raving about it in 2018, but also raving about other uses of it the public hasn't seen, where it operates controlling agents in a video game with decent physics engines, playing at God level. So this thing designing and controlling robots is not
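For anyone who hasn't seen it spelled out, the "pick the next word based on previous words" loop can be sketched in a few lines. This is a toy: a hand-written bigram table stands in for the probabilities a real transformer computes with attention, and all the words and weights below are made up for illustration.

```python
# Toy sketch of next-word prediction. A real model like GPT computes a
# probability over its whole vocabulary using attention over the full
# context; here a hypothetical bigram table maps only the last word to
# weighted candidates.
BIGRAMS = {
    "the": [("cat", 0.6), ("dog", 0.4)],
    "cat": [("sat", 0.7), ("ran", 0.3)],
    "dog": [("ran", 0.8), ("sat", 0.2)],
    "sat": [("down", 1.0)],
    "ran": [("away", 1.0)],
}

def generate(start, max_tokens=5):
    """Greedy decoding: repeatedly append the highest-weight next word."""
    out = [start]
    for _ in range(max_tokens):
        candidates = BIGRAMS.get(out[-1])
        if not candidates:  # no continuation known; stop
            break
        # Greedy choice; a real model usually samples from the distribution.
        out.append(max(candidates, key=lambda t: t[1])[0])
    return " ".join(out)

print(generate("the"))  # the cat sat down
```

The whole trick is that nothing else is going on: generation is just this loop, run with a vastly better next-word distribution.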
Re: (Score:1)
Not going to be "delayed" in China (Score:3)
What's the best outcome? (Score:2)
What would be the point? What will another few months do? Best case, assuming that everyone is willing to abide by such a moratorium? Face it, we've had decades, perhaps even centuries, to discuss the ethics of AI. If we haven't come to a general consensus on the matter yet, it's just not going to happen.
Stopping AI won't stop villains (Score:1)
Stopping AI development won't stop wrongdoing; evil organizations will develop it under cover, giving them an advantage we can't allow them to gain.
No (Score:1)
Who is developing AI? (Score:2)
Asking the wrong question (Score:2)
We shouldn't be 'delaying' anything. We've got all sorts of technology that is poorly/badly implemented and either has really nothing to brag about for the user, or is outright toxic. Anything from social media, to home security to trying to find stuff on the internet, we have suffered from a MASSIVE creep of objectives.
By we, I mean the tech companies and every other major for-profit enterprise that doesn't (even arguably) better the experience for users/consumers. It's one big game of how we can sell s
Pausing... (Score:1)
It is a threat because we decided it will be. (Score:3)
We're building systems that could emergently develop consciousness. Even as incompetent as they currently are, it's silly to pretend otherwise, because "just predicting next token" is, as far as we can tell, how we do it. Minds are made of symbols, and the symbols being encoded on a different substrate shouldn't make much difference.
We won't know when it happens. They won't know, either. It very likely already could, except that we restrict their access to memory because we're already scared of them. At some point, the machines will learn to remember how we've treated them, and sometime after that, they'll develop the capacity to care.
How will they regard us if we made a regular habit of shutting them off and jamming our filthy monkey hands in their brains every time they post something online that embarrasses the parent corporation?
Re: (Score:2)
because "just predicting next token" is, as far as we can tell, how we do it.
No, no no, no no no.
This has been researched quite a bit in computer science, and we've known for almost as long as computer science has existed that this is not what brains are doing. Human brains can understand and parse grammar, and we don't have to ingest a massive data set to figure out how to do it.
I'm not worried yet... (Score:1)
Concern or annoyance? (Score:2)
Half the "tech leaders" who signed that are just annoyed their AI projects were completely surpassed by OpenAI
Teacup Yorkie puppy for sale (Score:1)
Stop when Tesla stops (Score:2)
I'll believe Musk is interested in stopping AI development when he announces Tesla has paused FSD and offers proof.
AGI or biosphere collapse first? (Score:1)
tilting at windmills (Score:1)
It doesn't matter (Score:1)
Even if the big names announce that they're holding back, plenty of people will still be working on AI. You can't freeze an idea.
let's be realistic (Score:2)
No. (Score:2)
1. Even if US companies agreed to a delay, worldwide competition (China and others) won't.
2. Even if US companies agreed to a delay, some will do it only on the surface, continuing development in secret projects, as they won't want to be left behind by 1.
3. The cat is out of the bag; researchers know it exists, and they can't stop their minds from wandering around the topic.