Comment Stopping AGI still possible, but barely (Score 1) 166

I agree with "When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future." but I think the author is underestimating how hard actually stopping AGI will be. The basic problem is that computers capable of running AGI are probably already here, and already widespread. Eliezer Yudkowsky estimated that AGI could be done on a home computer from 1995. Steve Byrnes estimated that AGI could probably be done on an NVIDIA RTX 4090 with 16 GiB of RAM. As for myself, I think Yudkowsky and Byrnes are making reasonable claims, and you might have to restrict hardware to circa-1985 home computer levels to be sure that AGI can't run on it. If you think a home computer can't run an AI, then I recommend trying Ollama or llama.cpp on your own computer with gemma3:1b or gpt-oss-20b (gemma3 requires about 4 GiB of RAM, gpt-oss about 16 GiB). I don't think LLMs are the most efficient way of doing AI, but even they can more or less pass as intelligent (if not quite human). And people are running AI on much more powerful computers than that.
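The RAM figures above can be sanity-checked with some quick arithmetic. This is my own sketch, not from the comment: I assume weights quantized to roughly 4 bits and a rough 2x overhead for the KV cache, activations, and runtime.

```python
# Back-of-envelope RAM estimate for running a small quantized LLM locally.
# Assumptions (mine): ~4 bits per weight, ~2x overhead for KV cache,
# activations, and the runtime itself.

def model_ram_gib(params_billions, bits_per_weight=4, overhead_factor=2.0):
    """Rough GiB of RAM needed to run a quantized model."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 2**30

print(f"gemma3:1b   ~{model_ram_gib(1):.1f} GiB")   # well under 4 GiB
print(f"gpt-oss-20b ~{model_ram_gib(20):.1f} GiB")  # in the ballpark of 16 GiB
```

The numbers come out near the 4 GiB and 16 GiB figures cited above, which is why these models run on ordinary home computers.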

So what would it take to stop AGI? Basically, stop using powerful computers for experimental AI, stop publishing AI research that lowers the hardware requirements, and do both globally, before AGI is created. I think removing this existential risk is a good thing, but we have to realize that it would be the most difficult political accomplishment humans have ever attempted. Decreasing the probability of creating ASI is probably a bit simpler, but would still be a hard challenge. (This is roughly MIRI's proposal.)

Comment Soon, because a desktop computer can do AGI (Score 2) 49

I suspect it will be soon, because powerful desktop computers can probably already run AGI.

Eliezer Yudkowsky predicted that a superintelligent AGI could be done on a "home computer from 1995" https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fintelligence.org%2F2022%2F...

Steve Byrnes predicted (with 75% probability) that human-equivalent AGI could be done with 10^14 FLOP/s and 16 GiB of RAM https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.alignmentforum.org...

I have done some back-of-the-envelope calculations and think 500 GFLOP/s and 1 GiB of RAM could probably be enough for an independence-gaining AGI. https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.researchgate.net%2Fp...

So I think it is just a matter of figuring out the computer program to do so.
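For scale, the estimates above can be compared against a modern consumer GPU. The ~83 TFLOP/s FP32 figure for an RTX 4090 is my own assumption for illustration, not a number from the comment:

```python
# Comparing the AGI compute estimates to a modern consumer GPU.
# Assumption (mine): an RTX 4090 delivers roughly 83 TFLOP/s at FP32.

byrnes_flops = 1e14        # Byrnes: ~10^14 FLOP/s for human-equivalent AGI
my_estimate_flops = 500e9  # my estimate: 500 GFLOP/s
rtx_4090_flops = 83e12     # approximate FP32 throughput of an RTX 4090

print(f"4090 vs Byrnes's estimate: {rtx_4090_flops / byrnes_flops:.2f}x")
print(f"4090 vs my estimate:       {rtx_4090_flops / my_estimate_flops:.0f}x")
```

By either estimate, a single high-end consumer GPU is already at or far beyond the compute threshold; only the program is missing.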

Comment Re:if it's "general" (Score 1) 96

That is a good question. I think Alan Turing was on the right track when he proposed using a conversation. However, the point should not be for the AGI to try to be human, but instead to be intelligent. When the AGI can answer any question intelligently, then the AGI probably is intelligent.

Alternatively, we will know the AGI is sufficiently general when the AGI takes over the world.

Comment Not really a problem (Score 1) 99

I did some calculations about dumping the tritium at Fukushima into the ocean. There are 760 TBq of tritium in the Fukushima water. That is about 20,540 Ci (760e12 / 3.7e10). The EPA limit for drinking water is 20,000 picocuries/liter, or 2.0e-8 Ci/liter, so if you dilute the tritium in a bit more than 1 trillion liters of water, the water would be safe to drink as far as tritium is concerned (20540 / 2.0e-8 liters). There are a trillion liters in a cubic kilometer, so even if you dumped all the water in at once, a couple of kilometers away from the dump site the water would be within the safe drinking limit for humans (ignoring the fact that we can't drink salt water anyway). So I think putting a controlled amount in the water (to keep the dose at the dump site reasonable) is fine. Also, tritium has a 12.3-year half-life, so it will go away over time (in about 123 years there will be a thousandth of the tritium left).
(Sources: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F... https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.nrc.gov%2Freading-rm... ) (These are of course my own opinions, not my employer's and have not been reviewed by a professional engineer.)
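The arithmetic above, spelled out so it can be checked:

```python
# Dilution arithmetic for the Fukushima tritium, as in the comment.

BQ_PER_CI = 3.7e10                   # becquerels per curie
tritium_bq = 760e12                  # total tritium activity, 760 TBq
tritium_ci = tritium_bq / BQ_PER_CI  # ~20,540 Ci

epa_limit_ci_per_l = 2.0e-8          # EPA drinking-water limit, 20,000 pCi/L
liters_needed = tritium_ci / epa_limit_ci_per_l
km3_needed = liters_needed / 1e12    # 1 km^3 = 1e12 liters

half_life_years = 12.3
years_to_thousandth = half_life_years * 10  # 2^10 ~ 1000

print(f"{tritium_ci:,.0f} Ci diluted to the EPA limit needs ~{km3_needed:.2f} km^3")
print(f"~1/1000 of the tritium remains after ~{years_to_thousandth:.0f} years")
```

About one cubic kilometer of seawater suffices, which is why dilution a few kilometers from the dump site brings the concentration under the drinking-water limit.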

Comment But how do you shut an AGI off? (Score 1) 130

How do you shut off a sufficiently intelligent Artificial General Intelligence? This is a harder problem than you might think. See, for example: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fintelligence.org%2F2017%2F... The technical term is corrigibility, and there is no solution yet.

Comment How do you make friendly AI? (Score 2) 311

The problem is that we don't know how to make friendly AI. At some point, Artificial Intelligences will be able to beat humans at any task; at that point, how do you make sure they don't destroy humanity (possibly through mere indifference)? And even if you don't care about humanity, how do you make sure they do something interesting with the universe?

Various articles:
Stuart Armstrong's book Smarter Than Us discusses what happens when machines are smarter than humans:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fintelligence.org%2Fsmart...
http://jjc.freeshell.org/Smart...
Bill Joy's article Why the Future Doesn't Need Us, on the dangers of robotics:
https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.wired.com%2F2000%2F04%2F...
Tim Urban's article on superintelligence:
http://waitbutwhy.com/2015/01/...
http://waitbutwhy.com/2015/01/...

Comment Re:How much longer before Wikipedia supports MP3 ? (Score 1) 140

How many more years until Wikipedia supports MP3 ? They don't give a damn about everyone being able to use their website right now. Will it change?

They are working on it, but will probably wait until encoding is also patent-free. See https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fphabricator.wikimedia.... and https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fphabricator.wikimedia....
