Stopping AGI still possible, but barely
I agree with "When we accept that AGI is inevitable, we stop asking whether it should be built, and in the furor, we miss that we seem to have conceded that a small group of technologists should determine our future," but I think the author is underestimating how hard actually stopping AGI will be. The basic problem is that computers capable of running AGI are probably already here, and already widespread. Eliezer Yudkowsky has estimated that AGI could run on a home computer from 1995. Steve Byrnes estimated that AGI could probably be done on an NVIDIA RTX 4090 with 16 GiB of RAM. For my part, I think Yudkowsky and Byrnes are making reasonable claims, and you might have to restrict hardware to circa-1985 home computer levels to be sure AGI can't run on it. If you doubt that a home computer can run anything in the neighborhood, try Ollama or llama.cpp on your own machine with gemma3:1b or gpt-oss-20b (gemma3 needs about 4 GiB, gpt-oss about 16 GiB). I don't think LLMs are the most efficient way of doing AI, but even they can more or less pass as intelligent, if not quite human, and people are already running AI on far more powerful computers than those models require.
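If you'd rather run that experiment from a script than at the Ollama prompt, here's a minimal sketch using the ollama Python client (pip install ollama). It assumes the Ollama server is installed and running and that you've already fetched the model with "ollama pull gemma3:1b"; the model names are the ones mentioned above, and the prompt is just an example:

    # Minimal local-LLM smoke test via the ollama Python client.
    # Assumes the Ollama server is running and the model was pulled
    # beforehand with:  ollama pull gemma3:1b
    import ollama

    response = ollama.chat(
        model="gemma3:1b",  # or "gpt-oss:20b" if you have ~16 GiB to spare
        messages=[{"role": "user", "content": "In two sentences, why is the sky blue?"}],
    )
    print(response["message"]["content"])

If even the small model gives you a coherent answer on commodity hardware, that's the point: the compute floor for passable machine intelligence is already sitting in living rooms.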
So what would it take to stop AGI? Basically: stop using powerful hardware for experimental AI, stop publishing AI research that lowers the hardware requirements, and do all of this globally, before AGI is created. I think removing an existential risk is a good thing, but we have to realize that this would be the most difficult political feat humans have ever attempted. Merely decreasing the probability of creating ASI, which is roughly what MIRI proposes, is probably a bit simpler, but still a hard challenge.