I remind you that OpenAI's "core values" begin by saying:
AGI focus: We are committed to building safe, beneficial AGI that will have a massive positive impact on humanity's future. Anything that doesn't help with that is out of scope.
We're not stupid. We know AGI doesn't exist. But several companies are in a race to create it. Take OpenAI: Its leader Sam Altman managed to get the entire safety-oriented nonprofit board fired, after the board tried to fire him because they didn't trust him for reasons that are still not public. Now OpenAI is trying to delete its safety-oriented nonprofit part and go full for-profit.
Some people don't think real AGI is possible--you know, the kind of AGI that could learn to drive a car by practicing, much as a human would... or take a course to learn something new and then teach that same course... or use deepfakes to run for office while pretending to be human... or orchestrate a coup. But how sure are you? Because I would draw your attention to how fast AIs are today.
Sure, ChatGPT is less than 2.5 years old, but I don't mean how fast they are built; I mean how fast they run. I've seen LLMs running on new AI chips that let them write an entire page of text in one second. I've seen hi-res photorealistic images generated in under five seconds. Right now they're just imitation machines, yes. But they're f**king fast imitation machines. They can recite more facts (and bullshit) than ten Ken Jenningses, and probably write more in a month than all humans on Earth can write in a year. My daughter is two, and shows no sign of comprehending the word "why". My one-year-old still can't say "up". Meanwhile, a new AI that speaks 30 languages can be created in three weeks, then copied onto a million concurrent instances housed inside a single data center. So what if you're wrong--what if my concerns aren't stupid? What if AGI is built and thinks vastly faster than we do? What if it isn't completely safe and completely benevolent?
All these LLMs saturating the world grew out of a neural net architecture invented less than eight years ago. The difference between then and now is the billions, and billions, and billions of dollars of investment money funneled into this field today with the goal of making AGI as fast as possible.
So what's stupid? Are all these investors stupid to think that AGI can exist at all? Is it stupid to think this is all happening too fast and that not all corporations involved in this race are responsible? A while ago I tried to paint a picture of how and why this could end badly. And I have an unfinished manuscript for a much longer story about this.
For what it's worth, I am up thousands of dollars on Polymarket, a site where people bet on the future. But I can't really predict what will happen once real AGI arrives--and not for lack of trying. Total utopia and total extinction seem about equally likely to me, and there are many other potential outcomes besides. I am not dumb enough to think AGI will never be invented. But I do want to delay it long enough for my children to grow up and experience what life was like before, in that time when humans still ruled the world. That's what a ban would do. A ban will not prevent the invention of AGI. It just delays it awhile. I would settle for that.