Comment Re:If all of AI went away today (Score 1) 145
Your scenario is impossible, so try again.
Because we're discussing a scenario where the big AI companies have gone out of business, remember? And the question is whether people just stop using the thing that they found useful, or whether they merely switch to whatever alternative still works.
It's like saying that if Amazon went out of business, people would just stop buying things online because "going to a different website is too hard". It's nonsensical.
They believed you could mimic intelligence with clockwork, etc. Why does it only count if it involves computers?
If you want to jump to the era of *modern* literature, the first generally accepted robot in (non-obscure) modern literature is Tik-Tok from the Oz books, first introduced in 1907. As you might guess from the name, his intelligence was powered by clockwork; he was described as no more able to feel emotions than a sewing machine, and was invented and built by Smith and Tinker (an inventor and an artist). Why not electronic intelligence? Because the concept of a programmable electronic computer didn't exist then. Even ENIAC wasn't built until 1945. The best computers in the world in 1907 worked by... wait for it... clockwork. The most advanced "computer" in the world at the time was the Dalton Adding Machine (1902), the first adding machine to have a 10-digit keyboard. At best, some adding machines had electric motors to drive the clockwork, but most didn't even have that; they had to be wound. This is the interior of the most advanced computer in the world in the era Tik-Tok was introduced. While in the Greco-Roman era, it might be something like this (technology of the era that, to a distant land that had only heard of it, probably sounded so advanced that it fueled the later rumours that Greco-Romans were building clockwork humans capable of advanced actions, even tracking and hunting down spies).
I think this is an oversimplification. Musk dreams of a sci-fi future. Isaacman does too (and is friends with Musk). Duffy wants to gut NASA. Hence, Musk strongly supported Isaacman. It's not too complicated; you don't need to search for subtext when what's out in the open makes perfect sense.
They don't have to "understand" anything. They just have to know that "If I go to this website, I can still ask the AI questions, even though ChatGPT shut down". Or that "If I click to install this app, I get an icon on my desktop and I can ask the AI questions there".
This is what got me. Why the hell are they calling a crypto auction something aimed at "the AI generation", when they clearly mean "Cryptobros"?
This is unscientific, but long ago I once conducted a poll on the Stable Diffusion subreddit, and one of the questions asked about people's opinions of crypto and NFTs. Only a small percentage liked it. The most popular poll choice by far was one with wording along the lines of "Crypto and NFTs should both go drown in a ditch."
It's an entirely different market segment. Crypto and NFTs appeal to gamblers, criminals, and anarcho-libertarians. AI appeals to those who want to create things, to automate things, and to save time or accomplish more. There's no logical relation between "This high school kid wants to save time on her homework" and "this 42-year-old mechanic thinks this bad drawing of an ape is going to be worth millions some day because a hash somewhere links its checksum to his private key."
The GP's comment wasn't claiming there's a nuclear waste problem (there isn't). They were talking about how nuclear waste can be burned in a breeder reactor, producing orders of magnitude more energy than burning a couple tenths of a percent of the natural uranium in a conventional reactor does.
Despite the press hype about thorium (which is way more popular among the media and nerds on the internet than with actual nuclear engineers), nuclear power is already basically unlimited, even without breeder reactors (which are very much viable tech, and much more mature than thorium). Only with an incredibly weak definition is it in any meaningful way "limited" - if you limit yourself to currently quantified reserves, at current fuel prices, with production mining tech, you have a bit over two centuries' worth at current burn rates. But this is obviously nonsense. Uranium production tech isn't going to advance in *two centuries*? Nobody is going to explore for more in *two centuries*? And as for "at current prices" - fuel is only a very small percentage of the cost of fission power, so who cares if prices rise? And rising prices or advancing production tech don't just put linearly more of a resource onto the market, they put exponentially more onto the market. As an example with uranium: seawater uranium could power the world's current (overwhelmingly non-breeder) reactor fleet for 13000 years, and current lab-scale tech is projected to be nearly as cheap as conventional uranium production at scale.
Also, if you switch to breeder reactors, you don't just extend the amount of fuel you have by two orders of magnitude - the cost of the raw mined uranium also becomes two orders of magnitude less relevant than its already very small percentage of the cost of fission power generation, because you need so much less per kWh.
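To put rough numbers on that (an illustrative sketch only; the burn fractions and the cost share below are assumptions loosely matching the figures mentioned above, not authoritative values):

```python
# Back-of-envelope: how breeders extend fuel supply and shrink the
# relevance of raw uranium cost. All figures are illustrative assumptions.

conventional_burn_fraction = 0.003  # ~0.3% of natural uranium fissioned (assumed)
breeder_burn_fraction = 0.95        # breeders also burn U-238 via Pu-239 (assumed)

extension_factor = breeder_burn_fraction / conventional_burn_fraction
print(f"Fuel supply extended by ~{extension_factor:.0f}x")

# If raw uranium is ~5% of the cost of fission power today (assumed),
# a breeder needs 1/extension_factor as much uranium per kWh:
uranium_cost_share = 0.05
breeder_cost_share = uranium_cost_share / extension_factor
print(f"Uranium's share of generation cost in a breeder: ~{breeder_cost_share:.3%}")
```

So even if uranium prices spiked tenfold, the fuel contribution to a breeder's cost per kWh would still be negligible.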
As for any thoraboos in the comments section: thorium fuel is more complex and expensive to fabricate (fundamentally - thorium dioxide has a higher melting point and is much harder to sinter), it's more complex to reprocess (it's more difficult to dissolve), its waste is much more hazardous over human timescales, the claimed resistance to nuclear proliferation is bunk, the tech readiness level is low and the costs are very high, and it's unclear whether it'll ever be economically competitive - most in the nuclear industry are highly dubious (due to what's needed to actually burn it vs. uranium). Hence the lack of investment. And I say this with the acknowledgement that nuclear power is already a very expensive form of electricity generation.
Easy for you, a technical person familiar with LLMs and WebAssembly
I'm not talking about how to develop LLM inference servers. You don't have to understand WebAssembly in order to run a WebAssembly program in your browser any more than you have to understand Javascript to run Javascript in your browser. It's *less* technological knowledge than using the Play store. And installing Ollama is no more difficult than installing any other app.
Your difficulty conceptions are simply wrong.
I don't understand your response. Was "life breathed into" the ancient Chinese robotic orchestras and singers, or the Islamic robotic orchestra and mechanical peacocks?
And re: myths, the aforementioned myths literally involved *humans* making the automatons. Ajatasatru, for example, the maker of the robots that guarded the artifacts of the Buddha, was also famous for using a mechanical war chariot of great complexity with whirling spiked maces, and later one with spinning scythes - not the sort of thing you would describe as having had "life breathed into" it, and actually quite similar to Leonardo da Vinci's chariot (in some versions he made it/them, in other versions it was a gift from the Indras). As for the robots guarding the Buddha, in one version they're literally powered by water wheels. In another version, the Greco-Romans had a caste of robot makers, and to steal the technology, a young Indian man is reincarnated as a Greco-Roman, marries the daughter of a robot-maker, and sews the plans for robots into his thigh, so that when he's murdered by killbots as he tries to flee with the plans, they still make it back to India with his body. Yes, ancient Indian legends literally involved robot assassins.
And as for the robots in the Naravahanadatta tales, they were literally made by a carpenter, and are specifically described as "lifeless wooden beings that mimic life".
Even with Hephaestus, a literal god, his automatons are very much not described as merely having life breathed into them - they're literally described as having been crafted (the Greeks were very much into machinery and described it in similar terms), and they behave as if they were programmed (the Kourai Khryseai are perhaps the most humanlike of Hephaestus's creations, but even they aren't described the way you would describe biological beings; they're described as remarkable for how lifelike they were). Of course it wasn't for-loops and subroutines - people had no conception of such a thing - but his creations behaved in a "programmed" way, not as things with free will.
I don't know why some people are so insistent on imagining that "sci-fi" things have to be recent. They're not. There were literally space operas being written in Roman times. Not scientifically accurate, of course, but sci-fi things - including automated things that mimic intelligence - are simply not new.
I'm sorry, but let me repeat: it is an apt description of what it's doing. Nobody says "... and does so via the exact same mechanism that the human brain uses when paying attention" (which I doubt you could describe anyway, and which is in reality very much still a topic of research).
Oh, and also, the resources needed to do a finetune (to update knowledge) - or heck, even just a LoRA - are vastly less than the resources needed to train a foundation model. And in any "AI-crash scenario", renting server time becomes cheap.
Installing Ollama is easy, and now there are even WebAssembly inference servers which load models straight into your browser but run on your own hardware. You literally just have to browse to the page and click.
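For the locally-installed route, the whole process is a couple of shell commands (a sketch assuming Ollama's standard Linux/macOS install script; the model name is just an example):

```shell
# Install Ollama via its official install script (Linux/macOS):
curl -fsSL https://ollama.com/install.sh | sh

# Download a model and start chatting with it locally;
# nothing is sent to a remote AI service:
ollama run llama3.2
```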
RAG does not require any meaningful amount of maintenance.
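To back that up: a RAG setup is, at bottom, a document index plus retrieval at query time; updating its knowledge means adding documents, not retraining the model. A minimal sketch (bag-of-words cosine similarity standing in for a real embedding model; `RagIndex` and every name here are illustrative, not any particular library's API):

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class RagIndex:
    """Minimal RAG retrieval index: a list of (document, vector) pairs."""

    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, doc: str) -> None:
        # "Maintenance" amounts to this: append new documents as facts change.
        self.docs.append((doc, embed(doc)))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

index = RagIndex()
index.add("Ollama runs large language models locally on your own machine.")
index.add("Thorium fuel is harder to sinter than uranium dioxide.")
context = index.retrieve("how do I run a language model locally?")[0]
# `context` is then prepended to the prompt sent to the (frozen) local model.
print(context)
```

Keeping it current is just calling `add()` with new documents; the model itself never changes.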
What if perchance the track should bend?
The Trump administration has been broadly declaring anything related to DEI illegal and putting immense pressure on major organizations with DEI policies, when again, like 99% of policies, including those they've gone after, are like the above.
They literally had to turn down a government grant because they had a DEI policy. They're not lying about the liability risk they faced. And this is what the overwhelming majority (like 99%) of real-world DEI policies look like.
"Gotcha, you snot-necked weenies!" -- Post Bros. Comics