And?
That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means you think that LLMs are the only kind of AI there is, and that language models can be trained to do things like design rocket engines.
This isn't true. Transformer-based language models can be trained for specialized tasks having nothing to do with chatbots.
That's what I just said.
Here's where the summary goes wrong:
Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.
Artificial intelligence is in fact many kinds of technology. People conflate LLMs with the whole field because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.
But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes the average person can't even conceive of -- like design optimization, where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which underneath are just collections of mathematical matrices that encode likelihoods, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices containing parameters describing weighted relations between features should have a wide variety of applications. That's just math; it's just sexier to call it "AI".
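To make that concrete, here's a toy sketch in Python (every size here is a made-up placeholder, and random weights stand in for anything actually learned). Strip away the branding, and a "neural net" is a couple of matrix multiplies:

    import numpy as np

    # A "neural net" reduced to its essentials: matrices of weights.
    # All dimensions below are arbitrary placeholders, not from a real model.
    rng = np.random.default_rng(0)
    n_features, n_hidden, n_outputs = 100, 32, 4
    W1 = rng.normal(size=(n_features, n_hidden))  # weighted relations: features -> hidden
    W2 = rng.normal(size=(n_hidden, n_outputs))   # weighted relations: hidden -> outputs

    def forward(x):
        h = np.maximum(0.0, x @ W1)  # one matrix multiply plus a ReLU
        return h @ W2                # another matrix multiply

    x = rng.normal(size=n_features)  # e.g. a vector of design parameters
    print(forward(x))                # the "intelligence" is linear algebra

Whether those matrices end up sized for chat or for engine design optimization is a question of training data, not of chatbots.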
The illusion of intelligence evaporates if you use these systems for more than a few minutes.
Using AI effectively requires, ironically, advanced thinking skills and abilities. It's not going to make stupid people as smart as smart people, it's going to make smart people smarter and stupid people stupider. If you can't outthink the AI, there's no place for you.
The state laws could be unconstitutional under the Interstate Commerce Clause. However, the Feds should regulate it thoroughly: environmental, national security, etc.
There should be a national-level effort like the Manhattan Project. Companies should be working together under government oversight, toward a common goal. Maybe that will keep an AI apocalypse away for a bit, or at least protect us from it.
I know my opinion is in the minority. The future is starting to scare me more than usual.
At this stage, we're all learning (not that we ever stop). Vibe coding can be great for learning if the dev takes the time to understand what's being done. But I fear too many are taking the easy path: just writing a prompt and shipping. That's not safe for any environment, let alone production. Some time in the future it may be better; we'll have the proper guard rails for AI, the proper testing paths, and overall reviews. Right now isn't that time. If you want stable, efficient code, which you definitely do for production and kernel maintenance, AI isn't ready. It's no more ready than self-driving cars. Can they do it? Sure. Would you trust them in every situation? Probably not. That "probably not" is what gets you killed. Or panics the kernel, or crashes the DB, etc.
Linus is a smart guy. I might not agree with everything he implements, but for the Linux kernel, I can say I've never felt like he went in the wrong direction.
I'm sure this discussion will continue. It's not a once-and-done. But as newer and better coding systems come online, they'll need to be tested and verified, and eventually we may get something that passes the test. It's already miles ahead of where we were only 10 years ago. I can't predict how we'll be next year.
Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.
From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.
The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.
The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.
It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in his reasoning, sure, but his reasoning is just as good or bad as it was at the beginning.
Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation": they patch up holes in their knowledge by making up plausible-sounding facts.
I have experimented with AI assistance for certain tasks, and find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good; too fast; too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.
This is because under the hood, what AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response by results from Internet searches. The problem is, the context takes up lots of expensive, high-performance video RAM, and every user only gets so much of it. When you run out of space, the older stuff drops out of the context. This is why credibility drops the longer a session runs. You start with a nice empty context, bring in some Internet search results, run them through the model, and it all makes sense. But once you start throwing out parts of the context, it turns into inconsistent mush.
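If you want to see why the mush sets in, here's a toy illustration in Python (the capacity number is invented, and a real model evicts tokens from GPU memory, not strings from a deque):

    from collections import deque

    # Hypothetical budget: pretend the VRAM set aside for your session
    # holds eight chunks of conversation.
    CONTEXT_CAPACITY = 8
    context = deque(maxlen=CONTEXT_CAPACITY)  # oldest entries fall off silently

    for turn in range(1, 13):            # a session that outgrows its budget
        context.append(f"chunk-{turn}")  # each exchange appends more context

    print(list(context))
    # ['chunk-5', ..., 'chunk-12']: the start of the session is simply gone,
    # so later answers are generated as if the early conversation never happened.

Real deployments evict more cleverly than this (summarizing old turns, for instance), but the memory budget is real, and something has to give.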
It probably makes more sense given their scale for them to have their own power generation -- solar, wind, and battery storage, maybe gas turbines for extended periods of low renewable availability.
In fact, you could take it further. You could designate town-sized areas for multiple companies' data centers, served by an electricity source (possibly nuclear) and by water reclamation and recycling centers, providing zero carbon emissions and minimal environmental impact. It would be served by a compact, robust, and completely separate electrical grid of its own, reducing costs for the data centers and isolating residential customers from the impact of their electrical use. It would also economically concentrate data centers for the businesses providing services they need, reducing costs and increasing profits all around.
Telescreen monitoring would have required a crazy amount of manpower.
Probably the closest real-world analog was the East German Stasi, whose informers, counting part-timers, may have amounted to nearly 1 in 6 of the population:
The ratio for the Stasi was one secret policeman per 166 East Germans. When the regular informers are added, these ratios become much higher: In the Stasi's case, there would have been at least one spy watching every 66 citizens! When one adds in the estimated numbers of part-time snoops, the result is nothing short of monstrous: one informer per 6.5 citizens. It would not have been unreasonable to assume that at least one Stasi informer was present in any party of ten or twelve dinner guests. Like a giant octopus, the Stasi's tentacles probed every aspect of life.
— John O. Koehler, German-born American journalist, quoted from Wikipedia
In the USA, is it common to have self-service tills at supermarkets that accept coins?
If it accepts cash, it should accept both coins and bills. Any change I manage to accumulate usually gets fed into the coin slot at a self-checkout before I swipe a card to provide the rest of the payment. It's better than handing it off to a Coinstar machine, as those skim off a percentage of what you feed them.
Let's work with the argument's load-bearing phrase, "exploration is an intrinsic part of the human spirit."
There are so many things to criticise in that single statement of bias. Suffice it to say there's a good case to be made that "provincial domesticity and tribalism are prevalent inherited traits in humans", without emotional appeals to a "spirit" not in evidence.
With a president who solves an existential threat by finding the best expert he can, and then using his own formidable political skills and charisma to run interference for that expert.
Seriously, Camacho was a meathead, but I'd vote for him.
The system will be down for 10 days for preventive maintenance.