Comment Re:Do not be a follower (Score 2) 27

The post at the top of the thread was about "AI". The following posts were about AI. Don't be blinded by the current hype into thinking that's the whole picture. Just because other developments get less press doesn't mean they aren't happening and aren't important. In the field of biochem, most AI is *related* to LLMs, but is significantly different.

Comment Re:Does anyone care? (Score 1) 27

Yes. I recently looked into buying an Apple, after an appointment with my ophthalmologist. I wanted a computer that would run reasonably well under voice control, as the ads suggested was possible. I decided not to buy, or at least to wait another year.

Now I have no idea how many people are affected this way, but that is a sign that the deficiencies have caused at least *some* damage to Apple.

Comment Re:Chickens come home to roost (Score 1) 83

Sorry, but this isn't evidence about the quality of the graduates. If they're just out of school, you can't tell whether they're good or bad. I trained an astrologer to be a good programmer in less than a year. (Well, he soon moved into management, but he was capable.) HR hired a different astrologer who was already skilled at C, more skilled in the techniques than I was. But he was in love with macros and used them everywhere, so nobody else could understand his code. It was the second one who had the high SAT score.
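
For illustration, here's a made-up sketch of that macro habit (the macro names are invented, not his actual code). Each macro is trivial on its own, but together they turn ordinary C control flow into a private dialect:

    #include <stdio.h>

    /* Hypothetical macros in the style described above: each one
       just renames a basic C construct. */
    #define FOREVER   for (;;)
    #define UNLESS(c) if (!(c))
    #define DONE      break
    #define EMIT(x)   printf("%d\n", (x))

    int main(void)
    {
        int i = 0;
        FOREVER {                 /* expands to: for (;;) { */
            UNLESS (i < 5) DONE;  /* expands to: if (!(i < 5)) break; */
            EMIT(i);              /* expands to: printf("%d\n", (i)); */
            i++;
        }
        return 0;
    }

It compiles and prints 0 through 4, but a maintainer has to expand every macro in their head before the loop reads as anything at all.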

Comment Re:Irony at its finest (Score 1) 64

You're confusing "intelligence" with "goals". That's like confusing theorems with axioms. You can challenge the theorems. Say, for instance, that the proof is invalid. You can't challenge axioms (within the system). Likewise, you can't challenge the goals of the AI; you can only challenge the plans it makes to achieve them.

Comment Re:The three godfathers (Score 3, Insightful) 64

A sufficiently superhuman AI would, itself, be a risk, because it would work to achieve whatever it was designed to achieve, and not worry about any costs it wasn't designed to worry about.

Once you approach human intelligence (even as closely as the currently freely available LLMs do), you really need to start worrying about the goals the AI is designed to try to achieve.

Comment Re:More like "Superstupidity" (Score 1, Interesting) 64

Specialized superintelligence is quite plausible. We don't have it yet, but we're close. Few people can do protein-folding predictions as well as a specialized AI. Just about nobody can out-compute a calculator. Etc.

Your general point is quite valid, but I think you don't properly understand it. IIUC, we've got the basis for an AGI, but it needs LOTS of development. And LLMs are only one of the pieces needed, so it's not surprising that they have lots of failure modes. And once you get a real AGI, you've got the basis for super-human intelligence. So aiming directly for "SuperIntelligence!" is the wrong approach. (But if you're after a headline, proclaiming it is a good approach.)

Comment Re: So we all know the guy is selling snake oil (Score 2) 68

I'm all in favor of space travel, but that's not going to solve the social problems on earth, and we don't yet have the ability to run a small self-sufficient stable society in an off-earth environment.

I do support space habitats, but I tend to think of that as a "next century" (or after the singularity) kind of thing.

What a large war does is kill off a large proportion of the most aggressive young males. It's one of the traditional ways the current crop of alpha-male primates keeps control.
