Comment Re:So something I don't think anyone is asking (Score 1) 40

Your model of an AI is wrong. EVERYTHING an AI says is a "hallucination"; it's just that a lot of those hallucinations match reality. Even "Stochastic Parrot" is closer to correct than "An AI just regurgitates what it finds in its data set".

(P.S.: Most of your memories are wrong. They're also "hallucinations" that sort of "match reality". One test of this is to list everything that's in a room you haven't been in for a while (say at least 10 minutes) and where in the room it was located. Then go to the room and notice all the things you forgot, or added, or misplaced. No excuses allowed like "but that wasn't important".)

Comment Re:Optimistic about layoffs ending (Score 1) 93

You're talking about current AI. There's scant reason to believe it's going to stop improving. AI molecule folding just keeps getting better. Larger and larger systems are being "understood" correctly (i.e., the predictions about how they will act are borne out). In many areas AI is already better than "experts in the field".

You *could* be right, that it won't get better at your job, but you tempt me to call you "John Henry" ("John Henry drove 16 feet, and the steam drill only drove 9" and "he laid down his hammer and he died").

Comment Re:Well... no (Score 4, Insightful) 134

Well, advanced lithography equipment isn't easy to make, so it's not surprising they're having problems. If they solve those problems it will be a permanent benefit to them.

Also, there's no particular reason to believe that "the AI bubble" will pop. Certainly parts of it will, but other parts are already solid successes. The rest is "work in progress", which, of course, may fail...but the odds are that large portions will succeed. (Much of the stuff that's "not ready for prime time" is just being pushed out too quickly, before the bugs have been squashed.)

Comment Re:Correlation is not causality... again ffs (Score 1) 188

When talking about people and environmental effects, the general rule is "your model is too simple". Probably both have a common cause AND there is some direct effect. And also something the study didn't consider (though nobody knows what... perhaps air pollution or micro-plastics).

Comment Re:What about not eating it daily? (Score 2) 188

In a literal sense you are correct...and even understating the case. In common usage, though, "processed food" refers to food that's had a lot more processing than that. The problem is that the term is so vague that it has no precise meaning. Cooking a steak is processing food. So is cutting it off the steer. Even draining the blood before you cut it off is processing. So is washing a carrot.

It's a term that has no precise meaning except as derivable from context...and that limits the precision unless the context is quite explicit.

Comment Re:What about not eating it daily? (Score 1) 188

My guess was that the effect was small enough that at one a day it was hard to disentangle from noise, so they didn't even look at any smaller amount.

OTOH, the headline is clearly not supported by the study. They only tested some kinds of processed meat. If their causal theory is correct, they may not have needed to test a wider range, but it might be wrong.

Food science is complex and difficult. You should always be skeptical of popularizations of it. They always oversimplify. (Actually, that doesn't just apply to "food science", but rather to all science reporting, and probably to all reporting.)

Comment Re:Note study is only about *processed* meat (Score 1) 188

It's not really clear to me what "processed meat" means. (Well, perhaps the article explains, but I'm not that interested.) It clearly means hot dogs (all varieties?), and probably all lunch meats. (It seems to be looking at "sugar added" meat-food products.) So it likely includes bacon. It's not clear to what extent they were looking at nitrite-added processed meat, like ham. But I wouldn't think that hamburger purchased raw would be included.

Comment Don't exactly believe it (Score -1) 52

Hurricanes often hit Florida, so blaming hurricane damage on "climate change" is clearly a gross oversimplification. It probably made the hurricanes worse, but it's not a binary switch. Similarly for a lot of those things. And there are probably some places where climate changes improved things. (A lot fewer, I'll admit.)

This piece strikes me as an oversimplification, probably for political reasons. Yes, a lot of disasters were made worse by climate change. I suspect that pine beetles have continued to spread north, as winter die-offs are curtailed. Etc. But most of the changes are incremental. And much of that "investment" needed to be done anyway.

Comment Re:should be 'CEO doesn't understand tech, is scar (Score 1) 93

Whether it's a "work in progress" or "useful tool" depends on which AI you're talking about, and what task you're considering. Many of them are performing tasks that used to require highly trained experts. Others are doing things where a high error rate is a reasonable tradeoff for a "cheap and fast turn-around". But it's definitely true that for lots of tasks even the best are, at best, a "work in progress". So don't use it for those jobs.

OTOH, figuring out which jobs it can or can't do is an "at this point in time for this system" kind of thing. It's probably best to be relatively conservative. But don't depend on "today's results" being good next month.

Comment Re:should be 'CEO doesn't understand tech, is scar (Score 1) 93

Most of those things are either experimental, or only useful in a highly structured environment.

AI is coming, but the current publicly available crop (outside specialty tasks) makes lots of mistakes. So it's only useful in places where those mistakes can be tolerated. Maybe that will change 6 months from now. I rather trust Derek Lowe's analysis of where biochemical AI is currently...and his analysis is "it needs better data!".

One shouldn't blindly trust news stories. They are always slanted. Sometimes you can figure out the slant, but even so that markedly increases the size of the error bars.

OTOH, AI *is* changing rapidly. I don't think a linear model is valid, except as a "lower bound". Some folks have pointed to work that China has claimed as "building the technology leading to a fast takeoff". Naturally details aren't available, only general statements. "Distributed training over a large dataset" and "running on an assembly of heterogeneous computers" can mean all sorts of things, but it MIGHT be something impressive (i.e. super-exponential). Or it might not. Most US companies are being relatively close-mouthed about their technologies, and usually only talking (at least publicly) about their capitalization.
