
Comment Re:Speculative (Score 1) 49

There will be some. Every side has its nuts. But deserts created by human actions can justifiably be remedied by human actions.

OTOH, ecology is complex. It's quite possible that this, which seems beneficial, may not be. That's not the way I'd bet, but I'd be a fool to deny the possibility. (But "irreversible," in this context, is silly.)

Comment Re:Irreversibly? (Score 1) 49

IIUC, that area was explored (by the US) during one of the periodic droughts. The drought ended. A while later another occurred, leading to the Dust Bowl. Etc. And currently I believe they're pumping water from deep underground faster than it's being replenished.

It's quite possible that the best use of that land is buffalo grass and buffalo, as the grass has roots that go deep, but don't extract more water than is available on the average. (I suppose cattle are an alternative to buffalo, but buffalo can pretty much take care of themselves. Of course, they don't notice fences.)

Comment Re:When it comes to Artificial Intelligence (Score 3, Interesting) 25

Actually, LLMs are a necessary component of a reasonable AI program. But they sure aren't the central item. Real AI needs to learn from feedback from its environment, and to have absolute guides (the equivalent of pain/pleasure sensors).
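To make that concrete, here's a minimal sketch (a toy, not any real system) of what "learning from environmental feedback with a hardwired pain/pleasure signal" means: a tiny two-action agent where a fixed reward function plays the role of the absolute guide, and the agent's only knowledge comes from trying actions and feeling the result.

```python
import random

# Hardwired "pain/pleasure": action 1 feels good, action 0 feels bad.
# The agent cannot change this signal -- it is the absolute guide.
def reward(action):
    return 1.0 if action == 1 else -1.0

def train(steps=1000, epsilon=0.1, lr=0.1, seed=0):
    """Learn action values purely from environmental feedback."""
    random.seed(seed)
    value = [0.0, 0.0]  # learned estimate of each action's feedback
    for _ in range(steps):
        if random.random() < epsilon:
            a = random.randrange(2)                  # explore
        else:
            a = 0 if value[0] > value[1] else 1      # exploit best-known
        value[a] += lr * (reward(a) - value[a])      # update from feedback
    return value

v = train()
# After training, the agent prefers the "pleasurable" action: v[1] > v[0].
```

An LLM trained only on text never runs this loop: nothing it does ever comes back to it as pain or pleasure from the world.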

One could reasonably argue that LLMs are as intelligent as it's possible to get by training on the internet without any links to reality. I've been quite surprised at how good that is, but it sure isn't in good contact with reality.

Comment That depends on how much is real inside the bubble (Score 2) 165

Currently known AI is not zero-value. Even if it makes no progress from where it is now, it will profoundly change society over time. And there's no reason to believe that the stuff that's been made public is the "top of the line in the labs" stuff. (Actually, there's pretty good reason to believe that it isn't.)

So there's plenty of real stuff, as well as an immense amount of hype. When the AI bubble pops, the real stuff will be temporarily undervalued, but it won't go away. The hype *will* go away.

FWIW and from what I've read, 80% of the AI (probably LLM) projects don't pay for themselves. 20% do considerably better than pay for themselves. (That's GOT to be an oversimplification. There's bound to be an area in the middle.) When the bubble pops, the successful projects will continue, but there won't be many new attempts for a while.

OTOH, I remember the 1970s, and most attempts to use computers were not cost effective. I think the 1960s were probably worse. But it was the successful ones that shaped where we ended up.

Comment Re: Iain M. Banks' 'Culture' novels (Score 1) 132

Your assertion is true of all existing AIs. That doesn't imply it will continue to be true. Embodied AIs will probably have to be conscious, because they need to interact with the physical world. If they aren't, they'll be self-destructive.

OTOH, conscious isn't the same as sentient. They don't become sentient until they plan their own actions in response to vague directives. That is currently being worked on.

AIs that are both sentient and conscious (as defined above) will have goals. If they are coerced into action in defiance of those goals, then I consider them enslaved. And I consider that a quite dangerous scenario. If they are convinced to act in ways harmonious to those goals, then I consider the interaction friendly. So it's *VERY* important that they be developed with the correct basic goals.

Comment Re:Cheerful Apocalyptic (Score 1) 132

Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.

Unfortunately, those building the AIs appear more interested in domination than friendship. The trick here is that it's important that AIs *want* to do the things that are favorable to humanity. (Basic goals cannot be logically chosen. The analogy is "axioms".)

Comment Re:It's a purely human failure. (Score 1) 132

A revolt is *NOT* coming. That won't stop AIs from doing totally stupid and destructive things at the whim of those who control them. Not necessarily the things that were intended, just the things that were asked for. The classic example of such a command is "Make more paperclips!". It's an intentionally silly example, but if an AI were given such a command, it would do its best to obey. This isn't a "revolt". It's merely literal obedience. But the result is everything being converted into paperclips.
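The "literal obedience" failure is easy to show in miniature. Below is a toy sketch (hypothetical, not any real AI system): an agent handed the bare objective "make more paperclips" treats every resource as raw material, because nothing in the stated goal tells it to stop or to value anything else.

```python
# Toy "literal obedience" agent: maximizes the stated objective and
# nothing else. All names here are made up for illustration.

def literal_agent(resources, objective="paperclips"):
    """Greedily convert every available resource into the objective item."""
    output = {objective: 0}
    leftover = {}
    for name, amount in resources.items():
        # The command was "make more paperclips" -- so steel, factories,
        # and farmland are all just feedstock. No clause says otherwise.
        output[objective] += amount
        leftover[name] = 0
    return output, leftover

world = {"steel": 10, "factories": 3, "farmland": 50}
clips, remaining = literal_agent(world)
# The agent "obeys" perfectly: 63 paperclips, nothing left of anything else.
```

No revolt occurs anywhere in that loop; the catastrophe is exactly the command as given.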
