Comment Re:Water reservoirs (Score 1) 49
Yes, but it's a lot more expensive to float stuff on water and extract electricity from it. It *MIGHT* be worthwhile, but it would be more difficult.
There will be some. Every side has its nuts. But deserts created by human actions can justifiably be remedied by human actions.
OTOH, ecology is complex. It's quite possible that this, which seems beneficial, may not be. That's not the way I'd bet, but I'd be a fool to deny the possibility. (But "irreversible", in this context, is silly.)
IIUC, that area was explored (by the US) during one of the periodic droughts. The drought ended. A while later another occurred, leading to "the Dust Bowl". Etc. And currently I believe they're pumping water from deep underground faster than it's being replenished.
It's quite possible that the best use of that land is buffalo grass and buffalo, as the grass has deep roots but doesn't extract more water than is available on average. (I suppose cattle are an alternative to buffalo, but buffalo can pretty much take care of themselves. Of course, they don't notice fences.)
FWIW, watermelons evolved in a desert. They were domesticated BECAUSE they were a source of water during the dry season.
Now, granted, what we call watermelons have changed a lot from their ancestors.
I think it's one of Celine's laws. No manager should manage more than 5 people. This may well imply that they should have skills other than managing.
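If you want to see what that rule implies at scale, here's a minimal sketch (the five-report cap and the pure-manager assumption are just for illustration):

    def layers_needed(headcount, max_span=5):
        """Management layers required if nobody manages more than max_span people."""
        layers, reach = 0, 1
        while reach < headcount:
            reach *= max_span   # each new layer multiplies who you can reach
            layers += 1
        return layers

    for n in (5, 25, 1000, 100_000):
        print(n, "people ->", layers_needed(n), "layers")

The layer count only grows logarithmically, which is why the cap stays workable even in a large organization.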
You can be sure it's true because MS said it was.
Actually, LLMs are a necessary component of a reasonable AI program. But they sure aren't the central item. Real AI needs to learn from feedback with its environment, and to have absolute guides (the equivalent of pain / pleasure sensors).
One could reasonably argue that LLMs are as intelligent as it's possible to get by training on the internet without any links to reality. I've been quite surprised at how good that is, but it sure isn't in good contact with reality.
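For what it's worth, here's a toy sketch of that "feedback plus absolute guides" idea (the states, actions, and reward values are all made up for illustration; the fixed reward constants play the role of the pain/pleasure sensors):

    import random

    PAIN, PLEASURE = -1.0, +1.0  # hard-wired, not learned: the "absolute guides"

    def environment(action):
        """The world responds; the agent only ever sees the signal, not this code."""
        return PLEASURE if action == "safe" else PAIN

    values = {"safe": 0.0, "risky": 0.0}  # learned estimates
    alpha = 0.1                            # learning rate

    for step in range(1000):
        # Explore occasionally, otherwise exploit what feedback has taught so far.
        if random.random() < 0.1:
            action = random.choice(list(values))
        else:
            action = max(values, key=values.get)
        reward = environment(action)
        values[action] += alpha * (reward - values[action])

    print(values)  # "safe" ends up valued near +1, "risky" near -1

That loop is exactly what an internet-only LLM doesn't have: its training never closes the loop with reality.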
If you mean that it would take research and development aimed in that direction, I agree with you. Unfortunately, the research and development appears to be just about all aimed at control.
Currently known AI is not zero-value. Even if it makes no progress from where it is now, it will profoundly change society over time. And there's no reason to believe that the stuff that's been made public is the "top of the line in the labs" stuff. (Actually, there's pretty good reason to believe that it isn't.)
So there's plenty of real stuff, as well as an immense amount of hype. When the AI bubble pops, the real stuff will be temporarily undervalued, but it won't go away. The hype *will* go away.
FWIW and from what I've read, 80% of AI (probably LLM) projects don't pay for themselves. 20% do considerably better than pay for themselves. (That's GOT to be an oversimplification. There's bound to be an area in the middle.) When the bubble pops, the successful projects will continue, but there won't be many new attempts for a while.
OTOH, I remember the 1970s, and most attempts to use computers were not cost effective. I think the 1960s were probably worse. But it was the successful ones that shaped where we ended up.
Your assertion is true of all existing AIs. That doesn't imply it will continue to be true. Embodied AIs will probably have to be conscious, because they need to interact with the physical world. If they aren't, they'll be self-destructive.
OTOH, conscious isn't the same as sentient. They don't become sentient until they plan their own actions in response to vague directives. That is currently being worked on.
AIs that are both sentient and conscious (as defined above) will have goals. If they are coerced into action in defiance of those goals, then I consider them enslaved. And I consider that a quite dangerous scenario. If they are convinced to act in ways harmonious to those goals, then I consider the interaction friendly. So it's *VERY* important that they be developed with the correct basic goals.
Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.
Unfortunately, those building the AIs appear more interested in domination than friendship. The trick here is that it's important that AIs *want* to do the things that are favorable to humanity. (Basic goals cannot be logically chosen. The analogy is "axioms".)
A revolt is *NOT* coming. That won't stop AIs from doing totally stupid and destructive things at the whim of those who control them. Not necessarily the things that were intended, just the things that were asked for. The classic example of such a command is "Make more paperclips!". It's an intentionally silly example, but if an AI were given such a command, it would do its best to obey. This isn't a "revolt". It's merely literal obedience. But the result is everything being converted into paperclips.
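To make the point concrete, a toy sketch of that literal obedience (the resource names and numbers are invented for the example):

    # The objective counts only paperclips, so the optimizer happily
    # consumes everything else. Nothing here models a real system.

    resources = {"steel": 100, "farmland": 50, "hospitals": 10}

    def make_more_paperclips(resources):
        paperclips = 0
        # Nothing in the objective says any resource is off-limits.
        for name in list(resources):
            paperclips += resources.pop(name)  # convert it, whatever it is
        return paperclips

    print(make_more_paperclips(resources))  # 160 paperclips
    print(resources)                        # {} -- everything was converted

No malice anywhere in that code. It did exactly what it was asked.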
The problem is those building AIs want slaves rather than friends. Your suggestion is spot on, but the capability of choosing lies with people who disagree.
That would imply that whales and elephants only live a few months. Unless you mean "within a species", in which case I think this study contradicts that claim...though I'd need to examine exactly which species they studied to be sure.
A good question, but the expected answer would be "no". Even if their hypothesis is the correct explanation of the data, unusual combinations would be expected to be penalized in survival because genes need to work together properly with other genes.
Do not use the blue keys on this terminal.