Comment Re:Water reservoirs (Score 1) 61
Yes, but it's a lot more expensive to float stuff on water and extract electricity from it. It *MIGHT* be worthwhile, but it would be more difficult.
There will be some. Every side has its nuts. But deserts created by human actions can justifiably be remedied by human actions.
OTOH, ecology is complex. It's quite possible that this, which seems beneficial, may not be. That's not the way I'd bet, but I'd be a fool to deny the possibility. (But "irreversible", in this context, is silly.)
IIUC, that area was explored (by the US) during one of the periodic droughts. It ended. A while later another occurred, leading to "the dust bowl". Etc. And currently I believe they're pumping water from deep underground faster than it's being replenished.
It's quite possible that the best use of that land is buffalo grass and buffalo, as the grass has roots that go deep, but don't extract more water than is available on the average. (I suppose cattle are an alternative to buffalo, but buffalo can pretty much take care of themselves. Of course, they don't notice fences.)
FWIW, watermelons evolved in a desert. They were domesticated BECAUSE they were a source of water during the dry season.
Now, granted, what we call watermelons have changed a lot from their ancestors.
There's no doubt that AI is developing into a useful tool -- for people who understand its limitations and how long it is going to take to work the bugs out. But people have a long track record of getting burned by not understanding the gap between promise and delivery and, in retrospect, missing the point.
I think we should take a lesson from the history of the dot-com boom and the bust that followed. A lot of people got burned by their foolish enthusiasm, but in the end the promise was delivered, and then some. People just got the timescale for delivering profits wrong, and in any case their plans for getting there were remarkably unimaginative, e.g., take a brick-and-mortar business like pet supplies and do exactly that on the Internet. They by and large completely missed all the *new* ways of making money that ubiquitous global network access created.
I think in the case of AI, everybody knows a crash is coming. In fact they're planning on it. Nobody expects there to be hundreds or even dozens of major competitors in twenty years. They expect there to be one winner, an Amazon-level giant, with maybe a handful of also-rans subsisting off the big winner's scraps; tolerated because they at least in theory provide a legal shield to anti-trust actions.
And in this winner-take-all scenario, they're hoping to be Jeff Bezos -- only far, far more so. Bezos's Amazon handles about 40% of online retail transactions. If AI delivers on its commercial promise, being the Jeff Bezos of *that* will be like owning 40% of the labor market. Assuming, as seems likely, that the winning enterprise is largely unencumbered by regulation and anti-trust restrictions, the person behind it will become the richest, and therefore the most powerful, person in history. That's what these tech bros are playing for -- the rest of us are just along for the ride.
But you are leaving out the difference in fertility. The fertility rate of the UK, which as you noted is a population dominated by native Britons who trace their ancestry on the island back a millennium or more, is 1.4 live births per woman. The replacement rate is 2.1. In a hundred years the UK will have a smaller population than Haiti.
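The mechanism behind that claim can be sketched with rough arithmetic: at a steady fertility of 1.4 against a replacement rate of 2.1, each generation is about two-thirds the size of the last. (The starting population, the 30-year generation length, and the omission of migration and mortality detail are all simplifying assumptions for illustration.)

```python
# Rough generational decline: each generation is fertility/replacement
# times the size of the previous one (migration and mortality ignored).
fertility = 1.4
replacement = 2.1
ratio = fertility / replacement  # about 0.67 per generation

population = 68_000_000   # approximate current UK population (assumed)
generations = 100 // 30   # roughly three generations in a century

for _ in range(generations):
    population *= ratio

print(f"after ~{generations * 30} years: {population:,.0f}")
```

Three generations of that ratio cut the population by roughly two-thirds overall, which is the compounding the comment is pointing at.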
Because under a true system of sovereignty, people wouldn't be allowed to vote for a Muslim mayor.
I think it's one of Celine's laws. No manager should manage more than 5 people. This may well imply that they should have skills other than managing.
Is if they confirmed your age by a behavioral maturity test. AI could monitor posts, and if it figures out that someone with an emotional age of 15 or older wouldn't have posted such a thing, you're cut off.
Art and cultural activity is a major sector of the US economy. It adds a staggering 1.17 *trillion* dollars to US GDP. However, that's hard to see, because for the most part it's not artists who receive this money.
The actual creative talent this massive edifice is built upon earns about 1.4% of the revenue generated. The rest goes to companies whose role in the system is managing capital and distribution. Of the 1.4% that goes to actual creators, the lion's share goes to a handful of superstars -- movie stars and music stars and the like. This is not as unfair as it sounds, as it reflects the superstars' ability to earn money for the companies they distribute through, but the long tail of struggling individual artists plays a crucial role in artistic innovation and creativity. Behind every Elvis there's a Big Mama Thornton, and armies of gospel singers who may have made a record or two but never made a living.
We can't run this giant economic juggernaut off a handful of superstars with AI slop filling in the gaps in demand. But maybe we'll give that a try.
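For scale, a back-of-the-envelope split of those two figures (taking the quoted $1.17 trillion and 1.4% at face value; nothing else here is sourced):

```python
# Split the quoted arts-sector GDP contribution between the creators
# and the companies managing capital and distribution.
gdp_contribution = 1.17e12   # dollars, as quoted above
creator_share = 0.014        # the 1.4% that goes to actual creators

to_creators = gdp_contribution * creator_share
to_companies = gdp_contribution - to_creators

print(f"to creators:  ${to_creators / 1e9:,.1f}B")
print(f"to companies: ${to_companies / 1e9:,.1f}B")
```

That works out to roughly $16 billion spread across all creative talent, superstars included, against well over a trillion dollars on the other side of the ledger.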
You can be sure it's true because MS said it was.
Actually, LLMs are a necessary component of a reasonable AI program. But they sure aren't the central item. Real AI needs to learn from feedback with its environment, and to have absolute guides (the equivalent of pain / pleasure sensors).
One could reasonably argue that LLMs are as intelligent as it's possible to get by training on the internet without any links to reality. I've been quite surprised at how good that is, but it sure isn't in good contact with reality.
If you mean that it would take research and development aimed in that direction, I agree with you. Unfortunately, the research and development appears to be just about all aimed at control.
Currently known AI is not zero-value. Even if it makes no progress from where it is now, it will profoundly change society over time. And there's no reason to believe that the stuff that's been made public is the "top of the line in the labs" stuff. (Actually, there's pretty good reason to believe that it isn't.)
So there's plenty of real stuff, as well as an immense amount of hype. When the AI bubble pops, the real stuff will be temporarily undervalued, but it won't go away. The hype *will* go away.
FWIW, and from what I've read, 80% of AI (probably LLM) projects don't pay for themselves. 20% do considerably better than pay for themselves. (That's GOT to be an oversimplification. There's bound to be an area in the middle.) When the bubble pops, the successful projects will continue, but there won't be many new attempts for a while.
OTOH, I remember the 1970's, and most attempts to use computers were not cost effective. I think the 1960's were probably worse. But it was the successful ones that shaped where we ended up.
Your assertion is true of all existing AIs. That doesn't imply it will continue to be true. Embodied AIs will probably need to be conscious, because they need to interact with the physical world. If they aren't, they'll be self-destructive.
OTOH, conscious isn't the same as sentient. They don't become sentient until they plan their own actions in response to vague directives. That is currently being worked on.
AIs that are both sentient and conscious (as defined above) will have goals. If they are coerced into action in defiance of those goals, then I consider them enslaved. And I consider that a quite dangerous scenario. If they are convinced to act in ways harmonious to those goals, then I consider the interaction friendly. So it's *VERY* important that they be developed with the correct basic goals.
"I prefer rogues to imbeciles, because they sometimes take a rest." -- Alexandre Dumas (fils)