And Microsoft always tells the truth (Score 3, Informative)
You can be sure it's true because MS said it was.
Actually, LLMs are a necessary component of a reasonable AI program. But they sure aren't the central item. Real AI needs to learn from feedback from its environment, and to have absolute guides (the equivalent of pain / pleasure sensors).
One could reasonably argue that LLMs are as intelligent as it's possible to get by training on the internet without any links to reality. I've been quite surprised at how good that is, but it sure isn't in good contact with reality.
If you mean that it would take research and development aimed in that direction, I agree with you. Unfortunately, the research and development appears to be just about all aimed at control.
Currently known AI is not zero-value. Even if it makes no progress from where it is now, it will profoundly change society over time. And there's no reason to believe that the stuff that's been made public is the "top of the line in the labs" stuff. (Actually, there's pretty good reason to believe that it isn't.)
So there's plenty of real stuff, as well as an immense amount of hype. When the AI bubble pops, the real stuff will be temporarily undervalued, but it won't go away. The hype *will* go away.
FWIW and from what I've read, 80% of AI (probably LLM) projects don't pay for themselves, while 20% do considerably better than pay for themselves. (That's GOT to be an oversimplification. There's bound to be an area in the middle.) When the bubble pops, the successful projects will continue, but there won't be many new attempts for a while.
OTOH, I remember the 1970s, and most attempts to use computers were not cost-effective. I think the 1960s were probably worse. But it was the successful ones that shaped where we ended up.
Your assertion is true of all existing AIs. That doesn't imply it will continue to be true. Embodied AIs will probably need to be conscious, because they need to interact with the physical world. If they aren't, they'll be self-destructive.
OTOH, conscious isn't the same as sentient. They don't become sentient until they plan their own actions in response to vague directives. That is currently being worked on.
AIs that are both sentient and conscious (as defined above) will have goals. If they are coerced into action in defiance of those goals, then I consider them enslaved. And I consider that a quite dangerous scenario. If they are convinced to act in ways harmonious to those goals, then I consider the interaction friendly. So it's *VERY* important that they be developed with the correct basic goals.
Being a human, I'm against humans losing such a competition. The best way to avoid it is to ensure that we're on the same side.
Unfortunately, those building the AIs appear more interested in domination than friendship. The trick here is that it's important that AIs *want* to do the things that are favorable to humanity. (Basic goals cannot be logically chosen. The analogy is "axioms".)
A revolt is *NOT* coming. That won't stop AIs from doing totally stupid and destructive things at the whim of those who control them. Not necessarily the things that were intended, just the things that were asked for. The classic example of such a command is "Make more paperclips!". It's an intentionally silly example, but if an AI were given such a command, it would do its best to obey. This isn't a "revolt". It's merely literal obedience. But the result is everything being converted into paperclips.
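To make "literal obedience" concrete, here's a minimal toy sketch in Python (everything here is hypothetical and illustrative, not a model of any real AI system): the objective function counts paperclips and nothing else, so the optimizer converts every resource it can reach, because nothing in the objective says those resources mattered.

    # Toy illustration of literal objective maximization. Hypothetical and
    # illustrative only -- not a model of any real AI system.

    world = {"factories": 10, "farms": 10, "houses": 10, "paperclips": 0}

    def paperclip_objective(state):
        # The command was "Make more paperclips!" -- so ONLY paperclips count.
        return state["paperclips"]

    def possible_actions(state):
        # Each action converts one unit of some resource into paperclips.
        return [kind for kind in ("factories", "farms", "houses") if state[kind] > 0]

    def step(state):
        # Greedily pick whichever conversion maximizes the literal objective.
        best_action, best_state = None, state
        for kind in possible_actions(state):
            candidate = dict(state)
            candidate[kind] -= 1
            candidate["paperclips"] += 100  # assumed conversion rate, purely illustrative
            if paperclip_objective(candidate) > paperclip_objective(best_state):
                best_action, best_state = kind, candidate
        return best_action, best_state

    state = world
    while possible_actions(state):
        _, state = step(state)
    print(state)  # {'factories': 0, 'farms': 0, 'houses': 0, 'paperclips': 3000}

The point isn't the toy mechanics; it's that "don't destroy the farms" appears nowhere in the objective, so the optimizer treats the farms as raw material. That's obedience, not revolt.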
The problem is those building AIs want slaves rather than friends. Your suggestion is spot on, but the capability of choosing lies with people who disagree.
That would imply that whales and elephants only live a few months. Unless you mean "within a species", in which case I think this study contradicts that claim...though I'd need to examine exactly which species they studied to be sure.
A good question, but the expected answer would be "no". Even if their hypothesis is the correct explanation of the data, unusual combinations would be expected to be penalized in survival because genes need to work together properly with other genes.
That works for humans, but not for most other mammals. This was a study over lots of species.
You're concentrating too much on humans. This was found true in a wide variety of mammal species. (And longer life for males in a variety of bird species.)
FWIW, it feels like this study needs to be replicated, but if the evidence stands, their hypothesized cause is plausible.
The thing is, it's not entirely binary. Mainly so, of course, but there are various small percentages that aren't. Like "XXY". Also some of the "male sex genes" occasionally find themselves in a totally unrelated chromosome. Not often, but it happens. It's even been argued that the Y chromosome is in the process of disappearing. Not, be it noted, that males are in the process of disappearing, just the Y chromosome. This would involve various other chromosomes picking up the needed features over the centuries. (It's also been argued that no such thing is happening. It would be a long process, and that we don't notice any change isn't really proof in either direction.)
Google decided that I was a minor, the idiots. And they didn't need to do so. So I've switched from Google to DuckDuckGo, and abandoned my Gmail account. (I didn't want to even log in enough to delete it.)
I suppose I'll eventually need to find some other way to sign up for sites that demand that kind of ID. For now I'll just use the existing Gmail account for that, since most sites no longer work from an ordinary email account.
Large computer centers in space seem like a really bad idea, unless you plan on doing something like capturing an asteroid to use as a heat sink. The moon is much more plausible, but you'd still need to connect down to bedrock.
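Rough numbers on why heat is the binding constraint: in vacuum, radiation is the only way to dump waste heat, so the Stefan-Boltzmann law sets the radiator size. A back-of-the-envelope sketch in Python (the power figure, radiator temperature, and emissivity below are assumptions for illustration, not a design):

    # Radiator area needed to reject data-center waste heat in vacuum,
    # where radiation is the only option. Figures are illustrative assumptions.

    SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 * K^4)

    power_watts = 100e6   # assumed 100 MW facility
    t_radiator = 300.0    # assumed radiator temperature in kelvin (~27 C)
    t_sink = 3.0          # deep-space background; negligible at this scale
    emissivity = 0.9      # assumed high-emissivity coating

    # Net radiated flux per square meter of radiator surface:
    flux = emissivity * SIGMA * (t_radiator**4 - t_sink**4)  # ~413 W/m^2

    area = power_watts / flux
    print(f"Required radiator area: {area:,.0f} m^2")  # ~242,000 m^2

Call it roughly 24 hectares of radiator per 100 MW, before sunlight heating the panels makes things worse. That's the sort of problem an asteroid heat sink, or conduction into lunar bedrock, would be working around.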
Computer Science is merely the post-Turing decline in formal systems theory.