Comment Re:ChatGPT is not a chess engine (Score 1)
I don't disagree. But 20% is a very, very low estimate.
Thus proving the point by example.
Most people have faith in something. Since they didn't arrive at that faith by reason, how would you expect to get them to change their mind using reason? You are really demanding they give priority to your faith in reason over their other faith.
And there I can stop reading, because you do not get it. Your simplistic and, frankly, stupid claim is that relying on rational reasoning is "faith". That is, obviously, a direct lie. Now, it is quite possible you are not smart enough to see that.
I disagree. Generative AI cannot really do "automation". Far too unreliable. But we will see. Your argument definitely has some merit.
Remember how expensive electricity from nuclear is? That will not solve things...
Also remember that most uranium comes from Kazakhstan (43%), which borders China and Russia. Not a dependency you want for something critical. Place 2 is Canada (15%), which the US has just mightily pissed off by sheer leadership stupidity. US domestic? A whopping 0.15%...
To anybody that wants to know, it is already clear that LLMs, including the "reasoning" variant, have zero reasoning abilities.
A good many humans don't either. They memorize patterns, rituals, slogans, etc. but can't think logically.
Indeed. There are a few facts from sociology. Apparently only 10-15% of all humans can fact-check, and apparently only around 20% (including the fact-checkers) can be convinced by rational argument when the question matters to them (this goes up to 30% when it does not). Unfortunately, these numbers seem to be considered so well established that there are no current publications I can find. It may also be hard to publish about this. This is from interviews with experts, personal observations, and observations from friends who also teach at the academic level. ChatGPT at least confirmed the 30% number but sadly failed to find a reference.
Anyway, that would mean only about 10-15% of the human race has active reasoning ability (can come up with rational arguments) and only about 20-30% has passive reasoning ability (can verify rational arguments). And that nicely explains some things, including why so many people mistake generative AI and in particular LLMs for something they are very much not and ascribe capabilities to them that they do not have and cannot have.
There is also the possibility of making general AI a tool that can only be used if you are, e.g., licensed for general AI use and _know_ what it is. But tools offered to the general public need to follow safety standards and cannot make claims like being a medical professional, a lawyer, a LEO, etc. If they do, they become illegal to offer to the general public.
No idea why all these morons do not see that simple fact. They seem to want their "superintelligence" so much that they have completely lost access to their own natural intelligence, pathetic as it may be.
That is a nonsense argument and you know it. It is really very simple and your belief is blinding you.
1. If an actor claims to be a licensed therapist in a movie or theater setting, which is very specific, recognizable and non-interactive, no problem.
2. If an actor claims to be a licensed therapist in an interactive setting towards somebody that thinks they are being medically treated, they go to prison.
There, that was not too hard, was it?
No, companies generally do not get criticized for not jumping on the AI bandwagon.
Companies like Apple? That is simply untrue. Where do you get your news?
That is probably the only reason Apple announced at all.
Thanks!
The main advantage of ChatGPT is that you only have to feed it electricity instead of a living wage.
With the little problem that you have to feed it so much electricity that paying that wage might well still turn out to be cheaper, even at western standards. At the moment LLMs burn money like crazy and it is unclear whether that can be fixed.
Some people so want to believe that a useful information retrieval system is a superintelligence.
The rest of us aren't surprised that an interesting search engine isn't good at chess.
That very nicely sums it up. Obviously, you have to be something like a sub-intelligence to think that LLMs are superintelligent. To be fair, something like 80% of the human race cannot fact-check for shit and may well qualify as sub-intelligence. Especially as most of these do not know about their limitations due to the Dunning-Kruger effect.
To anybody that wants to know, it is already clear that LLMs, including the "reasoning" variant, have zero reasoning abilities. All they can do is statistical prediction based on their training data. Hence any task that requires actual reasoning, like chess (because chess is subject to state-space explosion and cannot be solved by "training" alone), is completely out of reach of an LLM.
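To put a rough number on that state-space point, here is a minimal back-of-the-envelope sketch in Python. The figures are my own assumptions, not from the post: the commonly cited average of roughly 35 legal moves per position, a typical game length of about 80 plies, and a generously assumed 10^12 distinct positions seen during training.

import math

AVG_BRANCHING = 35    # commonly cited average number of legal moves per chess position (assumption)
TYPICAL_PLIES = 80    # rough length of a full game in half-moves (assumption)

# Naive game-tree size is branching ** depth; work in log10 to keep the numbers readable.
tree_exponent = TYPICAL_PLIES * math.log10(AVG_BRANCHING)   # roughly 10^124 positions

# Hypothetical, generous assumption: 10^12 distinct positions in the training data.
TRAINING_EXPONENT = 12

print(f"game tree      ~ 10^{tree_exponent:.0f} positions")
print(f"training cover ~ 10^{TRAINING_EXPONENT - tree_exponent:.0f} of the tree")

Whatever exact numbers you plug in, the gap is so many orders of magnitude that recalling memorized positions cannot substitute for actual search or reasoning.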
The only thing surprising to me is that it took so long to come up with demonstrations of this well-known fact. Of course, the usual hallucinators believe (!) that LLMs are thinking machines/God/the singularity and other such crap, but these people are simply delulu and have nothing to contribute except confusing the issue. Refer to the little pathetic fact that about 80% of the human race is "religious" and the scope of _that_ problem becomes clear. It also becomes clear why a rather non-impressive technology like LLMs is seen as more than just better search and better crap, when that is essentially all it has delivered. Not worthless, but not a revolution either, and the extreme cost of running general (!) LLMs may still kill the whole idea in practice.
For a get-rich-quick scheme based on lies? Yes, that is stalling.
They are so extremely far from any actually useful QC that they might as well be ancient Greeks talking about making a smartphone. Yes, there is progress. But is the goal in sight? No.
"Just Say No." - Nancy Reagan "No." - Ronald Reagan