Comment A fake is a fake (Score 1) 56
Unless they have an actor of comparable skills driving things, this will remain flat and boring. All generative AI can do is imitation and averages. For some things that is enough, but it will never be good.
LLM-type AI has some uses, but it is not the "God machine" many people seem to believe it to be, after ample prompting by the LLM pushers. It is just a tool. It can do some things somewhat well, but unreliably, and it needs a lot of manual oversight by actual experts. "Vibe coding" is somewhat suitable for mock-ups (which is useful), but it cannot create stable, secure, maintainable code and thereby fails basically all fundamental requirements for production code.
In the end, we will see what we saw with all other AI hypes: productivity increases in the 1-10% range for some very specific things, and productivity decreases in many other things that got pushed. Not a surprise. Only because of unfettered greed and customer stupidity did things get scaled to a completely irrational level, and hence there will be a real crash this time, with real damage. The previous AI hypes basically fizzled out quietly. Let's hope the sure-to-come next AI winter is bad enough that it makes it amply clear for at least a few decades that computers are not magic and cannot be magic, and that people should stop believing they are.
Not reliable and not durable. Initially, it worked great, but then it started to have trouble. I had the SCSI version. I eventually got an MOD drive as a replacement that never caused any issues. But sadly, consumers do not want reliable storage, so MOD was not developed further.
IMO, the Zip drive was just another badly made piece of technology that claimed to be something it was not. Quite like QIC-80, at least the consumer versions. Essentially just a money-grab.
AI would have caught it if it was told it was there or at least to go look exactly there for it! Oh, wait...
Many will still be doing software, but with an actual engineering education to help them. But Data Science? I think that stuff may be within reach of AI eventually. It is mostly statistics and data conditioning, both things AI can do. Of course, really good data scientists will still be in demand, but the mid-range ones? They may be screwed.
You are quite likely correct.
"Algorithms do reasoning"? WTF are you smoking? If you have any CS degree, please hand it in, you just proved yourself completely unworthy of it.
You can continue to push lies. But I will not get on board with that. LLMs cannot reason. We have mathematical proof of that. Not that this is in any way a surprise to people that actually understand the mechanisms they are using. Among actual experts, this is not even a debate. The only thing these people look at is why the fake is so convincing.
That is because you cannot do it yourself. Sorry, no other conclusion is possible from what you post.
Let me break it down a bit: There is non-general intelligence. You can fake that with tables and other preconfiguration mechanisms, because the domain and the set of conclusions are finite and known in advance.
On the other side, there is general intelligence. That one cannot be done with any precomputation. It requires the elusive quality of "insight", and that is why LLMs (provably) cannot do it.
In essence, what an LLM (and sadly, many people) does is look up some precomputed "reasoning" and replay it. Many people do that without any fact-checking at all, and what they arrive at is called an "illusion of understanding". Also note that only about 10-15% of the human race are independent thinkers (i.e. able to fact-check); the rest do not actually have or use general intelligence. You seem to be in the 85-90%.
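A minimal sketch of that kind of faked non-general intelligence (the table entries and function name here are my own hypothetical examples): over a finite, known domain, a plain lookup table passes for reasoning, because every "conclusion" was precomputed. Step outside the table and the fake collapses, with no insight to fall back on.

```python
# Hypothetical illustration: "reasoning" over a finite, known domain,
# faked with a precomputed lookup table. No insight involved -- replay only.
CANNED_ANSWERS = {
    "what is 2+2": "4",
    "capital of france": "Paris",
}

def fake_reasoner(question: str) -> str:
    """Replays a precomputed answer; breaks down outside the known domain."""
    key = question.strip().lower().rstrip("?")
    return CANNED_ANSWERS.get(key, "no precomputed answer -- the fake breaks down")
```

Within the table, `fake_reasoner("What is 2+2?")` looks exactly like understanding; any question not anticipated in advance exposes it.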
True. But they cannot. A typical reasoning chain for anything real is somewhere between 20 and 100 elementary steps, sometimes longer. The fake needs to be very good to get through that.
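A quick back-of-the-envelope sketch of why chain length is brutal, under my own simplifying assumptions (each elementary step is imitated correctly with probability p, and step failures are independent): the whole chain then survives only with probability p to the power n.

```python
# Illustrative arithmetic only, with assumed numbers: per-step success
# probability p, n independent elementary steps.
def chain_success(p: float, n: int) -> float:
    """Probability that all n steps of the chain come out right."""
    return p ** n

# Even a 95%-per-step fake collapses over a realistic chain:
# chain_success(0.95, 20) is roughly 0.36; chain_success(0.95, 100) is under 0.01.
```

Independence is of course a simplification, but it shows the shape of the problem: per-step accuracy has to be extremely close to 1 before 20 to 100 steps in a row become survivable.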
At this point, you are just digging yourself deeper. Recognize you are following a cult. If you do, you still have a pretty good chance to get out.
Oh, and LLMs _cannot_ do math. There are hard proofs of that. Claiming anything else is a direct lie at this time.
The LLM produced that answer because it was trained to.
The problem is it does not do so reliably. Tiny details may throw it off. It has no understanding of context. It has no understanding of hard facts that cannot be ignored but must be taken as truth.
I am really bad at being a true believer. I have this drastic impairment that I can see reality and recognize facts.
Incidentally, that you got moderated down for that spot-on comment just shows that the AI fans are essentially a cult by now. "Cognitive surrender" all the way. Not that they had much in the way of cognitive abilities to begin with.
The number of vulnerabilities in some software is irrelevant. What matters is attacker effort to find and exploit them. Some actual insight required. You do not have that.
Context matters. Dumb people (like you) have severe trouble recognizing context.
Money cannot buy love, nor even friendship.