Comment Re: Its not logic, or reasoning (Score 1) 64

LLMs do not "think" in any meaningful sense of the word. There is no understanding. There is no reasoning. We know this because there is no possible mechanism by which anything like reasoning could happen. (No magical 'emergent' properties here, despite all of the wishful thinking.) There is simply no possible way for any internal deliberation to happen or for some internal state to persist between tokens. The very idea is absurd on its face. The only thing these models can do is generate a set of next-token probabilities. If you believe otherwise, you're going to need to show your work.
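To make that concrete, here's a minimal sketch of a decoding loop (Python, with a toy stand-in for the model; the names are illustrative, not any real library's API). The interface is the whole story: prefix in, one probability distribution out, pick a token, append, repeat.

    import numpy as np

    VOCAB = 8  # toy vocabulary

    def toy_model(tokens):
        # Stand-in for a trained LLM: a fixed function from a token prefix
        # to next-token probabilities. A real model is vastly larger, but the
        # interface is exactly this.
        rng = np.random.default_rng(hash(tuple(tokens)) % 2**32)
        logits = rng.normal(size=VOCAB)
        return np.exp(logits) / np.exp(logits).sum()

    def generate(prompt_tokens, n_new, seed=0):
        rng = np.random.default_rng(seed)
        tokens = list(prompt_tokens)
        for _ in range(n_new):
            probs = toy_model(tokens)                       # the model's entire contribution
            tokens.append(int(rng.choice(VOCAB, p=probs)))  # stochastic pick; nothing else happens
        return tokens                                       # the only carried "state" is the token list

    print(generate([1, 2, 3], 5))

Note there is nowhere for deliberation or memory to live: the loop carries the token list forward and nothing else.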

The Turing test is interesting in itself. The idea that in the end, if it fools a human then it can be considered to think.

Do you think the Eliza bot "can be considered to think"? It has fooled many humans, after all. Don't be stupid. Turing tests do not show what you seem to think they do. Even Turing knew that!

Comment Re:Its not logic, or reasoning (Score 1) 64

The evidence contradicts your claims

"Evidence"? LOL! What "evidence"? Do you also think that your Eliza chat logs are "evidence" that the program understands you and cares about your problems? Get real. Nothing I wrote in my post is even remotely controversial to anyone with even a very basic understanding of LLMs.

Here's a clue for you: if you're not doing math, whatever you're reading is essentially a comic book. Just mindless entertainment to tickle the imagination. Pop sci articles, like the ones you seem to think are "evidence", are generally superficial, sensational, and wrong.

What a joke.

Comment Re:Its not logic, or reasoning (Score 2) 64

It's an LLM. It doesn't "think", or "formulate strategy".

Correct.

It optimizes a probability tree based on the goals it is given

Nonsense. They do not and cannot do this.

They do not operate on facts, concepts, goals, or any other higher-level construct. These things operate strictly on statistical relationships between tokens. That is all they do, because that is all they can do. We know they do not plan because there is no mechanism by which they could plan. Even if they magically could form plans, their basic design prevents them from retaining them beyond the current token. Remember that the model only generates probabilities for the next token; the actual selection is stochastic. The model itself, in contrast, is completely deterministic. It does not change as it's being used, and no internal state is retained between tokens (save some optimizations, but those don't affect the output). It will always produce the same set of next-token probabilities for a given input.
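The determinism point is easy to demonstrate with the same kind of toy setup (again, illustrative names only, not a real framework): the distribution for a given prefix never changes; all the apparent variety comes from the sampling step.

    import numpy as np

    def probs_for(prefix):
        # Frozen stand-in model: the same prefix always yields the same distribution,
        # just like a trained LLM whose weights do not change between calls.
        rng = np.random.default_rng(hash(tuple(prefix)) % 2**32)
        p = rng.random(8)
        return p / p.sum()

    prefix = [5, 1, 4]
    print(np.allclose(probs_for(prefix), probs_for(prefix)))  # True: the model part is deterministic
    sampler = np.random.default_rng()                         # the only nondeterminism lives here
    print(int(sampler.choice(8, p=probs_for(prefix))))        # may differ from run to run
    print(int(np.argmax(probs_for(prefix))))                  # greedy decoding: same token every time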

For the same reason, it cannot "understand" anything, or care about, or contemplate anything about the end (or continuation) of its own existence.

Correct. These are, after all, no more than simple, if very large, functions. No one in their right mind worries about, for example, an Excel spreadsheet contemplating its own existence. This is no different.

All the "guardrails" can honestly do, is try to make unethical, dishonest and harmful behavior statistically unappealing in all cases - which would be incredibly difficult with a well curated training set - and I honestly do not believe that any major model can claim to have one of those.

It's a fool's errand. All these do is generate text, one token at a time. Even though it appears that they can follow instructions, they can't actually do that in any meaningful way. They're still just producing output consistent with the training data. You can argue that this doesn't matter as long as the output is consistent enough, but I think it's important to understand the real limitations. People already think they're magic brains, and that can lead to these things being used in dangerous ways.

Comment Re:As a shareholder.... (Score 0) 14

"Stockholm syndrome is a psychological response where hostages or abuse victims develop positive feelings towards their captors or abusers. This phenomenon, though not an official diagnosis, involves victims bonding with their captors, sometimes to the point of sharing their goals and even resenting those trying to help them. It's often seen as a coping mechanism to deal with the trauma of captivity or abuse."

Comment Re:Same could be said for humans (Score 1) 70

This is the dumbest take. Yes, humans make mistakes and their performance drops off with task complexity, but these things are in no way comparable to the failures of LLMs. The kinds of mistakes humans make are nothing like the kinds of 'mistakes' that LLMs make. You won't find humans accidentally fabricating citations or unintentionally summarizing text that doesn't exist. As for 'complexity', LLMs fail on even simplified versions of Towers of Hanoi, even when they're given explicit instructions. Humans, in contrast, can usually find a general solution to the problem in a few minutes, which they can then trivially apply to towers of arbitrary size. Further, as LLMs do not and cannot reason, even talking about reasoning complexity is silly.
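For reference, the general solution humans land on is the textbook three-line recursion, which handles a tower of any size without modification (a minimal Python sketch):

    def hanoi(n, src, dst, aux):
        # Move n disks from src to dst, using aux as the spare peg.
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst)              # clear the top n-1 disks out of the way
        print(f"move disk {n}: {src} -> {dst}")  # move the largest disk
        hanoi(n - 1, aux, dst, src)              # stack the n-1 disks back on top

    hanoi(4, "A", "C", "B")  # works unchanged for any number of disks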

Comment Re:partially true (Score 1) 70

It is very hard to find a research paper unless you know the exact words to search for it

If you have the citation, finding the paper is likely going to be trivial. My guess is that you're just asking the LLM for a citation for something and not checking to see if the paper even exists. They're very good at generating pretend citations. LLMs are not search engines.
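If the citation comes with a DOI, checking whether the paper even exists takes seconds. A minimal sketch, assuming the requests library and Crossref's public REST API (the DOI below is a made-up placeholder):

    import requests

    def doi_exists(doi):
        # Crossref answers 404 for DOIs it has no record of.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    print(doi_exists("10.1234/example.doi"))  # a fabricated DOI like this one should return False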

Testing out coding ideas. You can describe something you want and you get instantly code that creates the UI for you.

Or you could just draw a picture. Not only will you be able to iterate faster, you'll use significantly fewer resources. If that's not your thing, you could use any one of a zillion interface design tools. They don't need endless correction and won't randomly add or remove things unexpectedly.

I have never seen an official school book that didn't have any errors

Have you ever seen one that was half errors?

"Bernie Madoff stole billions, but Mike stole a candy bar when he was 3, so they're basically the same."
