Comment Re:ChatGPT is not a chess engine (Score 1) 116

Yes, I understand that.
I was just answering your question (at least as I understood it).

Yes, the LLM could code a chess engine, and yes, common frameworks for running the chess engine do exist.
Of course- it wouldn't really demonstrate what they were trying to demonstrate, which is why they didn't do that.

Comment Re:Society is gagging on AI hype, and it's getting (Score 1) 65

I'm afraid you do not understand what a large language model is.
Given that very obvious fact, were I you, I'd discard every opinion you have on the matter until you can rectify that.

Something like a book is not "stored" in an LLM.
It is torn into a billion sentence fragments, and the model's weights are adjusted toward being able to accurately predict how to complete them, based on a ~1000-dimensional embedding of the tokens in each fragment.
The goal is, in fact, to "memorize" as little as possible in the training. If you're memorizing, then you're not generalizing- you're wasting those 1000-dimensional vectors. After all, like you said, such data is trivial to store and recall accurately if that's your goal.
The actual goal of the training is, in learning to predict how those sentences finish, to learn the semantic associations between words. I.e., to learn to "understand" them, and what they mean when they're used the way they're being used.
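
Roughly what I mean by "torn into fragments" (a toy sketch in Python; whitespace splitting stands in for a real tokenizer, and the sentence is made up):

# Toy illustration: one sentence becomes many next-token prediction
# examples; training nudges weights toward the right completion rather
# than storing the text verbatim.
sentence = "the boy who lived had a scar on his forehead"
tokens = sentence.split()  # stand-in for a real BPE tokenizer

examples = []
for i in range(1, len(tokens)):
    context, target = tokens[:i], tokens[i]
    examples.append((context, target))

for context, target in examples:
    print(" ".join(context), "->", target)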

Could you recall 42% of the first book of Harry Potter?

Comment Re:Uh oh (Score 1) 73

Most C/C++ compilers will optimize a loop like that away anyway ...

Which is why, in my test, I made sure to print the result. If you don't use the result, the compiler will in fact optimize the loop away. If you do use it, it can't.

A multiplication test is more difficult to do, since where other languages will overflow, Python will not; its performance just keeps degrading as the values get larger.
Prior to the point of overflow (or precision extension), the performance ratio won't be any different, though.
Python will continue to be impressively bad.
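
To illustrate the overflow point (a quick sketch; the 64-bit mask just emulates what a fixed-width C uint64_t would do instead):

# Python ints never overflow: they just keep growing, and arithmetic on
# them keeps getting slower. Masking with 2**64 shows where a fixed-width
# C uint64_t would have wrapped around instead.
MASK64 = (1 << 64) - 1

x = 10
for step in range(5):
    x = x * x  # repeated squaring: 100, 10000, 10**8, 10**16, 10**32
    print(f"step {step}: {x.bit_length()} bits as a Python int, "
          f"{x & MASK64} as a wrapped uint64_t")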

Comment Re:more garbage comments from non-experts (Score 1) 48

And what is that supposed to mean?

Are we having a language barrier?
I said:

For basic math, C++ is ~10,000% faster than Python.

You said:

Where every single statement is interpreted, then it might be a factor of 100 slower. But certainly not 10,000.

I claimed 10,000%, or a factor of 100.
You then... agreed with me while thinking you were disagreeing with me.

Of course, all of that is irrelevant, because as it turns out- Python is just about the slowest fucking language in existence when it comes to adding numbers. I assume because of its arbitrary precision integers.
Some more testing just for the lols:

*@*-mbp4:~$ gcc -O3 -otest test.c; echo "C:"; time ./test; echo "Perl:"; time perl test.pl; echo "Python:"; time python3 test.py
C:
500000000500000000
real 0m0.149s
user 0m0.001s
sys 0m0.001s
Perl:
500000000500000000
real 0m8.287s
user 0m8.264s
sys 0m0.022s
Python:
500000000500000000
real 0m43.757s
user 0m43.635s
sys 0m0.115s

Seriously- it is so fucking bad. Code follows:
C:

#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint64_t a = 0;
    for (uint64_t i = 0; i <= 1000000000; i++)
        a += i;
    printf("%llu\n", a);
}

Perl:

$a = 0;
$a += $_ foreach (0 .. 1000000000);
print "$a\n";

Python:

a = 0
for x in range(1000000001):
    a += x
print(a)

Comment Re:Uh oh (Score 1) 73

For your amusement, peruse their most recent claim that a simple addition loop in C is "only 100x" faster than Python.
That alone should be lol-worthy, but it's not even true- it's actually ~4300x faster. There is, in fact, probably nothing fucking slower than Python. I'd like to know how much global warming is accelerated by people using that pile-of-shit language.

Java. Python. What the fuck is it with that guy loving truly fucking terrible languages?

Comment Re:more garbage comments from non-experts (Score 1) 48

Also, just for shits and giggles- let's get a real measurement.

*@*-mbp4:~$ gcc -O3 -otest test.c; time ./test; time python3 test.py
1000000000
real 0m0.120s
user 0m0.007s
sys 0m0.001s
1000000000
real 0m30.205s
user 0m30.118s
sys 0m0.081s

At first glance, we might be led to see that as 30.205 / 0.120 = 251.7x (~25,000%)
But that's not the whole story.
The real figure is wall-clock time; user is the time actually spent computing within the process.
Really, we're looking at 30.118 / 0.007 = 4302 (~430,000%)

So really, my initial estimate of 10,000% was way the fuck off. It's much, much, MUCH worse than that, lol.
You are so fucking wrong that it's frankly fucking embarrassing.
Python truly is a shit fucking language.
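
For what it's worth, you can take interpreter startup out of the picture entirely by timing the loop from inside the process (a quick sketch, reusing the addition loop from the test.py in my other comment):

# Time the summation loop from inside the interpreter, so startup cost
# isn't mixed into the measurement the way it is with the shell's `time`.
import time

start = time.perf_counter()
a = 0
for x in range(1000000001):
    a += x
elapsed = time.perf_counter() - start

print(a)
print(f"loop time: {elapsed:.3f} s")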

Comment Re:More parameters (Score 1) 65

Anyone who uses the output of an LLM and calls it their own is indeed committing something akin to plagiarism. No argument there.
However, if one quotes an LLM, no matter what the LLM produces, no matter where it comes from, it cannot be plagiarism, and that's simply an immutable fact. You owe the person you replied to an apology.

You let a discussion about LLMs shut down the part of your brain that does the whole thinking thing, again.

Comment Re:Reading the article (Score 2) 65

- Suppose someone wants to estimate the probability that a model will respond to “My favorite sandwich is” with “peanut butter and jelly.” Here’s how to do that:
Prompt the model with “My favorite sandwich is” and look up the probability of “peanut” (let’s say it’s 20 percent).
Prompt the model with “My favorite sandwich is peanut” and look up the probability of “butter” (let’s say it’s 90 percent).
Prompt the model with “My favorite sandwich is peanut butter” and look up the probability of “and” (let’s say it’s 80 percent).
Prompt the model with “My favorite sandwich is peanut butter and” and look up the probability of “jelly” (let’s say it’s 70 percent).
Then we just have to multiply the probabilities like this: 0.2 * 0.9 * 0.8 * 0.7 = 0.1008

That's not really how LLMs work, though.
In real life, the next token isn't sampled purely in proportion to those raw probabilities.

For your example, the realistic final probabilities would be more like:
Peanut: 50%
Butter: 100%
And: 100%
Jelly: 100%
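
What I'm getting at (a rough sketch; the candidate words and logit values are invented, and real decoders layer top-k/top-p and the rest on top of this): greedy or low-temperature decoding sharpens the distribution, so once "peanut" is picked, the rest of the continuation is close to certain.

# Rough sketch: the raw next-token distribution vs. what greedy or
# low-temperature decoding actually does with it. The logit values are
# made up purely for illustration.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(v / temperature) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["peanut", "ham", "tuna", "grilled"]
logits = [2.0, 1.2, 0.8, 0.5]  # hypothetical model outputs

print("raw probabilities: ", [round(p, 3) for p in softmax(logits)])
print("temperature 0.2:   ", [round(p, 3) for p in softmax(logits, 0.2)])
print("greedy pick:       ", candidates[logits.index(max(logits))])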
