Comment Re:The future (Score 1) 85

They say financial literacy is poor in the US.

Did you miss the part where it has to be invested in a market-tracking index fund? The S&P 500, which is what people usually mean when they say "the market," has returned almost 10% / year over the last hundred years. Index fund fees are usually a fraction of a percent, so ~9% / year; subtract the US average inflation rate this century of ~3% and you've got ~6% real growth. No imaginary leverage scams needed.
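
For anyone who wants to see what that compounding looks like, here's a rough sketch. The 10% nominal return, 0.5% fee, and 3% inflation figures are round-number assumptions for illustration, not exact historical values:

    # Rough sketch of the compounding claim above; all rates are assumptions.
    def real_balance(principal, years, nominal=0.10, fees=0.005, inflation=0.03):
        """Grow a balance at (nominal - fees), then deflate by inflation."""
        growth = (1 + nominal - fees) ** years
        deflator = (1 + inflation) ** years
        return principal * growth / deflator

    # $1,000 left in an index fund for 18 years, expressed in today's dollars.
    print(f"${real_balance(1000, 18):,.0f} in today's dollars")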

Comment Re:Useless technology anyway (Score 1) 92

Right, because every phone has enough processing power to decode a 4K stream, display it, record its own screen, re-encode that, and send it to the TV. And enough battery life. And the local half of the Wi-Fi network has the bandwidth to handle three simultaneous 4K streams (router to phone, phone to router, router to TV). Yes, the implementation has always had problems, and it may be niche, but there clearly is a use case.
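
Just as a back-of-the-envelope check on that bandwidth claim (the 25 Mbps per-stream figure is an assumption, roughly what streaming services use for 4K, not anything from the article):

    # Rough tally of simultaneous Wi-Fi traffic for the screen-mirroring scenario above.
    STREAM_MBPS = 25  # assumed bitrate of one 4K stream

    hops = {
        "router -> phone (original stream)": STREAM_MBPS,
        "phone -> router (re-encoded mirror)": STREAM_MBPS,
        "router -> TV (mirror)": STREAM_MBPS,
    }

    for hop, mbps in hops.items():
        print(f"{hop}: {mbps} Mbps")
    print(f"total simultaneous Wi-Fi traffic: {sum(hops.values())} Mbps")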

Comment Re:What's old is new again (Score 1) 42

That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means you think that LLMs are the only kind of AI there is, and that language models can be trained to do things like design rocket engines.

Comment Re:What's old is new again (Score 5, Informative) 42

Here's where the summary goes wrong:

Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.

Artificial Intelligence is in fact many different technologies. People conflate LLMs with the whole thing because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.

But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes the average person can't even conceive of -- like design optimization where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which are just collections of mathematical matrices that encode likelihoods underneath, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices containing parameters describing weighted relations between features has a wide variety of applications. That's just math; it's just sexier to call it "AI".
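
To make the "collection of matrices" point concrete, here's a minimal sketch: a two-layer network's forward pass is nothing but matrix multiplications and a simple nonlinearity. The sizes and weights are arbitrary illustrations:

    import numpy as np

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 8))   # weighted relations: 4 input features -> 8 hidden units
    W2 = rng.normal(size=(8, 2))   # 8 hidden units -> 2 outputs

    def forward(x):
        """One forward pass: just linear algebra plus a ReLU."""
        hidden = np.maximum(0, x @ W1)   # ReLU nonlinearity
        return hidden @ W2

    print(forward(np.array([1.0, 0.5, -0.2, 2.0])))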

Comment Re:What is thinking? (Score 1) 289

The quote you provided didn't say LLM, it said neural network. Neural networks, like any model, can interpolate or extrapolate, depending on whether the inference is between training samples or not.

LLMs are neural networks. You seem to be referring to a particular method of producing output where they predict the next token based on their conditioning and their previously generated text. It's true in the simplest sense that they're extrapolating, and reasonable for pure LLMs, but probably not really true for the larger models that use LLMs as their inputs and outputs. The models have complex states that have been shown to represent concepts larger than just the next token.
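
A toy sketch of the generation loop I mean, where each step conditions on the prompt plus everything generated so far; next_token() here is a stand-in for a real model's prediction, not an actual LLM:

    import random

    def next_token(context):
        """Placeholder for an LLM's next-token prediction (not a real model)."""
        vocab = ["the", "rocket", "engine", "burns", "fuel", "."]
        random.seed(hash(tuple(context)) % (2**32))
        return random.choice(vocab)

    def generate(prompt, max_tokens=8):
        tokens = list(prompt)
        for _ in range(max_tokens):
            tok = next_token(tokens)   # conditioned on everything generated so far
            tokens.append(tok)
            if tok == ".":
                break
        return " ".join(tokens)

    print(generate(["design", "a"]))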

Comment Re:It's not meant to be a competition (Score 1) 21

This isn't some kind of 'our neutrino observatory is bigger than your neutrino observatory' contest.

That's exactly what it is. When your science depends on a big, expensive piece of hardware that few others have, or (better yet) that nobody else has, that's what you tend to talk about. Especially in press releases and grant applications.

Comment Re:What is thinking? (Score 1) 289

Neural networks generally don't extrapolate, they interpolate

You could test that if someone were willing to define what they mean by "generally" I suppose. I think it's fairly safe to say that they work best when they're interpolating, like any model, but you can certainly ask them to extrapolate as well.
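
One way to make that testable: fit a small model on inputs drawn from [0, 1], then query it both inside and outside that range. The sine target and the polynomial stand-in for "a model" are arbitrary choices for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    x_train = rng.uniform(0.0, 1.0, size=200)
    y_train = np.sin(2 * np.pi * x_train)

    coeffs = np.polyfit(x_train, y_train, deg=5)   # simple stand-in for "a model"
    model = np.poly1d(coeffs)

    for x in (0.5, 1.5):   # 0.5 interpolates, 1.5 extrapolates
        print(f"x={x}: model={model(x):+.2f}, truth={np.sin(2 * np.pi * x):+.2f}")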
