Comment Re:But who will train the AI? (Score 1) 57

Probably within 3-10 years there will be a handful of open, legally copyright-free training sets anyone can use to train their own ~600B-class model with whatever architecture is current state of the art. Researchers are already putting together training sets like this at the 7B scale. And LLMs can use tools like search now, so they won't always need the most up-to-date news or info baked in; they can fetch it with tools instead. Most of finance is analysis of documents, which LLMs have been excellent at for a while now.
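
To make the tool-use point concrete, here's a minimal sketch of that kind of loop in Python. Everything in it (call_model, web_search, the JSON convention) is a hypothetical stand-in, not any particular vendor's API:

    import json

    def answer_with_search(question, call_model, web_search, max_steps=5):
        # call_model(messages) -> assistant text; web_search(query) -> result
        # string. Both are hypothetical stand-ins for whatever you use.
        # Convention assumed here: the model replies with JSON like
        # {"tool": "search", "query": "..."} when it wants fresh info,
        # and with plain text when it has its final answer.
        messages = [{"role": "user", "content": question}]
        for _ in range(max_steps):
            reply = call_model(messages)
            messages.append({"role": "assistant", "content": reply})
            try:
                request = json.loads(reply)
            except ValueError:
                return reply  # plain text: treat as the final answer
            if not isinstance(request, dict) or request.get("tool") != "search":
                return reply
            result = web_search(request["query"])
            messages.append({"role": "tool", "content": result})
        return reply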

Comment Re:Chickens come home to roost (Score 1) 68

Sorry, but this isn't evidence about the quality of the graduates. If they're just out of school you can't tell whether they're good or bad. I trained an astrologer to be a good programmer in less than a year. (Well, he soon moved into management, but he was capable.) HR hired a different astrologer who was skilled at C, more skilled in the techniques than I was. But he was in love with macros and used them everywhere, so nobody else could understand his code. It was the second one who had the high SAT score.

Comment Re:Irony at its finest (Score 1) 54

You're confusing "intelligence" with "goals". That's like confusing theorems with axioms. You can challenge the theorems. Say, for instance, that the proof is invalid. You can't challenge the axioms (within the system). Likewise, you can't challenge the goals of the AI. You can challenge the plans it has for achieving them.

Comment Re:The three godfathers (Score 3, Insightful) 54

A sufficiently superhuman AI would, itself, be a risk, because it would work to achieve whatever it was designed to achieve, and not worry about any costs it wasn't designed to worry about.

Once you approach human intelligence (even as closely as the currently freely available LLMs do), you really need to start worrying about the goals the AI is designed to try to achieve.

Comment Re:More like "Superstupidity" (Score 1, Interesting) 54

Specialized superintelligence is quite plausible. We don't have it yet, but we're close. Few people can do protein-folding predictions as well as a specialized AI. Just about nobody can out-compute a calculator. Etc.

Your general point is quite valid, but I think you don't properly understand it. IIUC, we've got the basis for an AGI, but it needs LOTS of development. And LLMs are only one of the pieces needed, so it's not surprising that they have lots of failure modes. And once you get a real AGI, you've got the basis for super-human intelligence. So aiming directly for "SuperIntelligence!" is the wrong approach. (But if you're after a headline, proclaiming it is a good one.)

Comment Re: So we all know the guy is selling snake oil (Score 2) 57

I'm all in favor of space travel, but that's not going to solve the social problems on earth, and we don't yet have the ability to run a small, self-sufficient, stable society in an off-earth environment.

I do support space habitats, but I tend to think of that as a "next century" (or after the singularity) kind of thing.

What a large war does is kill off a large proportion of the most aggressive young males. It's one of the traditional ways the current crop of alpha-male primates keeps control.

Comment Re:This is how it should be (Score 1) 6

Google announced roughly the same thing, on-device models for phones, a couple of weeks ago at their developer conference. The 1B model is fine for basic tasks like turning on lights, checking email, and social media notifications, and runs OK on midrange phone hardware. The 4B model technically runs, but at borderline unusable speed; still, it can answer questions like "how does a microwave work?" with moderate accuracy at a semi-scientific level, which is impressive. I suspect most devices will be able to run a 1B model, and by the end of the decade nearly everything will run at least a 4B model at talking speed.

There's a notion that all AI processing will happen in the datacenter. I suspect 80%+ of consumer LLM use will happen on the device, with more complex tasks routed to the cloud. For a lot of end users (high school students, etc.) 98%+ of requests will be on-device.
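
The on-device/cloud split could be as dumb as this. A rough Python sketch; run_local_1b, call_cloud, and the heuristic are all hypothetical, just to show the shape of the routing:

    def route_request(prompt, run_local_1b, call_cloud):
        # run_local_1b and call_cloud are hypothetical stand-ins for an
        # on-device model and a hosted one. The heuristic is deliberately
        # crude: short, command-like requests stay on-device; anything
        # long or open-ended goes to the datacenter.
        simple_verbs = ("turn on", "turn off", "set a timer", "read my", "check")
        text = prompt.lower().strip()
        if len(text) < 120 and text.startswith(simple_verbs):
            return run_local_1b(prompt)   # fast, private, works offline
        return call_cloud(prompt)         # bigger model for complex tasks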

Comment Seems to fall apart above 200 LOC (Score 1) 71

LLMs are really good at coding, better and faster than humans, as long as the complexity isn't much more than ~200 LOC (lines of code). At 250-300 LOC things start falling apart quickly. Sometimes (maybe 1 in 50 tries) you'll get lucky and it'll pop out 400 LOC without major errors, but that seems to be the absolute limit of the current statistical model family everyone is using.
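
One workaround (my own habit, not anything the models enforce) is to never ask for more than a small unit at a time and stitch the pieces together yourself. A rough Python sketch, where generate_code is a hypothetical model call:

    def build_feature(spec_parts, generate_code, max_loc=150):
        # spec_parts: small, self-contained specs ("write a function that
        # parses X", ...). generate_code is a hypothetical model call that
        # returns source text. Keeping each piece under ~150 LOC stays well
        # inside the range where current models are reliable.
        pieces = []
        for spec in spec_parts:
            code = generate_code(spec)
            loc = len([l for l in code.splitlines() if l.strip()])
            if loc > max_loc:
                raise ValueError(f"piece too big ({loc} LOC); split the spec further")
            pieces.append(code)
        return "\n\n".join(pieces)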
 
LLMs are really good at analyzing and summarizing text, though; they have no problem with 20-30 page PDFs of economic or financial data.
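
The plumbing for that is genuinely short. A minimal Python sketch using the pypdf library for text extraction; call_model is a hypothetical stand-in for whatever LLM API you use:

    from pypdf import PdfReader  # pip install pypdf

    def summarize_pdf(path, call_model):
        # Pull the text out of every page, then hand the whole thing to
        # the model. A 20-30 page financial PDF fits comfortably in the
        # context window of current frontier models.
        reader = PdfReader(path)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        prompt = ("Summarize the key figures, trends, and risks in this "
                  "document:\n\n" + text)
        return call_model(prompt)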
 
But yeah, there was this idea that if you just kept training on bigger datasets for longer, eventually you'd arrive at AGI. It's pretty obvious from many, many research papers that the error floor right now is ~1%, and getting below that is really, really dang hard. We're going to need a new breakthrough to move the ball further down the field.
