Comment Re:NOT REQUIRED TO COMPLY (Score 1) 52
Before or after you bought it? If it's afterwards, it's an agreement made under duress.
You're mistaking "how it's trained" for "what it is". Not all LLMs are trained to be abusive Nazis, and it's not what they inherently are. It's certainly one of the things they can be trained to be, however. (Even before this year, remember Microsoft Tay.)
The problem is that LLMs have essentially no "real world" feedback loop. They'll believe (i.e. claim) anything you train them to believe. Train them that the sky is green, and that's what they'll believe (claim).
The Chinese government is even more acquisitive and controlling than the US government. And neither is very good about keeping the deals that they've made, though the Chinese government is arguably better about that than is the US government.
I can see that as a viable approach, but that's not the way I'm given to understand that it works. Perhaps it depends on your field.
Nobody does. Everybody is limited in how complex a thought they can hold in their mind. Some do it better than others, but everybody is limited.
IIUC, all the journals require payment for the article to be published. Some of them are *only* in it for the money, but all of them *are* in it for the money.
People already pay to have scientific papers published, so that would have at most minor effect.
When I said "I'm not sure AGI is possible" I intended the implication that people don't have "general intelligence". I agree that they don't.
The Pascals that I was familiar with allowed you to concatenate two strings of different lengths and compare the result with either of the original components. It wasn't the same problem at all.
Not really. Super-intelligent in a narrow area is a lot easier than ordinary intelligence over all fields. We've already got it in a few areas, like protein folding.
The kicker is AGI. I'm not sure that, under a definition that matches the acronym, it's even possible, yet some companies claim to be attempting it. Usually, when you check, they've got a bunch of limitations in what they mean. A real AGI would be able to learn anything. This probably implies an infinite "stack depth". (It's not actually a stack, but functionally it serves the same purpose.)
"Pledges" is just a synonym for "promises". You don't need to read anything extra into it. It's probably just a bit of a shorter headline.
And of all the AI companies I know of, they're the one whose success I'm least eager to see. I literally would prefer China. Altman's no prize, but Zuckerberg...
IIRC, the DoD has pretty much moved away from Ada. They couldn't find enough good programmers.
The design was picked by a committee, and subsequent changes have been made by committee.
FWIW, the main reason Ada didn't succeed was that it was too expensive. Even Gnat required a more powerful computer than most folks had access to. And it was also the most complicated language around. But the REAL problem was that the length of the string was part of the type, and strings of different lengths couldn't be passed as the same argument to a function. There was a workaround, but it was clumsy. The default string should have been unbounded, with specific-length strings available as an optimization choice.
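The "length is part of the type" problem isn't unique to Ada, so it's easy to demonstrate elsewhere. A minimal Rust sketch (Rust's fixed-size arrays carry their length in the type the way Ada's String does, and slices play roughly the role of unbounded strings; the function names here are just illustrative):

```rust
// A function over a fixed-size array: [u8; 3] and [u8; 5] are
// distinct types, so this only accepts exactly-3-element arrays.
fn sum3(a: [u8; 3]) -> u32 {
    a.iter().map(|&b| b as u32).sum()
}

// The "unbounded" workaround: a slice, whose length is a runtime
// value rather than part of the type, accepts any length.
fn sum(a: &[u8]) -> u32 {
    a.iter().map(|&b| b as u32).sum()
}

fn main() {
    let short = [1u8, 2, 3];
    let long = [1u8, 2, 3, 4, 5];

    println!("{}", sum3(short)); // fine: types match exactly
    // sum3(long);               // compile error: expected `[u8; 3]`, found `[u8; 5]`

    println!("{}", sum(&short)); // slices take either length
    println!("{}", sum(&long));
}
```

Rust made the slice the idiomatic default and left fixed-size arrays as the optimization case, which is the ordering the comment argues Ada should have chosen.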
Well, it definitely fits into "news for nerds". As for "Stuff that matters"...that depends on your use case. I no longer have access to any 32 bit computers. (I may still have a 16 bit computer somewhere.)
The way to make a small fortune in the commodities market is to start with a large fortune.