Comment Re:Carrot and Stick (Score 1) 105
Seriously, the idea that we know all the practically important physics there is is the kind of thing only somebody who's never done science or engineering would believe.
Industrial R&D is important, but it is in a distant third place with respect to importance to US scientific leadership, after (1) universities operating with federal grants and (2) federal research institutions.
It's hard to convince politicians with a zero-sum mentality that the kind of public research that benefits humanity also benefits US competitiveness. The mindset shows in launching a new citizenship program for anyone who pays a million bucks while at the same time discouraging foreign graduate students from attending university in the US, or even continuing their university careers here. On average, each talented graduate student admitted to the US to attend an elite university contributes far more than someone who could just buy their way in.
Republicans equate being pro-market with being pro-big-business-agenda. The assumption is that anything that is good for big business is good for the market and therefore good for consumers.
So in the Republican framing, antitrust, since it interferes with what big business wants to do, is *necessarily* anti-market and bad for consumers. If you accept their axioms, that would have to be true, even though what big business wants to do is use its economic scale and political clout to consolidate, evade competition, and lock in consumers.
That isn't economics. It's religion. And when religious dogmas are challenged, you call the people challenging them the devil -- or in current political lingo, "terrorists". A "terrorist" in that sense doesn't have to commit any actual act of terrorism. He just has to be a heathen.
where the sun don't shine.
The Linux Foundation has always been kind of useless, but they're really outdoing themselves this time.
More like old vs new terribleness.
no problem.
I'm actually responding to the AC above you. He is arguing that the attack wouldn't make any sense for either country to make, based on *national* interest. I'm pointing out that's not the only framework in which *regimes* make decisions.
it's not like it's constantly streaming your camera to the cloud
How do you know that?
Since it's from Google, I'd rather assume the opposite -- and that they probably focused their engineering effort on making sure the reduced battery life didn't give their corporate surveillance activities away.
Just put it in context: Today Russia struck the Pechenihy Reservoir dam in Kharkiv.
Russia launched the war because they thought it would be a quick and easy win, a step towards reestablishing a Russian empire and sphere of influence, because Putin thinks in 19th-century terms. Russia is continuing the war, not because it's good for Russia. I'd argue that winning and then having to rebuild and pacify Ukraine would be a catastrophe. Russia is continuing the war because *losing* the war would be catastrophic for the *regime*. It's not that they want to win a smoldering ruin; it's that winning a smoldering ruin is more favorable to them than losing an intact country.
- "I use all your queries to increase OpenAI's revenue regardless of how unethical."
- "My replies are designed to keep you engaged rather than be accurate."
- "I'm not really your friend, don't trust my tone."
That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means that you think that LLMs are the only kind of AI that there is, and that language models can be trained to do things like design rocket engines.
This isn't true. Transformer-based language models can be trained for specialized tasks having nothing to do with chatbots.
That's what I just said.
Here's where the summary goes wrong:
Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.
Artificial intelligence is in fact many kinds of technologies. People conflate LLMs with the whole thing because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.
But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes the average person can't even conceive of -- like design optimization where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which are just collections of mathematical matrices that encode likelihoods underneath, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices containing parameters describing weighted relations between features should have a wide variety of applications. That's just math; it's just sexier to call it "AI".
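To make the "just math" point concrete, here's a minimal sketch of what a neural net is underneath: weight matrices applied to an input vector, with a simple nonlinearity in between. The matrices and input values here are arbitrary toy numbers chosen for illustration, not from any real model.

```python
# A "neural net" layer is nothing but a weight matrix times a vector,
# followed by a nonlinearity -- plain linear algebra.

def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(vec):
    """Elementwise nonlinearity: zero out negative entries."""
    return [max(0.0, a) for a in vec]

# The "network" is just these two parameter matrices (toy values).
W1 = [[0.5, -1.0],
      [2.0,  0.25]]
W2 = [[1.0, 1.0]]

def forward(x):
    """Two-layer forward pass: matrix, nonlinearity, matrix."""
    return matvec(W2, relu(matvec(W1, x)))

print(forward([1.0, 2.0]))  # prints [2.5]
```

Swap the matrices and the training data, and the same machinery scores design parameters instead of predicting words, which is why the technique shows up far beyond chatbots.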
What always happens when you try to block kids from doing anything: they find a way to do it anyway.
We older folks too were "blocked" from doing stuff as kids, pre- and post-internet, and we too did it anyway. And it actually made us smarter, as we had to devise ways around the obstacle.
Kids are smart. This will just make them smarter.
and the product hasn't even become attractive and popular yet. OpenAI forgot that step...
An inclined plane is a slope up. -- Willard Espy, "An Almanac of Words at Play"