Comment Re:Time to find another distro... (Score 3, Insightful)
How are they supposed to detect whether AI was used or not? This is pretty much a forced move. The key part is that the developer still has full responsibility.
You didn't need to have a dependency on Clippy, just a prohibition on uninstalling it. Personally, I never liked it, and disabled it after a few days, but many organizations have a stricter configuration policy.
???
Apple bought NeXT, but I believe that MS developed its own software after the first few years. Not that I agree that any of it is "a great product", but at least until around 1995 most of it was pretty usable. (At that point I switched to Apple for a few years before moving on to Linux, so I don't know about recent MS products, but I'm pretty sure most of them were developed in-house.)
I like to say
I feel you have not given much thought to what "understanding" is.
People often say things like this based on an intuitive notion of understanding that does not stand up to scrutiny. "It just feels kind of right".
I could ask an AI what "understanding" is, and get a better answer than from 98% of people, but of course that is the type of answer that can come from regurgitating what it has read. The real proof is when you go into the details with more complex iterative queries, and see whether the AI understands your questions and recognises your misconceptions. There is a lot that current LLMs cannot do well, like interacting with the real world, but they do "understand", often very well, depending on the context. Better at complex technical topics than celebrity gossip, maybe?
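A minimal sketch of that iterative-probing idea, assuming nothing about any particular chat API: `ask` below is a placeholder you would wire to whatever client you use, and the canned stub exists only so the example runs. The point is the structure, not the plumbing: start broad, drill into details, plant a misconception, and judge whether the answers stay consistent.

```python
# Sketch of an iterative "does it understand?" probe. ask(history, q) is a
# hypothetical hook for a real chat API; the fake below just echoes questions.

def probe_understanding(ask, topic):
    history = []
    questions = [
        f"Explain {topic} briefly.",
        f"Now walk through a concrete worked example of {topic}.",
        f"Here is a subtly wrong claim about {topic}; where is the error?",
    ]
    for q in questions:
        reply = ask(history, q)      # each query sees the whole conversation
        history.append((q, reply))
    return history                   # inspect for consistency across rounds

if __name__ == "__main__":
    fake_ask = lambda history, q: f"[canned answer to: {q!r}]"
    for q, a in probe_understanding(fake_ask, "two's-complement arithmetic"):
        print(q, "->", a)
```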
And for that fraud, with its supposed lack of a victim, the founder pled guilty, paid a $50M fine, and went to prison.
I'm not commenting on this particular case, but given the US plea-bargaining system, a guilty plea should not be taken as proof of actual guilt.
The "right to a trial" is a distant memory in a land where anyone exercising that right case face a sentence 2 or 3 times longer than if they plead guilty.
China was working toward self-sufficiency anyway. Unstable trade wars and tariffs have just made its importance more blatant.
Having a rotating camera, or multiple cameras, doesn't seem ridiculously hard. The problem is quickly interpreting the received images. (Knowing the distance helps a lot in that regard.)
Drivers sit in a fixed position. So "should be able to be as safe as humans" seems reasonable. This doesn't mean or imply that the actual implementations are as safe.
Besides, I'd prefer that automated driving be safer than human driving.
FWIW, both the Democrats and the Republicans pretty much fit Mussolini's definition of fascism: the government working together with commercial companies to benefit and strengthen the country. There's an implication of centralized control by the government in Mussolini's formulation, but I think an oligarchy or plutocracy would also fit.
I don't think fascism is NECESSARILY bad, but it sure is easy to turn it that way. (Fascism is NOT Nazism. They really are distinct, if compatible, organizational principles.)
Well cameras & lights should be enough for most safe driving. Or at least as safely as people drive. (I'd add mics, but a lot of those people are driving in noisy enough environments that they can't register a loud noise off to their left.) Maybe add vibration sensors.
OTOH, if you want to drive more safely than people, it probably helps a lot to have accurate distance ranging.
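A toy illustration of that point, with made-up numbers: a planner has to assume the worst case inside the sensor's error bar, so metres of depth-estimation error force earlier braking where centimetre-scale ranging would not.

```python
# Toy model: brake if the worst-case true gap is inside the stopping distance.
# The decel, reaction time, and error figures are illustrative, not real specs.

def required_braking_distance(speed_mps, decel_mps2=6.0, reaction_s=0.5):
    """Distance covered during reaction time plus braking to a stop."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def must_brake(measured_distance_m, range_error_m, speed_mps):
    worst_case_gap = measured_distance_m - range_error_m
    return worst_case_gap <= required_braking_distance(speed_mps)

speed = 25.0  # roughly 90 km/h
print(must_brake(80.0, range_error_m=0.2, speed_mps=speed))   # ranged: False
print(must_brake(80.0, range_error_m=20.0, speed_mps=speed))  # estimated: True
```

The error bar, not the sensor brand, is what eats the safety margin.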
The only thing wrong with your comment is that science isn't a "lone genius" driven activity either. That it sometimes seems to be is an artifact of the way histories get written. Without a massive support group, Einstein wouldn't have accomplished anything.
Corporations are not merely people; they're people acting in an environment that rewards particular motivations and discourages others. They're essentially a minimally regulated bureaucracy, one that positively selects for, and rewards, the power-hungry and the immoral.
I didn't either. I grew up with various movies selling different lines... and I pretty much avoided the ones about business. But ISTM that more movies said "our government is good and trustworthy" than said the same about corporations.
Perhaps that should be "The model improvement for chatbots seems to be dying down.", as I think that's the correct statement of what you mean. And that's probably correct. Once you get beyond chatbots, though, the model improvement is continuing.
FWIW, I think chatbots have an intrinsically limited capability. However, if you use chatbots as an interface to some other capability (e.g. robots, in various meanings of that term), then the limitation changes drastically.
They should get AI to write the Slashdot summaries.
It seems like every criticism I hear about AI could also be applied to humans. Sometimes more so.
AI confidently gives an answer when it doesn't know? Check!
Lack of transparency in the process of coming to a conclusion? Check!
Rationalisation: explaining the reasoning for a conclusion retrospectively? Check!
AI output is only bad if you go in expecting it to be perfect and don't check the results.
These are amazing tools when used correctly. Complaining about AI errors is like if someone showed you a talking dog, and you found fault with its grammar.
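In that spirit, a minimal sketch of the "check the results" posture: treat the model's output as a candidate and verify it with an independent, cheap check. `model_factors` is a stand-in for an actual model call; it returns a canned claim here so the example runs.

```python
# Verification is often much cheaper than generation: multiplying claimed
# factors back together is trivial even when finding them is not.

import math

def model_factors(n):
    return [3, 5, 7, 19]  # canned stand-in for a model's claimed factorisation

def verified_factors(n):
    candidate = model_factors(n)
    if math.prod(candidate) != n:
        raise ValueError(f"claimed factorisation of {n} failed verification")
    return candidate

print(verified_factors(1995))  # passes: 3 * 5 * 7 * 19 == 1995
```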
"Little prigs and three-quarter madmen may have the conceit that the laws of nature are constantly broken for their sakes." -- Friedrich Nietzsche