Comment Re:zoom is worse (Score 1) 21
If it's doing auto-transcription, it's probably training an AI with your information. This could be a major security problem.
It's annoying, but good, because "there's a sucker born every minute" means there are always new victims who haven't been warned.
There have been lots of instances where companies with a "good reputation" changed their spots.
The post at the top of the thread was about "AI". The following posts were about AI. Don't be blinded by the current hype into thinking that's the whole picture. Just because other developments get less press doesn't mean they aren't happening and aren't important. In the field of biochem, most AI is *related* to LLMs, but is significantly different.
LLMs are not equivalent to AIs, they are a subset. Don't take LLMs as a complete model of the capabilities of AIs.
Yes. I went to look at buying an Apple recently, after an appointment with my ophthalmologist. I wanted a computer that would run reasonably well with voice control, as the ads suggested was possible. I decided not to, or at least to wait another year.
Now I have no idea how many people are affected this way, but that is a sign that the deficiencies have caused at least *some* damage to Apple.
An AI that develops drugs is a fantasy, because the data is too corrupt. There already exist AIs that aid in suggesting possibilities, and they will improve, but one that could handle the full development cycle would require cleaner data (or better robots).
But a large part of why X Window is the way it is, is that it was designed when computers were EXPECTED to have a lot less memory and disk space. It's always easier to expand something than to trim it back.
Well, but it *was* designed for them. That it didn't stay fixed at the original limits doesn't mean that isn't how it was designed.
(And FWIW, I think a system designed for minimal requirements is REALLY desirable.)
Sorry, but this isn't evidence about the quality of the graduates. If they're just out of school you can't tell whether they're good or bad. I trained an astrologer to be a good programmer in less than a year. (Well, he soon moved into management, but he was capable.) HR hired a different astrologer, who was skilled at C. More skilled in the techniques than I was. But he was in love with macros, and used them everywhere, so nobody else could understand his code. It was the second one that had a high SAT score.
What it tells me is that $15/hour probably isn't enough to pay the rent and buy groceries.
You didn't suggest what solution to apply. (Or, indeed, precisely what you consider the actual problem...there were at least 3 mentioned in the comment chain.)
No. Intelligence cannot define goals, only "sub-goals", i.e. things you need to do in order to move closer to achieving your goal. Intelligence selects means to achieve goals, but it can't define the goals themselves.
You're confusing "intelligence" with "goals". That's like confusing theorems with axioms. You can challenge the theorems. Say, for instance, that the proof is invalid. You can't challenge axioms (within the system). And you can't challenge the goals of the AI. You can challenge the plans it has to achieve them.
A sufficiently superhuman AI would, itself, be a risk, because it would work to achieve whatever it was designed to achieve, and not worry about any costs it wasn't designed to worry about.
Once you approach human intelligence (even as closely as the currently freely available LLMs do), you really need to start worrying about the goals the AI is designed to try to achieve.
A fanatic is a person who can't change his mind and won't change the subject. - Winston Churchill