Comment Re:What problem do they solve? (Score 1) 96
Well, they might let me remember people's names. There are probably other uses, but that's the one that occurs to me.
I accept that it is breaking the law. That doesn't mean any official is going to enforce that law. There are lots of laws that are frequently broken in obvious ways, and are enforced only when it is a convenient excuse.
How far in the future?
I can imagine useful AI-based "smart glasses", but that description doesn't fit what I've heard so far, and my prescription means I couldn't try them if I wanted to.
That tracks, because I'm currently restoring a 1948 Chrysler straight-eight engine (really the flathead design Packard was famous for) on a 1948 Rolls-Royce chassis.
I seriously doubt your free-speech claim about a chatbot will hold up in court.
They themselves proved you can't tell a chatbot not to be a therapist. You're not supposed to cut your hand off, yet there's no regulation telling people not to cut their hand off. You can't write an endless set of rules spelling out everything you're not allowed to use free LLMs for. Anyone can train any number of LLMs on free, open-source data sets with any restrictions (or none at all); it costs less than $1,000 to train a basic 7B model with a credit card at any cloud provider. Next they'll be putting signs on gas pumps instructing people not to douse themselves in gasoline and set themselves on fire.
I recall there was "premium" cable: HBO, Showtime, Cinemax, and probably a handful of has-beens and regional ones that got bought out. I recall regular cable being ~$32/mo; adding premium cable (HBO) jumped it up to ~$75, but they only had a few shows and the movies would rotate. Not too different from how it is now, except it's streaming over the internet.
This was largely the purpose of Gmail, IIRC. Humans can't read your email, but there's almost no value in allowing them access anyway, whereas letting computers build a customer profile to sell ads to you is invaluable.
If it's doing auto-transcription, it's probably training an AI with your information. This could be a major security problem.
It's annoying, but good, because "there's a sucker born every minute" means there are always new victims who haven't been warned.
There have been lots of instances where companies with a "good reputation" changed their spots.
The post at the top of the thread was about "AI". The following posts were about AI. Don't be blinded by the current hype into thinking that's the whole picture. Just because other developments get less press doesn't mean they aren't happening and aren't important. In the field of biochem, most AI is *related* to LLMs but is significantly different.
LLMs are not equivalent to AIs; they are a subset. Don't take LLMs as a complete model of the capabilities of AIs.
Yes. I went to check out buying an Apple recently, after an appointment with my ophthalmologist. I wanted a computer that would run reasonably with voice control, as the ads suggested was possible. I decided not to, or at least to wait another year.
Now I have no idea how many people are affected this way, but that is a sign that the deficiencies have caused at least *some* damage to Apple.
An AI that develops drugs is a fantasy, because the data is too corrupt. AIs that aid in suggesting possibilities already exist, and they will improve, but one that could run the whole development cycle would require cleaner data (or better robots).