
Comment I'm not sure how you regulate that (Score 0) 66

They themselves proved you can't tell a chatbot not to be a therapist. You're not supposed to cut your hand off, but there's no regulation telling people not to cut their hand off. You can't write an endless set of rules spelling out everything you're not allowed to do with free LLMs. Anyone can train any number of LLMs on free, open-source data sets with whatever restrictions (or none) they like; it costs less than $1000 to train a basic 7b model with a credit card at any cloud provider. Next they'll be putting signs on gas pumps instructing people not to douse themselves in gasoline and set themselves on fire.
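For a rough sense of that price point, here's a back-of-the-envelope sketch in Python. The GPU type, hourly rate, GPU count, and training time are assumptions for illustration (roughly what a LoRA-style fine-tune of a 7b model on rented cloud GPUs might look like), not quotes from any provider.

# Back-of-the-envelope cost for fine-tuning a 7b model on rented cloud GPUs.
# All numbers below are illustrative assumptions, not provider quotes.
gpu_hourly_rate = 2.50      # assumed USD/hour for one rented A100-class GPU
gpus = 4                    # assumed number of GPUs used in parallel
wall_clock_hours = 48       # assumed wall-clock time for a LoRA-style fine-tune

total_cost = gpu_hourly_rate * gpus * wall_clock_hours
print(f"Estimated cost: ${total_cost:,.2f}")   # -> Estimated cost: $480.00

Under those assumptions the bill lands comfortably under the $1000 figure; pretraining from scratch is a different story, but a basic fine-tune is credit-card territory.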

Comment Re: Remember? (Score 1) 79

I recall "premium" cable being HBO, Showtime, Cinemax, and probably a handful of has-beens and regional channels that got bought out. Regular cable was ~$32/mo, and adding premium (HBO) jumped it to ~$75, but they only had a few shows and the movies rotated. Not too different from how it is now, except it's streaming over the internet.

Comment Re:This is how it should be (Score 1) 14

My guess is phones and silicon will improve enough to run a 4b model on mobile by 2030. If not, those requests will get forwarded to some cloud service. I can see a world where 7-12b cloud models are the ad-supported free tier and you either pay for, or self-host, 70-600b models yourself. I expect processing requirements to drop by half with whatever breakthrough comes next, with a long tail of improvement after that; token verification (as in speculative decoding) was already a major one.
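Whether a 4b model fits on a phone is mostly memory arithmetic. A minimal sketch of that arithmetic in Python; the parameter counts and quantization levels are just illustrative, and it ignores KV cache and runtime overhead:

# Rough weight-memory footprint of an N-billion-parameter model at different quantizations.
# Real usage is somewhat higher once KV cache and runtime overhead are included.
def weights_gb(params_billions: float, bits_per_param: int) -> float:
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB

for params in (1, 4, 7):
    for bits in (16, 8, 4):
        print(f"{params}b model @ {bits}-bit: ~{weights_gb(params, bits):.1f} GB")

A 4b model at 4-bit quantization is roughly 2 GB of weights, which is why it's borderline on today's midrange phones but plausible as mobile RAM keeps growing.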

Comment Re:Outrage Fatigue is also a factor (Score 1) 183

Yep, I log in to some of these sites and see 3-4 tangentially related things that are clearly algorithmic rage-bait designed to drive engagement. I'm so tired of it. I log in for 10-15 minutes a handful of times a week to check in on friends and family, and then promptly uninstall the app.

Comment Re:But who will train the AI? (Score 1) 73

Probably within 3-10 years there will be a handful of open, legally copyright-free training sets anyone can use to train their own ~600b-class model on whatever architecture is state of the art at the time. Researchers are already putting together 7b-scale training sets like this. And LLMs can use tools like search now, so they won't always need the most up-to-date news or info baked in; they can fetch it on demand. Most of finance is analysis of documents, which LLMs have been excellent at for a while now.
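On the tool-use point, the pattern is simple enough to sketch. This is a minimal, hypothetical dispatcher in Python; the web_search function and the JSON tool-call format are made up for illustration, not any specific vendor's API:

import json

# Hypothetical tool the model can call instead of relying on stale training data.
def web_search(query: str) -> str:
    # Placeholder: a real implementation would hit a search API here.
    return f"(search results for: {query})"

TOOLS = {"web_search": web_search}

def handle_model_output(output: str) -> str:
    """If the model emitted a JSON tool call, run the tool; otherwise pass the text through."""
    try:
        call = json.loads(output)
    except json.JSONDecodeError:
        return output  # plain answer, no tool needed
    if not isinstance(call, dict):
        return output
    fn = TOOLS.get(call.get("tool"))
    if fn is None:
        return output
    return fn(**call.get("arguments", {}))

# Example: the model decides it needs fresh information.
print(handle_model_output('{"tool": "web_search", "arguments": {"query": "latest Fed rate decision"}}'))

The point is that freshness lives in the tool, not in the weights, so the training set doesn't have to chase the news cycle.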

Comment Re:This is how it should be (Score 1) 14

Google announced roughly the same thing, on-device models for phones, at their developer conference a couple weeks ago. The 1b model is fine for basic tasks like turning on lights, checking email, and social media notifications, and it runs okay on midrange phone hardware. The 4b model technically runs but at borderline unusable speed; still, it can answer questions like "how does a microwave work?" with moderate accuracy at a semi-scientific level, which is impressive. I suspect most devices will be able to run a 1b model, and by the end of the decade most everything will run at least a 4b model at conversational speed. There's a common assumption that all AI processing will happen in the datacenter; I suspect 80%+ of consumer LLM use will happen on the device, with more complex tasks routed to the cloud. For a lot of end users (high school students, etc.) 98%+ of requests will be on-device.
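The on-device/cloud split described above is basically a routing policy. A toy sketch in Python; the task list, length threshold, and model tiers are entirely assumed for illustration, not any real product's logic:

# Toy router: keep simple requests on the device's small model, send the rest to the cloud.
ON_DEVICE_TASKS = {"set_timer", "toggle_light", "read_notification", "summarize_email"}

def route(task: str, prompt: str) -> str:
    # Assumed heuristic: known-simple task plus a short prompt stays local.
    if task in ON_DEVICE_TASKS and len(prompt) < 500:
        return "on-device 1b/4b model"
    return "cloud model (70b+)"

print(route("toggle_light", "turn off the kitchen lights"))        # -> on-device 1b/4b model
print(route("research", "compare these three 40-page contracts"))  # -> cloud model (70b+)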
