Most importantly... (Score 2)
We MUST be able to inspect and age verify every AI slop porn image to protect the fictional children!
-
If you read the article carefully, they are talking about lenses THINNER than a hair. Several of the posts here assume the width/radius of the lenses is this small, a reasonable mistake given the way the article was written. A radius that small would severely reduce their light-gathering ability, requiring very bright light or very long exposure times, or else producing very dim images.
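A quick back-of-the-envelope sketch of why (the radii below are my assumptions, not numbers from the article): light gathered scales with aperture area, so with radius squared.

# Hypothetical radii for illustration only (not from the article):
hair_radius_m   = 50e-6   # ~50 micrometers, roughly a human hair's radius
camera_radius_m = 2e-3    # ~2 mm, a typical small-camera aperture radius

# Light gathered is proportional to aperture area (pi * r^2), so the ratio
# of two apertures' light is the square of the ratio of their radii.
area_ratio = (camera_radius_m / hair_radius_m) ** 2
print(f"A hair-thin aperture gathers ~{area_ratio:,.0f}x less light,")
print(f"so it needs ~{area_ratio:,.0f}x the exposure time for equal brightness.")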
-
what kind of behavior would demonstrate that LLMs did have understanding?
An LLM would need to act like an understander -- the essence of the Turing Test. Exactly what that means is a complex question. And it's a necessary but not sufficient condition. But we can easily provide counterexamples where the LLM is clearly not an understander. Like this from the paper:
When prompted with the CoT prefix, the modern LLM Gemini responded: "The United States was established in 1776. 1776 is divisible by 4, but it's not a century year, so it's a leap year. Therefore, the day the US was established was in a normal year." This response exemplifies a concerning pattern: the model correctly recites the leap year rule and articulates intermediate reasoning steps, yet produces a logically inconsistent conclusion (i.e., asserting 1776 is both a leap year and a normal year).
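For reference, a minimal sketch of the standard Gregorian leap-year rule the model recites; applying it to 1776 shows the model's own premises entail the opposite of its conclusion.

def is_leap(year: int) -> bool:
    # Gregorian rule: divisible by 4, except century years not divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(1776))  # True: 1776 is a leap year, so "normal year" contradicts the premises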
Might I suggest "Blind Lake" by Robert Charles Wilson? I read it when it first came out two decades ago, but it reads as strangely similar to what we're seeing now, and the sinister edge is there, too.
Does this also say something more about the "coffee nap"?
The issue isn't that AI doesn't need any regulation. It's that we have no idea yet how to regulate it in a way that makes sense. All regulation now would do is create hurdles that keep out small competitors and open-source alternatives, and centralize power in the few people deciding what we get to do with AI. That's the truly scary outcome. Right now, regulation would just end up being based on ideas from sci-fi films.
I mean, the real problems the internet created, the ones we care about now, aren't those that seemed important in the 90s. (They weren't wrong that people would find porn, but it doesn't seem like a big deal anymore.)
Seen YouTube lately? I just watched a video on how to make nitroglycerin. Stuff like this has been available for over a decade.
Back in the days when home solar systems still mostly used lead-acid batteries - which in some cases of degradation could be repaired, at least partially, if you had some good strong and reasonably pure sulfuric acid - I viewed a YouTube video on how to make it. (From Epsom salts, by electrolysis, using a flowerpot and some carbon rods from old large dry cells.)
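(For the curious: a sketch of the chemistry as I recall it, using the standard electrolysis half-reactions; the flowerpot is the porous separator that keeps the acid forming at the anode away from the hydroxide precipitating at the cathode. Treat this as my reconstruction, not the video's exact recipe.)

\begin{align*}
\text{cathode:}\quad & 2\,\mathrm{H_2O} + 2e^- \rightarrow \mathrm{H_2}\!\uparrow + 2\,\mathrm{OH^-}, \qquad \mathrm{Mg^{2+}} + 2\,\mathrm{OH^-} \rightarrow \mathrm{Mg(OH)_2}\!\downarrow \\
\text{anode:}\quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2}\!\uparrow + 4\,\mathrm{H^+} + 4e^- \\
\text{net:}\quad & 2\,\mathrm{MgSO_4} + 6\,\mathrm{H_2O} \rightarrow 2\,\mathrm{Mg(OH)_2} + 2\,\mathrm{H_2SO_4} + 2\,\mathrm{H_2} + \mathrm{O_2}
\end{align*}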
For months afterward YouTube "suggested" I'd be interested in videos from a bunch of Islamic religious leaders. (This while people were wondering how Islamic terrorists were using the Internet to recruit among high-school out-group nerds.)
Software - AI and otherwise - often creates unintended consequences. B-)
Trump's second administration is ripping up parts of the country’s cyber playbook and taking many of its best players off the field, from threat hunters and election defenders at CISA to the leader of the NSA and Cyber Command. Amid a barrage of severe attacks like Volt Typhoon and rising trade tensions, lawmakers, former officials, and cyber professionals say that sweeping and confusing cuts are making the country more vulnerable and emboldening its adversaries. “There are intrusions happening now that we either will never know about or won’t see for years because our adversaries are undoubtedly stepping up their activity, and we have a shrinking, distracted workforce,” says Jeff Greene, a cybersecurity expert who has held top roles at CISA and the White House.
It's kind of surprising it isn't all squealing nonsense.
Give it a little more time. B-)
We had a lab known to be unsafe. A lab known to be performing gain-of-function research on the specific type of virus that emerged in public. A lab in close proximity to the market to which the outbreak was traced.
We also had rumors that low-paid lab techs supplemented their income by selling test animals they'd been ordered to destroy to the nearby wet market.
So current AI training procedures - which amount to "read all the internet you can" - fall for astroturf campaigns. Why am I not surprised?