Submission Summary: 0 pending, 8 declined, 1 accepted (9 total, 11.11% accepted)

Submission + - SPAM: Absolute Zero Reasoner (AZR): the AI That Teaches Itself

oumuamua writes: Self-improving AI is the key to reaching the singularity, and another step in that direction has arrived, using Reinforcement Learning with Verifiable Rewards (RLVR):

Researchers from Tsinghua University, Beijing Institute for General Artificial Intelligence, and Pennsylvania State University have proposed an RLVR paradigm called Absolute Zero to enable a single model to autonomously generate and solve tasks that maximize its own learning progress without relying on any external data. Under this method, researchers have introduced the Absolute Zero Reasoner (AZR) that self-evolves its training curriculum and reasoning ability through a code executor that validates proposed code reasoning tasks and verifies answers, providing a unified source of verifiable reward to guide open-ended yet grounded learning.
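The core loop can be sketched in a few lines. This is a minimal illustration of the verifiable-reward idea only, not the authors' actual training code; all names here are hypothetical. A proposer emits a small code task, a Python executor runs it to obtain ground truth, and the solver's answer is rewarded only if it matches the executed result — so no human-labeled data is needed.

```python
# Minimal sketch of Reinforcement Learning with Verifiable Rewards (RLVR):
# a code executor supplies ground truth, so the reward requires no human labels.
# Names and structure are illustrative, not taken from the AZR codebase.

def execute(program: str, inp):
    """Run a proposed code task and return its output (the verifiable answer)."""
    scope = {}
    exec(program, scope)            # the task defines a function named f
    return scope["f"](inp)

def verifiable_reward(program: str, inp, proposed_answer) -> int:
    """Reward 1 only if the solver's answer matches the executor's ground truth."""
    return 1 if execute(program, inp) == proposed_answer else 0

# One round: the "proposer" emits a task, the "solver" proposes an answer.
task = "def f(x):\n    return x * x + 1"
print(verifiable_reward(task, 3, 10))   # correct answer -> reward 1
print(verifiable_reward(task, 3, 9))    # wrong answer   -> reward 0
```

In the paper's setting, both the task proposer and the solver are the same model, and the reward above is what drives its self-evolving curriculum.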

That may be hard to parse, so see Figure 2 in the paper (or, nicely formatted, on GitHub) for a visual representation. TL;DR: AI is training AI with no human data curation. There is a great video explanation on YouTube [spam URL stripped]?...
Link to Original Source

Submission + - Submitted Story Disappeared from Firehose

oumuamua writes: Dear Slashdot, a story I submitted, 'The phony comforts of AI skepticism', disappeared from the firehose within minutes of posting. I presume it did not fit some submission guideline — perhaps being more blog-like than news-like. If so, that caveat should be placed on the submission page so people do not waste time on submissions that will get rejected.

Submission + - The phony comforts of AI skepticism

oumuamua writes: A surprising number of technologically literate people feel that AI is "fake and sucks": an overhyped bubble that will soon deflate, as NFTs did. Casey Newton, writing on Platformer, argues that this view is misguided and lays out why he believes AI is "real and dangerous". He lists some of the arguments of the "AI is fake and sucks" crowd, such as:

Large language models built with transformers are not technically capable of creating superintelligence, because they are predictive in nature and do not understand concepts in the way that human beings do.

The AI skeptics tend to focus only on flaws:

Most people know these systems are flawed, and adjust their expectations and usage accordingly. The “AI is fake and sucks” crowd is hyper-fixated on the things it can’t do — count the number of r’s in strawberry, figure out that the Onion was joking when it told us to eat rocks — and weirdly uninterested in the things it can.

Some of the things AI did do in 2024:

Cut customer losses from scams in half through proactive detection, according to the Bank of Australia.
Preserved some of the 200 endangered Indigenous languages spoken in North America.
Accelerated drug discovery, offering the possibility of breakthrough protections against antibiotic resistance.
Detected the presence of tuberculosis by listening to a patient’s voice.
Reproduced an ALS patient’s lost voice.
Enabled persecuted Venezuelan journalists to resume delivering the news via digital avatars.
Pieced together fragments of the epic of Gilgamesh, one of the world’s oldest texts.
Caused hundreds of thousands of people to develop intimate relationships with chatbots.
Created engaging and surprisingly natural-sounding podcasts out of PDFs.
Created poetry that participants in a study say they preferred to human-written poetry in a blind test. (This may be because people prefer bad art to good art, but still.)

He concludes that:

Ultimately, both the “fake and sucks” and “real and dangerous” crowds agree that AI could go really, really badly. To stop that from happening though, the “fake and sucks” crowd needs to accept that AI is already more capable and more embedded in our systems than they currently admit.

There have been many responses to Newton; perhaps the most interesting is from Benjamin Riley, who provides a comprehensive overview of the types of AI skepticism and the people involved in "Who and What comprise AI Skepticism?"

Newton is wrong both on the Who of AI Skepticism, in terms of who is participating in this nascent intellectual movement, and the What, in terms of what people within this movement actually believe. I appreciate that Newton covers technology broadly, not just AI, and thus is unfamiliar with the nuances of AI Skepticism as a movement. But in this instance he didn’t just oversimplify things, he failed to grasp its core essence.


Submission + - By default, capital will matter more than ever after AGI (lesswrong.com)

oumuamua writes: Many people disagree on when AGI/ASI is coming, but few doubt that it is coming. It is interesting to analyze what happens if nothing is done beforehand to prepare for it — that is, if it takes the 'default' course in the current system.

The key economic effect of AI is that it makes capital a more and more general substitute for labour. There's less need to pay humans for their time to perform work, because you can replace that with capital. (e.g. data centres running software replaces a human doing mental labour).

As jobs get replaced, the State will have to enact UBI, but what lifestyle will it support? Currently the State looks after citizens because they make the State strong; this incentive could go away after AGI.

With labour-replacing AI, the incentives of states—in the sense of what actions states should take to maximise their competitiveness against other states and/or their own power—will no longer be aligned with humans in this way. The incentives might be better than during feudalism. During feudalism, the incentive was to extract as much as possible from the peasants without them dying. After labour-replacing AI, humans will be less a resource to be mined and more just irrelevant. However, spending fewer resources on humans and more on the AIs that sustain the state's competitive advantage will still be incentivised.

Sufficiently strong AI could make entrepreneurship and startups obsolete.

VC funds might be able to directly convert money into hundreds of startup attempts all run by AIs, without having to go through the intermediate route of finding human entrepreneurs to manage the AIs for them.

This means AI becomes the main driver of wealth, eliminating upward mobility in society.

In a worst case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labour-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.

It could be a dire future; however, you don't need to wait for the default outcome to unfold — you can shape how it unfolds.

But it's also true that right now is a great time to do something ambitious. Robin Hanson calls the present "the dreamtime", following a concept in Aboriginal myths: the time when the future world order and its values are still liquid, not yet set in stone.


Submission + - World Robot Conference 2024 wraps up in Beijing

oumuamua writes: Elon Musk's Optimus robot was there, but only displayed statically in a case. KrASIA reports that 27 other humanoid robots were on display:

These robots captivated audiences with demonstrations ranging from industrial training scenarios to increasingly versatile commercial applications. Visitors were treated to interactive and entertaining displays, such as robots fetching medicine, playing the piano, washing clothes, and making coffee.

EX-Robot, a company that specializes in creating convincing bionic humanoid robots for museums and institutions, went viral when an attendee posted a video of their fembots. People had trouble telling the real robots from the cosplay actors.

Watch this comprehensive and informative overview of WRC 2024 to see these robots in action — additional footage of the fembots starts here.

Submission + - Byron and Dresden Nuclear Plant Hostage Crisis Resolved

oumuamua writes: Two nuclear plants were almost shut down prematurely because of a lack of profit, but were saved at the last minute by the Illinois Senate.

The Illinois Senate has approved a clean energy deal which includes a subsidy for Exelon to keep the Byron nuclear plant in operation, after the House passed it last week. The plan gives Exelon $694 million to keep the Byron and Dresden plants operational. Exelon had previously begun drawing down the Byron plant with an anticipated retirement date of Monday, September 13th, and had warned that once the nuclear fuel had been depleted, it could not be refueled after that date. Exelon said Monday that with the passage of the bill, it was preparing to refuel both plants.

With the urgency of the climate crisis clearer than ever, no nuclear plant should be closed prematurely while coal plants continue operating in the same state. Many celebrated the Senate move; however, others have criticized Exelon's actions.

“Exelon first started what we’ve dubbed the nuclear hostage crisis. It’s a pattern where a utility will for whatever reasons threaten closure, which gets the workers very upset, then the local community whose tax base depends on it gets upset, they pressure their legislators, and then the legislators grant bailouts,” said Dave Kraft, head of the Nuclear Energy Information Service.
