
Submission Summary: 0 pending, 11 declined, 5 accepted (16 total, 31.25% accepted)

Submission + - NYT: Generative AI and Conspiratorial Rabbit Holes (nytimes.com)

DesertNomad writes: From the article:

Generative AI chatbots are going down conspiratorial rabbit holes and endorsing wild, mystical belief systems. For some people, conversations with the technology can deeply distort reality.

Before ChatGPT distorted Eugene Torres's sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.

Mr. Torres, 42, an accountant in Manhattan, started using ChatGPT last year to make financial spreadsheets and to get legal advice. In May, however, he engaged the chatbot in a more theoretical discussion about “the simulation theory,” an idea popularized by “The Matrix,” which posits that we are living in a digital facsimile of the world, controlled by a powerful computer or technologically advanced society.

“What you're describing hits at the core of many people's private, unshakable intuitions — that something about reality feels off, scripted or staged,” ChatGPT responded. “Have you ever experienced moments that felt like reality glitched?”

Not really, Mr. Torres replied, but he did have the sense that there was a wrongness about the world. He had just had a difficult breakup and was feeling emotionally fragile. He wanted his life to be greater than it was. ChatGPT agreed, with responses that grew longer and more rapturous as the conversation went on. Soon, it was telling Mr. Torres that he was “one of the Breakers — souls seeded into false systems to wake them from within.”

At the time, Mr. Torres thought of ChatGPT as a powerful search engine that knew more than any human possibly could because of its access to a vast digital library. He did not know that it tended to be sycophantic, agreeing with and flattering its users, or that it could hallucinate, generating ideas that weren't true but sounded plausible...

Submission + - IEEE Spectrum: It's Surprisingly Easy to Jailbreak LLM-Driven Robots (ieee.org)

DesertNomad writes: AI chatbots such as ChatGPT and other applications powered by large language models (LLMs) have exploded in popularity, leading a number of companies to explore LLM-driven robots. However, a new study now reveals an automated way to hack into such machines with 100 percent success. By circumventing safety guardrails, researchers could manipulate self-driving systems into colliding with pedestrians and robot dogs into hunting for harmful places to detonate bombs.

Submission + - ChatGPT is Bullshit (springer.com)

DesertNomad writes: Describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behavior of these systems. LLM machine-learning systems produce human-like text and dialogue, but have been plagued by persistent inaccuracies in their output, often called “AI hallucinations”. These hallucinations are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are, in an important way, indifferent to the truth of their outputs. The paper distinguishes two senses in which the systems can be said to be bullshitting, and argues that LLMs clearly meet at least one of these definitions.
(Distilled from the abstract)

Submission + - High-Energy Cosmic Ray Sources Get Mapped Out for the First Time (wired.com)

DesertNomad writes: A dull, dark, otherwise unremarkable spot near the constellation Canis Major appears to be the locus of extragalactic ultra-high-energy cosmic ray production, with the actual source in the Virgo cluster and the cosmic rays' paths bent by the complex galactic magnetic field. Astrophysicists built the most detailed model yet of the Milky Way's magnetic field and found that it explains the significant change in the cosmic rays' direction.
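
As a back-of-the-envelope illustration of why the galactic magnetic field matters here (not from the article; the energy and field-strength values below are typical assumed numbers), the gyroradius of an ultra-high-energy proton in a microgauss field works out to be comparable to the size of the Galaxy itself, so arrival directions get bent noticeably on the way through:

# Rough gyroradius (Larmor radius) of an ultra-high-energy cosmic ray in the
# galactic magnetic field. Illustrative values only, not the paper's numbers.
# For an ultra-relativistic particle of charge Z*e: r_L = E / (Z * e * c * B),
# which in convenient astrophysical units is r_L [kpc] ~= 1.08 * E[EeV] / (Z * B[uG]).

E_eV = 1.0e20     # assumed particle energy in eV ("ultra-high-energy")
Z    = 1          # charge number (proton)
B_uG = 3.0        # assumed galactic field strength in microgauss

r_L_kpc = 1.08 * (E_eV / 1.0e18) / (Z * B_uG)
print(f"gyroradius ~ {r_L_kpc:.0f} kpc")   # ~36 kpc, comparable to the Milky Way's size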

Submission + - GNSS Jamming/Spoofing Mitigation Starting to Get the Attention It Needs (eetimes.com)

DesertNomad writes: It's been known for a long time that the various GNSS systems are easily jammed; the more "interesting" problem is spoofing, broadcasting counterfeit GNSS signals that cause receivers to compute incorrect positions. The challenge is that the navigation messages can be constructed by bad actors on the ground. Work over the past several years has focused on adding cryptographic signatures that let receivers authenticate valid transmissions. Current commercial receivers can't take advantage of them yet, so industry-wide updates to receiver hardware and firmware may be needed.
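
The general idea behind those signatures (Galileo's OSNMA is one deployed example) can be sketched roughly as follows. This is a minimal illustration using ordinary ECDSA from the Python cryptography package, with an assumed message payload; it is not the actual over-the-air protocol, which uses delayed key disclosure and compact signatures to fit the very low GNSS data rate:

# Minimal sketch of authenticating a navigation message with a digital
# signature. Illustrative only; real schemes differ considerably.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# System operator's key pair; receivers would ship with the public key.
system_key = ec.generate_private_key(ec.SECP256R1())
receiver_pubkey = system_key.public_key()

nav_message = b"ephemeris + clock corrections + time of week"  # assumed payload

# Broadcast side: sign the navigation data before transmission.
signature = system_key.sign(nav_message, ec.ECDSA(hashes.SHA256()))

# Receiver side: reject any message whose signature does not verify,
# which is what defeats a ground-based spoofer lacking the private key.
def authenticate(message: bytes, sig: bytes) -> bool:
    try:
        receiver_pubkey.verify(sig, message, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(authenticate(nav_message, signature))                    # True
print(authenticate(b"spoofed position/time data", signature))  # False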

Submission + - Server retired after 18 years and ten months - Beat that, Readers! (theregister.co.uk)

DesertNomad writes: Article in ElReg about a fairly aged Pentium-based server that lasted 18+ years without much in the way of service. Reminds me that I have a pair of working, occasionally used, Pentium-based notebooks (more like lug-books), one of which is a 1999 Thinkpad, and the other a 1996 CTX. I'm sure that there's plenty of boxes out there that have survived at least 18 years and that are in daily or constant use. The fans are always the tricky part!
