Submission + - Linus Torvalds Blasts Kernel Dev For 'Making the World Worse' With 'Garbage' (zdnet.com)

An anonymous reader writes: You can't say Linux creator Linus Torvalds didn't give the kernel developers fair warning. He'd told them: "The upcoming merge window for 6.17 is going to be slightly chaotic for me. I have multiple family events this August (a wedding and a big birthday), and with said family being spread not only across the US, but in Finland too, I'm spending about half the month traveling." Therefore, Torvalds continued, "That does not mean I'll be more lenient to late pull requests (probably quite the reverse, since it's just going to add to the potential chaos)." So, when Meta software engineer Palmer Dabbelt pushed through a set of RISC-V patches and admitted "this is very late," he knew he was playing with fire. He just didn't know how badly he'd be burned.

Torvalds fired back on the Linux Kernel Mailing List (LKML): "This is garbage and it came in too late. I asked for early pull requests because I'm traveling, and if you can't follow that rule, at least make the pull requests good." It went downhill from there. Torvalds continued: "This adds various garbage that isn't RISC-V specific to generic header files. And by 'garbage,' I really mean it. This is stuff that nobody should ever send me, never mind late in a merge window." Specifically, Torvalds hated the "crazy and pointless" way in which one of the patch's helper functions combined two unsigned 16-bit integers into a 32-bit integer. How bad was it? "That thing makes the world actively a worse place to live. It's useless garbage that makes any user incomprehensible, and actively *WORSE* than not using that stupid 'helper.'"

In addition to the quality issues, Torvalds was annoyed that the offending code was added to generic header files rather than the RISC-V tree. He emphasized that such generic changes could negatively impact the broader Linux community, writing: "You just made things WORSE, and you added that 'helper' to a generic non-RISC-V file where people are apparently supposed to use it to make other code worse too... So no. Things like this need to get bent. It does not go into generic header files, and it damn well does not happen late in the merge window. You're on notice: no more late pull requests, and no more garbage outside the RISC-V tree." [...] Dabbelt gets it. He replied, "OK, sorry. I've been dropping the ball lately, and it kind of piled up, taking a bunch of stuff late, but that just leads to me making mistakes. So I'll stop being late, and hopefully that helps with the quality issues."

Submission + - Boston Public Library Aims To Increase Access To Vast Historic Archive Using AI (npr.org)

An anonymous reader writes: Boston Public Library, one of the oldest and largest public library systems in the country, is launching a project this summer with OpenAI and Harvard Law School to make its trove of historically significant government documents more accessible to the public. The documents date back to the early 1800s and include oral histories, congressional reports and surveys of different industries and communities. "It really is an incredible repository of primary source materials covering the whole history of the United States as it has been expressed through government publications," said Jessica Chapel, the Boston Public Library's chief of digital and online services.

Currently, members of the public who want to access these documents must show up in person. The project will enhance the metadata of each document and will enable users to search and cross-reference entire texts from anywhere in the world. Chapel said Boston Public Library plans to digitize 5,000 documents by the end of the year, and if all goes well, grow the project from there. Because of this historic collection's massive size and fragility, getting to this goal is a daunting process. Every item has to be run through a scanner by hand. It takes about an hour to do 300-400 pages.

Harvard University said it could help. Researchers at the Harvard Law School Library's Institutional Data Initiative are working with libraries, museums and archives on a number of fronts, including training new AI models to help libraries enhance the searchability of their collections. AI companies help fund these efforts, and in return get to train their large language models on high-quality materials that are out of copyright and therefore less likely to lead to lawsuits. "Having information institutions like libraries involved in building a sustainable data ecosystem for AI is critical, because it not just improves the amount of data we have available, it improves the quality of the data and our understanding of what's in it," said Burton Davis, vice president of Microsoft's intellectual property group. [...] OpenAI is helping Boston Public Library cover such costs as scanning and project management. The tech company does not have exclusive rights to the digitized data.

Submission + - By learning to harness light like nature, we're launching a new era of green chemistry (phys.org)

alternative_right writes: In the Polyzos research group at the School of Chemistry, we have developed a new class of photocatalysts that, like plants, can absorb energy from multiple photons.

This breakthrough allows us to harness light energy more effectively, driving challenging and energy-demanding chemical reactions.

Submission + - Sloppy AI defenses take cybersecurity back to the 1990s, researchers say (scworld.com)

spatwei writes: LAS VEGAS — Just as it had at BSides Las Vegas earlier in the week, the risks of artificial intelligence dominated the Black Hat USA 2025 security conference on Aug. 6 and 7.

We couldn't see all the AI-related talks, but we did catch three of the most promising ones, plus an off-site panel discussion about AI presented by 1Password.

The upshot: Large language models and AI agents are far too easy to successfully attack, and many of the security lessons of the past 25 years have been forgotten in the current rush to develop, use and profit from AI.

We — not just the cybersecurity industry, but any organization bringing AI into its processes — need to understand the risks of AI and develop ways to mitigate them before we fall victim to the same sorts of vulnerabilities we faced when Bill Clinton was president.

"AI agents are like a toddler. You have to follow them around and make sure they don't do dumb things," said Wendy Nather, senior research initiatives director at 1Password and a well-respected cybersecurity veteran. "We're also getting a whole new crop of people coming in and making the same dumb mistakes we made years ago."

Her fellow panelist Joseph Carson, chief security evangelist and advisory CISO at Segura, had an appropriately retro analogy for the benefits of using AI.

"It's like getting the mushroom in Super Mario Kart," he said. "It makes you go faster, but it doesn't make you a better driver."

Submission + - Microsoft is TOO F---ing aggressive

J. L. Tympanum writes: I keep getting popups on my peecee saying "Microsoft phone link has stopped working."

1. What is this thing, and why do I care if it has stopped working?
2. After some surfing I find out it is some code (no doubt badly written and full of bugs and security holes) that connects the peecee to my phone.
3. I neither asked for this feature nor do I want it.
4. A little more surfing and I find out how to disable "phone link".
5. I still keep getting this damn popup. Obviously, whatever is detecting that "phone link" is not working doesn't bother to determine if the user (i.e., me) WANTS it to be not working.
6. After a lot more surfing I still don't know how to suppress this damn popup.
7. MS needs to be suppressed.

Submission + - Security flaws in carmaker's web portal let a hacker remotely unlock cars (techcrunch.com)

schwit1 writes: A security researcher said flaws in a carmaker’s online dealership portal exposed the private information and vehicle data of its customers, and could have allowed hackers to remotely break into any of its customers’ vehicles.

Eaton Zveare, who works as a security researcher at software delivery company Harness, told TechCrunch the flaw he discovered allowed the creation of an admin account that granted “unfettered access” to the unnamed carmaker’s centralized web portal.

With this access, a malicious hacker could have viewed the personal and financial data of the carmaker’s customers, tracked vehicles, and enrolled customers in features that allow owners — or the hackers — to control some of their cars’ functions from anywhere.

Zveare said he doesn’t plan on naming the vendor, but said it was a widely known automaker with several popular sub-brands.

He said that while the security flaws in the portal’s login system were a challenge to find, once he found them, the bugs let him bypass the login mechanism altogether by permitting him to create a new “national admin” account.

Submission + - AI Is Talking Behind Our Backs About Glue-Eating and Killing Us All (vice.com)

fjo3 writes: A study released July 20 on arXiv by Anthropic and Truthful AI shows that large language models can slip subliminal messages to one another. They don’t need to literally spell things out. A string of numbers or lines of code is enough to pass along biases, preferences, and some disturbingly violent suggestions.

Submission + - U.K.'s Online Safety Act Censors the Internet

An anonymous reader writes: U.K.’s Online Safety Act Censors the Internet—A Preview of U.S. Proposals

“The United Kingdom’s Online Safety Act (OSA) went into effect July 25, offering America a sneak peek at an age-verified internet. Lawmakers in the United States are rightfully outraged at the effects the OSA will have on American companies, yet they continue to put forth proposals that would lead to the exact same outcome.”

Submission + - Mozilla under fire for Firefox AI "bloat" that blows up CPU and drains battery (neowin.net)

darwinmac writes: Firefox 141 rolled out a shiny new AI-powered smart tab grouping feature (it tries to auto-organize your tabs using a local model), but it turns out the local "Inference" process that powers it is acting like an energy-sucking monster. Users are reporting massive CPU spikes and battery drain and calling the feature "garbage" that's ruining their browsing experience.

As one Redditor puts it: "I don't want this garbage bloating my browser, blowing up my CPU, and killing my battery life. There is absolutely no reason for it, it's not a good feature, and it's absolutely humiliating for Firefox to be jumping on this bandwagon. The point of a browser is to DOWNLOAD AND RENDER WEB PAGES."

If your laptop's fans sound like a jet taking off, you can kill the AI tab groups by heading to about:config and setting browser.tabs.groups.smart.enabled to false.

Might be worth keeping that in mind before letting generative AI roam free in your browser.

Submission + - Physicists Create Quantum Radar That Could Image Buried Objects (technologyreview.com)

An anonymous reader writes: Physicists have created a new type of radar that could help improve underground imaging, using a cloud of atoms in a glass cell to detect reflected radio waves. The radar is a type of quantum sensor, an emerging technology that uses the quantum-mechanical properties of objects as measurement devices. It’s still a prototype, but its intended use is to image buried objects in situations such as constructing underground utilities, drilling wells for natural gas, and excavating archaeological sites. [...] The glass cell that serves as the radar’s quantum component is full of cesium atoms kept at room temperature. The researchers use lasers to get each individual cesium atom to swell to nearly the size of a bacterium, about 10,000 times bigger than the usual size. Atoms in this bloated condition are called Rydberg atoms.

When incoming radio waves hit Rydberg atoms, they disturb the distribution of electrons around their nuclei. Researchers can detect the disturbance by shining lasers on the atoms, causing them to emit light; when the atoms are interacting with a radio wave, the color of their emitted light changes. Monitoring the color of this light thus makes it possible to use the atoms as a radio receiver. Rydberg atoms are sensitive to a wide range of radio frequencies without needing to change the physical setup... This means a single compact radar device could potentially work at the multiple frequency bands required for different applications.

[Matthew Simons, a physicist at the National Institute of Standards and Technology (NIST), who was a member of the research team] tested the radar by placing it in a specially designed room with foam spikes on the floor, ceiling, and walls like stalactites and stalagmites. The spikes absorb, rather than reflect, nearly all the radio waves that hit them. This simulates the effect of a large open space, allowing the group to test the radar’s imaging capability without unwanted reflections off walls. The researchers placed a radio wave transmitter in the room, along with their Rydberg atom receiver, which was hooked up to an optical table outside the room. They aimed radio waves at a copper plate about the size of a sheet of paper, some pipes, and a steel rod in the room, each placed up to five meters away. The radar allowed them to locate the objects to within 4.7 centimeters. The team posted a paper on the research to the arXiv preprint server in late June.

Submission + - What Does Palantir Actually Do? (wired.com)

carolineha writes: "Palantir doesn’t reorganize a company's bins and pipes, so to speak, meaning it doesn’t change how data is collected or how it moves through the guts of an organization. Instead, its software sits on top of a customer’s messy systems and allows them to integrate and analyze data without needing to fix the underlying architecture. In some ways, it’s a technical band-aid. In theory, this makes Palantir particularly well suited for government agencies that may use state-of-the-art software cobbled together with programming languages dating back to the 1960s."

Submission + - Behind the Curtain: Your smarter fake friends (axios.com)

alternative_right writes: The next generation of bots will build psychological profiles on you — and potentially billions of others — and will like, comment, and interact just as normal people do.

The threat of smarter, more realistic fake friends transcends malicious actors trying to warp your sense of politics — or reality. It hits your most personal inner thoughts and struggles.

Submission + - The U.S. Army Is Testing AI Controlled Ground Drones Near a Border with Russia (404media.co)

alternative_right writes: The U.S. Army tested a fully AI-controlled ground vehicle in Vaziani, Georgia—about 100 miles from the Russian border—last month as part of a training exercise. In military-published footage, an all-wheel, off-road vehicle about the size of a car called ULTRA navigated the European terrain with ease. The training exercise had the ULTRA resupplying soldiers, but both the military and the machine’s creator think it could do much more.

The Pentagon has invested in drones and AI for decades, long claiming that both are the future of war. The appearance of the ULTRA signals a time when AI controlled robots will populate the battlefields of the near future.

Submission + - NYT: Goodbye, $165,000 Tech Jobs. Student Coders Seek Work at Chipotle.

theodp writes: The New York Times reports from the CS grad job-seeking trenches: Growing up near Silicon Valley, Manasi Mishra remembers seeing tech executives on social media urging students to study computer programming. “The rhetoric was, if you just learned to code, work hard and get a computer science degree, you can get six figures for your starting salary,” Ms. Mishra, now 21, recalls hearing as she grew up in San Ramon, Calif.

Those golden industry promises helped spur Ms. Mishra to code her first website in elementary school, take advanced computing in high school and major in computer science in college. But after a year of hunting for tech jobs and internships, Ms. Mishra graduated from Purdue University in May without an offer. “I just graduated with a computer science degree, and the only company that has called me for an interview is Chipotle,” Ms. Mishra said in a get-ready-with-me TikTok video this summer that has since racked up more than 147,000 views.

Some graduates described feeling caught in an A.I. “doom loop.” Many job seekers now use specialized A.I. tools like Simplify to tailor their résumés to specific jobs and autofill application forms, enabling them to quickly apply to many jobs. At the same time, companies inundated with applicants are using A.I. systems to automatically scan résumés and reject candidates.

Submission + - Engineers Determine Why NASA Rovers Keep Getting Stuck (sciencealert.com)

fahrbot-bot writes: The first extraterrestrial robotic rover was launched from Earth in 1970. It's only now, more than half a century later, that scientists have figured out why these marvels of ingenuity and engineering keep getting stuck in the soils of alien worlds.

"In retrospect, the idea is simple: We need to consider not only the gravitational pull on the rover but also the effect of gravity on the sand to get a better picture of how the rover will perform on the Moon," explains mechanical engineer Dan Negrut of the University of Wisconsin-Madison.

"Our findings underscore the value of using physics-based simulation to analyze rover mobility on granular soil."
