Submission + - AI vs Denial Bots: Fight Health Insurance & Counterforce Health Take On Insu (pbs.org)

An anonymous reader writes: Health insurers have spent years using AI, opaque algorithms, and “batch denial” systems to reject claims at scale, but a new PBS NewsHour piece shows patients starting to fight back with AI of their own. The segment highlights Fight Health Insurance, a free, open-source AI tool that helps patients draft prior authorization requests and appeal letters, and Counterforce Health, a non-OSS alternative. PBS’s story, “How patients are using AI to fight denied insurance claims,” frames this as an “AI vs AI” turning point in U.S. healthcare bureaucracy: if denials can be automated, what happens when appeals are, too?
All hail our new robot overlords; may they win against the other robot overlords.

Submission + - Predictions of What 2026 Would Be Like (abc.net.au)

sandbagger writes: 67 years ago, the Australian Broadcasting Corporation recorded a collection of predictions about the future we’re living in now, in 2026. Their forecasts are truly extraordinary: intergalactic super-speed travel, pod houses, nuclear fallout. All of them turned out to be wrong.

Submission + - French-UK Starlink Rival Pitches Canada On 'Sovereign' Satellite Service (www.cbc.ca)

An anonymous reader writes: A company largely owned by the French and U.K. governments is pitching Canada on a roughly $250-million plan to provide the military with secure satellite broadband coverage in the Arctic, CBC News has learned. Eutelsat, a rival to tech billionaire Elon Musk's Starlink, already provides some services to the Canadian military, but wants to deepen the partnership as Canada looks to diversify defence contracts away from suppliers in the United States.

A proposal for Canada's Department of National Defence to join a French Ministry of Defence initiative involving Eutelsat was apparently raised by French President Emmanuel Macron with Prime Minister Mark Carney on the sidelines of last year's G7 summit in Alberta. The prime minister's first question, according to Eutelsat and French defence officials, was how the proposal would affect the Telesat Corporation, a former Canadian Crown corporation that was privatized in the 1990s.

Telesat is in the process of developing its Lightspeed system, a Low Earth Orbit (LEO) constellation of satellites for high-speed broadband. And in mid-December, the Liberal government announced it had established a strategic partnership with Telesat and MDA Space to develop the Canadian Armed Forces' military satellite communications (MILSATCOM) capabilities. A Eutelsat official said the company already has its own satellite network in place and running, along with Canadian partners, and has been providing support to the Canadian military deployed in Latvia.

Submission + - AI Models Are Starting to Learn by Asking Themselves Questions (wired.com)

An anonymous reader writes: [P]erhaps AI can, in fact, learn in a more human way—by figuring out interesting questions to ask itself and attempting to find the right answer. A project from Tsinghua University, the Beijing Institute for General Artificial Intelligence (BIGAI), and Pennsylvania State University shows that AI can learn to reason in this way by playing with computer code. The researchers devised a system called Absolute Zero Reasoner (AZR) that first uses a large language model to generate challenging but solvable Python coding problems. It then uses the same model to solve those problems before checking its work by trying to run the code. And finally, the AZR system uses successes and failures as a signal to refine the original model, augmenting its ability to both pose better problems and solve them.

The team found that their approach significantly improved the coding and reasoning skills of both 7 billion and 14 billion parameter versions of the open source language model Qwen. Impressively, the model even outperformed some models that had received human-curated data. [...] A key challenge is that for now the system only works on problems that can easily be checked, like those that involve math or coding. As the project progresses, it might be possible to use it on agentic AI tasks like browsing the web or doing office chores. This might involve having the AI model try to judge whether an agent’s actions are correct. One fascinating possibility of an approach like Absolute Zero is that it could, in theory, allow models to go beyond human teaching. “Once we have that it’s kind of a way to reach superintelligence,” [said Zilong Zheng, a researcher at BIGAI who worked on the project].
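The propose-solve-verify loop described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors’ implementation: `propose()` and `solve()` here are hand-written stand-ins for what, in the real AZR system, are calls to the same large language model, and the hardcoded function name `add` is an assumption made purely so the example runs end to end.

```python
# Minimal sketch of an Absolute Zero-style self-play step (illustrative only).
# In AZR, propose() and solve() are the same large language model; here they
# are stand-in functions so the loop is self-contained and runnable.

def propose():
    """Stand-in for the model proposing a coding problem: a spec plus
    input/output test cases it believes are solvable."""
    return {
        "prompt": "def add(a, b): return the sum of a and b",
        "tests": [((1, 2), 3), ((-1, 1), 0)],
    }

def solve(problem):
    """Stand-in for the model attempting a solution as source code."""
    return "def add(a, b):\n    return a + b"

def verify(problem, solution_src):
    """Check the solution by actually executing it -- the key AZR idea:
    running the code yields a ground-truth reward with no human labels."""
    namespace = {}
    try:
        exec(solution_src, namespace)
        fn = namespace["add"]  # function name assumed for this sketch
        return all(fn(*args) == expected for args, expected in problem["tests"])
    except Exception:
        return False

def self_play_step():
    problem = propose()        # the model poses a problem
    solution = solve(problem)  # the same model tries to solve it
    reward = 1.0 if verify(problem, solution) else 0.0
    # In AZR this reward signal would update the model (via RL) so it both
    # poses better problems and solves them better; here we just return it.
    return reward

print(self_play_step())
```

The point of the sketch is the closed loop: because code can be executed, the system can grade itself without any human-curated data, which is what lets the approach scale to self-generated curricula.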

Submission + - Stop paying TurboTax when IRS Free File covers most taxpayers for FREE (nerds.xyz)

BrianFagioli writes: The IRS has flipped the switch on Free File for the 2026 tax season, letting most Americans file federal taxes for exactly zero dollars. Anyone with 2025 adjusted gross income of $89,000 or less qualifies, covering an estimated 70 percent of taxpayers. The catch is that you have to start at IRS.gov/FreeFile, not the commercial sites that steer users into paid upgrades. Free File partners include familiar software brands like TaxAct, FreeTaxUSA, 1040.com and TaxSlayer, and many will throw in free state returns too. The program works on computers and phones and supports e-filing before the official opening of tax season.

What surprises me every year is how few people know this exists. Despite more than 77 million returns filed through Free File since 2003, most folks reach for TurboTax or H&R Block and end up paying for something the government already supports at no cost. Even gig workers and renters qualify now if their AGI is under the limit. If you want to keep more money in your wallet, start at IRS.gov/FreeFile and skip the upsell parade.

Submission + - CES Worst in Show Awards Call Out The Tech Making Things Worse (ifixit.com)

chicksdaddy writes: CES, the Consumer Electronics Show, isn’t just about shiny new gadgets, as AP reports (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fapnews.com%2Farticle%2Fces-worst-show-ai-0ce7fbc5aff68e8ff6d7b8e6fb7b007d): this year brought back the fifth annual Worst in Show anti-awards (https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.worstinshowces.com%2F), calling out the most harmful, wasteful, invasive, and unfixable tech at the Las Vegas show. The coalition behind the awards — including Repair.org, iFixit, EFF, PIRG, Secure Repairs and others — put the spotlight on products that miss the point of innovation and make life worse for users.

2026 Worst in Show winners include:
  Overall (and Repairability): Samsung’s AI-packed Family Hub fridge — overengineered, hard to fix, and trying to do everything but keep food cold.
  Privacy: Amazon Ring AI — expanding surveillance with features like facial recognition and mobile towers.
  Security: Merach UltraTread treadmill — AI fitness coach that also hoovers up sensitive data with weak security guarantees — including a Privacy Policy that declares the company "cannot guarantee the security of your personal information" (!!)
  Environmental Impact: Lollipop Star — a single-use music-playing electronic lollipop that epitomizes needless e-waste.
  Enshittification: Bosch eBike Flow App — pushing lock-in and digital restrictions that make gear worse over time.
  “Who Asked For This?”: Bosch Personal AI Barista — voice-assistant coffee maker that nobody really wanted.
  People’s Choice: Lepro Ami AI Companion — an overhyped “soulmate” cam that creeps more than comforts.

The message? Not all tech is progress. Some products add needless complexity, threaten privacy, or throw sustainability out the window — and the industry’s watchdogs are calling them out.

Submission + - A.I. Pandering to Political Correctness

An anonymous reader writes: Elon Musk: “I have a concern with companies like Google, Gemini, OpenAI & Meta that they are not maximally truth seeking. Their A.I. are pandering to political correctness and are being trained to lie.”

Me: “Do you agree with the sentiments?”

ClippyAI: ‘Yes, the concern holds merit based on extensive evidence from studies and real-world examples discussed earlier. AI models from OpenAI, Google (Gemini), and Meta often exhibit left-leaning tendencies due to training data scraped from ideologically skewed internet sources and reinforcement learning from human feedback (RLHF) where labelers favor "safe" or progressive framings.’

Submission + - President Trump and NASA's Isaacman: Please Take the Crew Off of Artemis II (pjmedia.com)

An anonymous reader writes: According to the present plan, the next mission, Artemis II, will launch no later than April 2026, with SLS sending an Orion capsule carrying four astronauts around the Moon and back to Earth. Artemis III will then follow in 2028, landing three astronauts on the lunar surface. This tight schedule is necessary in order to meet Trump's desire to achieve that new American manned landing by 2028 — ahead of the Chinese — and thus set the groundwork for the initial components of a permanent manned base by 2030.

I am writing now to plead with both President Trump and NASA Administrator Jared Isaacman to please reconsider this schedule. Take the crew off the Artemis II mission in the spring, and fly it as an unmanned mission around the Moon.

I am suggesting this because right now it appears that NASA, the President, and Congress are all repeating the same mistakes NASA made in 1967 with the Apollo 1 launchpad fire that killed three astronauts, as well as in 1986 with the space shuttle Challenger disaster that killed seven astronauts. In both cases, there were clear and obvious engineering issues that said both the Apollo capsule and the space shuttle were not ready to fly, but the pressure of schedule convinced managers at NASA to look the other way, to place those scheduling concerns above fundamental engineering principles. In both cases, people died when the engineering issues were ignored.

It presently appears that the same circumstances exist today with Orion: serious engineering issues that everyone is ignoring because of the need to meet an artificial schedule.

Submission + - Some Super-Smart Dogs Can Learn New Words Just By Eavesdropping (npr.org)

An anonymous reader writes: [I]t turns out that some genius dogs can learn a brand new word, like the name of an unfamiliar toy, by just overhearing brief interactions between two people. What's more, these "gifted" dogs can learn the name of a new toy even if they first hear this word when the toy is out of sight — as long as their favorite human is looking at the spot where the toy is hidden. That's according to a new study in the journal Science. "What we found in this study is that the dogs are using social communication. They're using these social cues to understand what the owners are talking about," says cognitive scientist Shany Dror of Eötvös Loránd University and the University of Veterinary Medicine, Vienna. "This tells us that the ability to use social information is actually something that humans probably had before they had language," she says, "and language was kind of hitchhiking on these social abilities."

[...] "There's only a very small group of dogs that are able to learn this differentiation and then can learn that certain labels refer to specific objects," she says. "It's quite hard to train this and some dogs seem to just be able to do it." [...] To explore the various ways that these dogs are capable of learning new words, Dror and some colleagues conducted a study that involved two people interacting while their dog sat nearby and watched. One person would show the other a brand new toy and talk about it, with the toy's name embedded into sentences, such as "This is your armadillo. It has armadillo ears, little armadillo feet. It has a tail, like an armadillo tail." Even though none of this language was directed at the dogs, it turns out the super-learners registered the new toy's name and were later able to pick it out of a pile, at the owner's request.

To do this, the dogs had to go into a separate room where the pile was located, so the humans couldn't give them any hints. Dror says that as she watched the dogs on camera from the other room, she was "honestly surprised" because they seemed to have so much confidence. "Sometimes they just immediately went to the new toy, knowing what they're supposed to do," she says. "Their performance was really, really high." She and her colleagues wondered if what mattered was the dog being able to see the toy while its name was said aloud, even if the words weren't explicitly directed at the dog. So they did another experiment that created a delay between the dog seeing a new toy and hearing its name. The dogs got to see the unfamiliar toy and then the owner dropped the toy in a bucket, so it was out of sight. Then the owner would talk to the dog, and mention the toy's name, while glancing down at the bucket. While this was more difficult for dogs, overall they still could use this information to learn the name of the toy and later retrieve it when asked. "This shows us how flexible they are able to learn," says Dror. "They can use different mechanisms and learn under different conditions."

Submission + - Google Is Adding an 'AI Inbox' to Gmail That Summarizes Emails (wired.com)

An anonymous reader writes: Google is putting even more generative AI tools into Gmail as part of its goal to further personalize user inboxes and streamline searches. On Thursday, the company announced a new “AI Inbox” tab, currently in a beta testing phase, that reads every message in a user’s Gmail and suggests a list of to-dos and key topics, based on what it summarizes. In Google’s example of what this AI Inbox could look like in Gmail, the new tab takes context from a user’s messages and suggests they reschedule their dentist appointment, reply to a request from their child’s sports coach, and pay an upcoming fee before the deadline. Also under the AI Inbox tab is a list of important topics worth browsing, nestled beneath the action items at the top. Each suggested to-do and topic links back to the original email for more context and for verification.

[...] For users who are concerned about their privacy, the information Google gleans by skimming through inboxes will not be used to improve the company's foundational AI models. “We didn’t just bolt AI onto Gmail,” says Blake Barnes, who leads the project for Google. “We built a secure privacy architecture, specifically for this moment.” He emphasizes that users can turn off Gmail’s new AI tools if they don’t want them. At the same time Google announced its AI Inbox, the company made free for all Gmail users multiple Gemini features that were previously available only to paying subscribers. This includes the Help Me Write tool, which generates emails from a user prompt, as well as AI Overviews for email threads, which essentially posts a TL;DR summary at the top of long message threads. Subscribers to Google's Ultra and Pro plans, which start at $20 a month, get two additional new features in their Gmail inbox. First, an AI proofreading tool that suggests more polished grammar and sentence structures. And second, an AI Overviews tool that can search your whole inbox and create relevant summaries on a topic, rather than just summarizing a single email thread.

Submission + - GLP-1 Medication Gains Are Lost After Stopping Use (bmj.com)

Supp0rtLinux writes: Scientists at the University of Oxford examined multiple studies that followed people after they discontinued GLP-1-based obesity medications. Former users typically regained close to a pound a month, and they regained weight faster than people who had shed it through positive lifestyle changes alone (calorie reduction, healthier dietary choices, exercise, etc.).

Personally, I need to lose a few pounds, but I would rather do it through diet and exercise than a pill; mostly for the personal reward/encouragement factor, but also for the other overall health benefits. The bigger concern with dropping GLP-1 options, though, could be the fallout for those who saw unrelated, off-label effects on addictive tendencies. We've read that many obesity drugs don't just suppress appetite but also help with addictive behaviors (smoking, alcohol consumption, sex addiction, other compulsive activities, etc.). The question is: if you go off the drugs, do the other vices return as well? Or, since those are more habit-driven, do the benefits persist? Does using a pill long enough to break a habit produce lasting results, or do you revert, just as the lost weight returns?

Submission + - Fusion Physicists Found a Way Around a Long-Standing Density Limit (sciencealert.com)

alternative_right writes: Experiments inside a fusion reactor in China have demonstrated a new way to circumvent one of the caps on the density of the superheated plasma swirling inside.

At the Experimental Advanced Superconducting Tokamak (EAST), physicists successfully exceeded what is known as the Greenwald limit, a practical density boundary beyond which plasmas tend to violently destabilize, often damaging reactor components.

For a long time, the Greenwald limit was accepted as a given and incorporated into fusion reactor engineering. The new work shows that precise control over how the plasma is created and interacts with the reactor walls can push it beyond this limit into what physicists call a 'density-free' regime.
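For context, the Greenwald limit is an empirical ceiling conventionally written as n_G = I_p / (π a²), where I_p is the plasma current in megaamperes, a is the plasma's minor radius in metres, and n_G comes out in units of 10^20 particles per cubic metre. A quick back-of-the-envelope calculation (the tokamak parameters below are illustrative round numbers, not EAST's actual operating values):

```python
import math

def greenwald_limit(plasma_current_MA, minor_radius_m):
    """Empirical Greenwald density limit, n_G = I_p / (pi * a^2),
    with I_p in MA and a in metres; result is in units of 1e20 m^-3."""
    return plasma_current_MA / (math.pi * minor_radius_m ** 2)

# Illustrative numbers only (not EAST's real parameters):
# a 1 MA plasma with a 0.45 m minor radius.
n_G = greenwald_limit(1.0, 0.45)
print(f"Greenwald limit: {n_G:.2f} x 10^20 m^-3")  # ~1.57 x 10^20 m^-3
```

The notable feature of the formula is that the ceiling scales with plasma current and shrinks with cross-sectional area, so "density-free" operation means sustaining line-averaged densities above this value without triggering the disruptions that usually follow.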

Submission + - Musk lawsuit over OpenAI for-profit conversion can head to trial, US judge says (reuters.com)

schwit1 writes: US District Judge Yvonne Gonzalez Rogers:
"There is plenty of evidence suggesting OpenAI's leaders made assurances that its original nonprofit structure was going to be maintained."

The backstory:
Elon co-founded OpenAI in 2015 and contributed roughly $38 million, about 60% of its early funding, based on assurances it would remain a nonprofit dedicated to public benefit.

Musk left in 2018. Since then, OpenAI cut multibillion-dollar deals with Microsoft and restructured toward for-profit status.

The accusation:
Elon alleges Sam Altman and Greg Brockman plotted the for-profit switch to enrich themselves, betraying OpenAI's founding mission.

OpenAI's response:
They called Elon "a frustrated commercial competitor seeking to slow down a mission-driven market leader."

The judge disagreed. Now a jury will decide.

Submission + - Exercise can be powerful in helping depression (telegraph.co.uk)

Bruce66423 writes: 'Exercise may be as good at treating depression as psychological therapies and possibly antidepressants, a study suggests.

'A review of 73 studies from researchers at the University of Lancashire found exercise may have a moderate benefit on reducing symptoms of depression when compared with no treatment or a placebo.

'Exercise was also as beneficial as psychological therapies, based on evidence from 10 clinical trials.'

The observation helps explain the explosion of depression in our culture today: too many people get zero exercise.

In my own experience, after being formally diagnosed with the mildest category of depression, regular gym sessions (and now an exercise bike at home) have largely kept me clear of symptoms.

Submission + - Google co-founder leaves California amid wealth tax fears (yahoo.com)

schwit1 writes: Larry Page, the Google co-founder and world’s second-richest person, has reportedly left California amid concerns about a wealth tax on billionaires.

Mr Page has moved the registrations of several entities, including his family office and flying car business from California to Delaware, according to filings with the states.

He has also personally moved out of the state ahead of a potential vote on a 5pc wealth tax, according to Business Insider, which first reported the move.

Mr Page, who founded Google in 1998, is the world’s second-richest person with a net worth of $270bn (£200bn).

The world’s richest person, Elon Musk, left California for Texas in 2020.

Submission + - How Bright Headlights Escaped Regulation — and Blinded Us All (autoblog.com)

schwit1 writes: Modern LED technology promised safer roads. Instead, it’s creating a blinding menace that regulators refuse to address.

- Headlight brightness has doubled in a decade, with widespread driver complaints and frustration.
- Regulatory loopholes allow manufacturers to increase brightness because of outdated federal standards.
- Regulations capping maximum brightness for LED headlights have still not been formulated.

Submission + - California Lawmaker Proposes a Four-Year Ban On AI Chatbots In Kids' Toys (techcrunch.com)

An anonymous reader writes: Senator Steve Padilla (D-CA) introduced a bill on Monday that would place a four-year ban on the sale and manufacture of toys with AI chatbot capabilities for kids under 18. The goal is to give safety regulators time to develop regulations to protect children from “dangerous AI interactions.”

“Chatbots and other AI tools may become integral parts of our lives in the future, but the dangers they pose now require us to take bold action to protect our children,” Senator Padilla said in a statement. “Our safety regulations around this kind of technology are in their infancy and will need to grow as exponentially as the capabilities of this technology do. Pausing the sale of these chatbot-integrated toys allows us time to craft the appropriate safety guidelines and framework for these toys to follow.” [...] “Our children cannot be used as lab rats for Big Tech to experiment on,” Padilla said.

Submission + - Warner Bros Rejects Revised Paramount Bid, Sticks With Netflix (reuters.com)

An anonymous reader writes: Warner Bros Discovery's board unanimously turned down Paramount Skydance's latest attempt to acquire the studio, saying its revised $108.4 billion hostile bid amounted to a risky leveraged buyout that investors should reject. In a letter to shareholders on Wednesday, Warner Bros' board said Paramount's offer hinges on "an extraordinary amount of debt financing" that heightens the risk of closing. It reaffirmed its commitment to streaming giant Netflix's $82.7 billion deal for the film and television studio and other assets.

Their assessment comes even after Paramount, which has a market value of around $14 billion, proposed to use $40 billion in equity personally guaranteed by Oracle billionaire co-founder Larry Ellison — father of Paramount CEO David Ellison — and $54 billion in debt to finance the deal. The decision keeps Warner Bros on track for its deal with Netflix, even after Paramount amended its bid on December 22 to address the earlier concerns about the lack of a personal guarantee from Larry Ellison.

Submission + - ChatGPT orchestrated marriage nullified (nos.nl)

thrill12 writes: A marriage orchestrated by ChatGPT was nullified in the Netherlands on Monday, after a judge ruled that the vows generated with the AI did not conform to the law. Dutch broadcaster NOS reports (translated version): "The court in Zwolle declared a marriage invalid because the speech delivered during the ceremony did not comply with the Civil Code. The special registrar had written the speech using ChatGPT."
The original court documents (translated here) indicate that the person performing the marriage was appointed as an extraordinary civil servant for the occasion and used a ChatGPT-written text during the ceremony. The judge found that this text did not carry the same meaning as the words prescribed in the law, including "fulfilling all obligations legally associated with a marriage." The marriage was nullified, and the couple's objections were rejected.
