Transportation

Class Action Accuses Toyota of Illegally Sharing Drivers' Data (insurancejournal.com) 51

"A federal class action lawsuit filed this week in Texas accused Toyota and an affiliated telematics aggregator of unlawfully collecting drivers' information and then selling that data to Progressive," reports Insurance Journal: The lawsuit alleges that Toyota and Connected Analytic Services (CAS) collected vast amounts of vehicle data, including location, speed, direction, braking and swerving/cornering events, and then shared that information with Progressive's Snapshot data sharing program. The class action seeks an award of damages, including actual, nominal, consequential, and punitive damages, and an order prohibiting further collection of drivers' location and vehicle data.
Florida man Philip Siefke had bought a new Toyota RAV4 XLE in 2021 "equipped with a telematics device that can track and collect driving data," according to the article. But when he tried to sign up for insurance from Progressive, "a background pop-up window appeared, notifying Siefke that Progressive was already in possession of his driving data, the lawsuit says. A Progressive customer service representative explained to Siefke over the phone that the carrier had obtained his driving data from tracking technology installed in his RAV4." (Toyota told him later he'd unknowingly signed up for a "trial" of the data sharing, and had failed to opt out.) The lawsuit alleges Toyota never provided Siefke with any sort of notice that the car manufacturer would share his driving data with third parties... The lawsuit says class members suffered actual injury from having their driving data collected and sold to third parties including, but not limited to, damage to and diminution in the value of their driving data, violation of their privacy rights, [and] the likelihood of future theft of their driving data.
The telematics device "can reportedly gather information about location, fuel levels, the odometer, speed, tire pressure, window status, and seatbelt status," notes CarScoop.com. "In January, Texas Attorney General Ken Paxton started an investigation into Toyota, Ford, Hyundai, and FCA..." According to plaintiff Philip Siefke of Eagle Lake, Florida, Toyota, Progressive, and Connected Analytic Services collect data that can contribute to a "potential discount" on owners' auto insurance. However, the same data can also cause insurance premiums to be jacked up.
The plaintiff's lawyer issued a press release: Despite Toyota claiming it does not share data without the express consent of customers, Toyota may have signed up customers, without their knowledge, for "trials" of sharing customer driving data, without providing any sort of notice to them. Moreover, according to the lawsuit, Toyota represented through its app that it was not collecting customer data even though it was, in fact, gathering and selling customer information. We are actively investigating whether Toyota, CAS, or related entities may have violated state and federal laws by selling this highly sensitive data without adequate disclosure or consent...

If you purchased a Toyota vehicle and have since seen your auto insurance rates increase (or been denied coverage), or have reason to believe your driving data has been sold, please contact us today or visit our website at classactionlawyers.com/toyota-tracking.

On his YouTube channel, consumer protection attorney Steve Lehto shared a related experience he had — before realizing he wasn't alone. "I've heard that story from so many people who said 'Yeah, I bought a brand new car and the salesman was showing me how to set everything up, and during the setup process he clicked Yes on something.' Who knows what you just clicked on?!"

Thanks to long-time Slashdot reader sinij for sharing the news.
AI

DeepSeek IOS App Sends Data Unencrypted To ByteDance-Controlled Servers (arstechnica.com) 68

An anonymous Slashdot reader quotes a new article from Ars Technica: On Thursday, mobile security company NowSecure reported that [DeepSeek] sends sensitive data over unencrypted channels, making the data readable to anyone who can monitor the traffic. More sophisticated attackers could also tamper with the data while it's in transit. Apple strongly encourages iPhone and iPad developers to enforce encryption of data sent over the wire using ATS (App Transport Security). For unknown reasons, that protection is globally disabled in the app, NowSecure said. What's more, the data is sent to servers that are controlled by ByteDance, the Chinese company that owns TikTok...

[DeepSeek] is "not equipped or willing to provide basic security protections of your data and identity," NowSecure co-founder Andrew Hoog told Ars. "There are fundamental security practices that are not being observed, either intentionally or unintentionally. In the end, it puts your and your company's data and identity at risk...." This data, along with a mix of other encrypted information, is sent to DeepSeek over infrastructure provided by Volcengine, a cloud platform developed by ByteDance. While the IP address the app connects to geo-locates to the US and is owned by US-based telecom Level 3 Communications, the DeepSeek privacy policy makes clear that the company "store[s] the data we collect in secure servers located in the People's Republic of China...."
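For context on the ATS finding: App Transport Security is on by default for iOS apps, and an app can only disable it globally through an explicit exception in its Info.plist. A minimal, purely illustrative sketch of what such a global opt-out looks like (not DeepSeek's actual configuration file):

```xml
<!-- Info.plist fragment: globally disabling App Transport Security.
     ATS normally blocks plaintext HTTP connections; this exception
     permits them app-wide, which is what NowSecure observed. -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
```

Apple requires developers to justify this key during App Review, which is part of why NowSecure called the global opt-out unexplained.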

US lawmakers began pushing to immediately ban DeepSeek from all government devices, citing national security concerns that the Chinese Communist Party may have built a backdoor into the service to access Americans' sensitive private data. If the legislation passes, DeepSeek could be banned from government devices within 60 days.

Medicine

Hydroxychloroquine-Promoting COVID Study Retracted After 4 Years (nature.com) 110

Nature magazine reports that "A study that stoked enthusiasm for the now-disproven idea that a cheap malaria drug can treat COVID-19 has been retracted — more than four-and-a-half years after it was published." Researchers had critiqued the controversial paper many times, raising concerns about its data quality and an unclear ethics-approval process. Its eventual withdrawal, on the grounds of concerns over ethical approval and doubts about the conduct of the research, marks the 28th retraction for co-author Didier Raoult, a French microbiologist, formerly at Marseille's Hospital-University Institute Mediterranean Infection (IHU), who shot to global prominence in the pandemic. French investigations found that he and the IHU had violated ethics-approval protocols in numerous studies, and Raoult has now retired.

The paper, which has received almost 3,400 citations according to the Web of Science database, is the highest-cited paper on COVID-19 to be retracted, and the second-most-cited retracted paper of any kind....

Because it contributed so much to the HCQ hype, "the most important unintended effect of this study was to partially side-track and slow down the development of anti-COVID-19 drugs at a time when the need for effective treatments was critical", says Ole Søgaard, an infectious-disease physician at Aarhus University Hospital in Denmark, who was not involved with the work or its critiques. "The study was clearly hastily conducted and did not adhere to common scientific and ethical standards...."

Three of the study's co-authors had asked to have their names removed from the paper, saying they had doubts about its methods, the retraction notice said.

Nature includes this quote from a scientific-integrity consultant in San Francisco, California. "This paper should never have been published — or it should have been retracted immediately after its publication."

"The report caught the eye of the celebrity doctor Mehmet Oz," the Atlantic reported in April of 2020 (also noting that co-author Raoult "has made news in recent years as a pan-disciplinary provocateur; he has questioned climate change and Darwinian evolution...")

And Nature points out that while the study claimed good results for the 20 patients treated with HCQ, six more HCQ-treated people in the study actually dropped out before it was finished. And of those six people, one died, while three more "were transferred to an intensive-care unit."

Thanks to Slashdot reader backslashdot for sharing the news.
Australia

Australia Struggling With Oversupply of Solar Power (abc.net.au) 203

Mirnotoriety writes: Amid the growing warmth and increasingly volatile weather of an approaching summer, Australia passed a remarkable milestone this week. The number of homes and businesses with a solar installation clicked past 4 million -- barely 20 years since there was practically none anywhere in the country. It is a love affair that shows few signs of stopping.

And it's a technology that is having ever greater effects, not just on the bills of its household users but on the very energy system itself. At no time of the year is that effect more obvious than spring, when solar output soars as the days grow longer and sunnier but demand remains subdued as mild temperatures mean people leave their air conditioners switched off.

Such has been the extraordinary production of solar in Australia this spring, the entire state of South Australia has -- at various times -- met all of its electricity needs from the technology.

[...] [T]here is, at times, too much solar power in Australia's electricity systems to handle.

Comment Apologies mean nothing - Allow Rollbacks (Score 1) 30

I was one of the Sonos users affected by this. I used wireless Sonos speakers to play music from my personal (yes, legit) music library. I used it daily for years.

One day in May, this just stopped working. It was not clear at first what was wrong, and I had to debug it myself. After working through many appalling, mysterious error screens I eventually concluded it was the Sonos app itself. Sonos's help was no help at all; it had me checking my media library configuration instead of recognizing that the app release had major broken functionality.

Sonos's app release broke my single most used, major use case -- playing music from my personal library via mobile devices. And they were very slow to own up to it, and very slow to get around to fixing it. (It wasn't until August that the issues were resolved without resorting to very awkward workarounds.) Instead, their new app boasted working support for a wide variety of niche monthly music services I have never heard of and have no interest in using. I drew the obvious conclusion that my use case is not very important to Sonos, and apparently wasn't even in their test plan.

Apologies from the CEO and waving a flag of quality control aren't enough to restore my trust. I wanted (and still want) to see some rollback mechanism for app updates, even if it were implemented as a separate app download. There should have been a rollback option instead of these endless do-nothing apologies and 3+ months of broken functionality. Please do not use Sonos (or any other product) whose makers' actions do not show reasonable commitment to their existing customers.

To end this on a positive note -- I was playing the "Baldur's Gate" game with a group of friends recently and one of their updates broke our ability to play as a group. However, in this case, Baldur's Gate's publisher (Larian) DOES allow a fairly straightforward rollback option to their updates. We were able to use it and get the group back together. I realize that game updates are not exactly the same as mobile app updates, but I wanted to point out that someone is doing it well, and it won't be the last time this issue of pushed updates-that-break-major-functionality comes up.

AI

AI Researcher Warns Data Science Could Face a Reproducibility Crisis (beabytes.com) 56

Long-time Slashdot reader theodp shared this warning from a long-time AI researcher arguing that data science "is due" for a reckoning over whether results can be reproduced. "Few technological revolutions came with such a low barrier of entry as Machine Learning..." Unlike Machine Learning, Data Science is not an academic discipline, with its own set of algorithms and methods... There is an immense diversity, but also disparities in skill, expertise, and knowledge among Data Scientists... In practice, depending on their backgrounds, data scientists may have large knowledge gaps in computer science, software engineering, theory of computation, and even statistics in the context of machine learning, despite those topics being fundamental to any ML project. But it's ok, because you can just call the API, and Python is easy to learn. Right...?

Building products using Machine Learning and data is still difficult. The tooling infrastructure is still very immature, and the non-standard combination of data and software creates unforeseen challenges for engineering teams. But in my view, a lot of the failures come from this explosive cocktail of ritualistic Machine Learning:

- Weak software engineering knowledge and practices compounded by the tools themselves;
- Knowledge gaps in mathematical, statistical, and computational methods, encouraged by black-box APIs;
- Ill-defined range of competence for the role of data scientist, reinforced by a pool of candidates with an unusually wide range of backgrounds;
- A tendency to follow the hype rather than the science.


What can you do?

- Hold your data scientists accountable using Science.
- At a minimum, any AI/ML project should include an Exploratory Data Analysis, whose results directly support the design choices for feature engineering and model selection.
- Data scientists should be encouraged to think outside the box of ML, which is a very small box.
- Data scientists should be trained to use eXplainable AI methods to provide context about an algorithm's performance beyond traditional performance metrics like accuracy, FPR, or FNR.
- Data scientists should be held to standards similar to those of other software engineering specialties, with code review, code documentation, and architectural designs.
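The point about metrics beyond accuracy is easy to demonstrate: on imbalanced data, high accuracy can mask a model that misses nearly all positives. A minimal pure-Python sketch with hypothetical labels and predictions:

```python
def confusion_counts(y_true, y_pred):
    """Count TP, FP, TN, FN for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

# 90 negatives, 10 positives; the model almost always predicts negative.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 90 + [0] * 9 + [1]   # catches only 1 of 10 positives

tp, fp, tn, fn = confusion_counts(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)   # 0.91 -- looks great
fnr = fn / (fn + tp)                 # 0.90 -- misses 90% of positives
fpr = fp / (fp + tn)                 # 0.00
print(accuracy, fnr, fpr)
```

A reviewer looking only at 91% accuracy would ship this model; the 90% false-negative rate tells the real story.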

The article concludes, "Until such practices are established as the norm, I'll remain skeptical of Data Science."
AI

'What Kind of Bubble Is AI?' (locusmag.com) 100

"Of course AI is a bubble," argues tech activist/blogger/science fiction author Cory Doctorow.

The real question is what happens when it bursts?

Doctorow examines history — the "irrational exuberance" of the dotcom bubble, 2008's financial derivatives, NFTs, and even cryptocurrency. ("A few programmers were trained in Rust... but otherwise, the residue from crypto is a lot of bad digital art and worse Austrian economics.") So would an AI bubble leave anything useful behind? The largest of these models are incredibly expensive. They're expensive to make, with billions spent acquiring training data, labelling it, and running it through massive computing arrays to turn it into models. Even more important, these models are expensive to run.... Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical.

AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments... There just aren't that many customers for a product that makes their own high-stakes projects better, but more expensive. There are many low-stakes applications — say, selling kids access to a cheap subscription that generates pictures of their RPG characters in action — but they don't pay much. The universe of low-stakes, high-dollar applications for AI is so small that I can't think of anything that belongs in it.

There are some promising avenues, like "federated learning," that hypothetically combine a lot of commodity consumer hardware to replicate some of the features of those big, capital-intensive models from the bubble's beneficiaries. It may be that — as with the interregnum after the dotcom bust — AI practitioners will use their all-expenses-paid education in PyTorch and TensorFlow (AI's answer to Perl and Python) to push the limits on federated learning and small-scale AI models to new places, driven by playfulness, scientific curiosity, and a desire to solve real problems. There will also be a lot more people who understand statistical analysis at scale and how to wrangle large amounts of data. There will be a lot of people who know PyTorch and TensorFlow, too — both of these are "open source" projects, but are effectively controlled by Meta and Google, respectively. Perhaps they'll be wrestled away from their corporate owners, forked and made more broadly applicable, after those corporate behemoths move on from their money-losing Big AI bets.
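The federated-learning idea Doctorow mentions — many small devices each training on their own data, pooling only model parameters rather than the data itself — can be sketched in a few lines. A toy federated-averaging (FedAvg-style) round for a one-parameter model y = w*x, with made-up client data:

```python
def local_update(w, data, lr=0.01, steps=20):
    """Train y = w*x by gradient descent on one client's local data only."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w_global, clients):
    """Each client trains locally; only the resulting weights are averaged.
    Raw data never leaves the client -- the core idea of federated learning."""
    local_ws = [local_update(w_global, data) for data in clients]
    return sum(local_ws) / len(local_ws)

# Three clients, each privately holding noisy samples of the trend y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.1)],
    [(1.5, 4.4), (3.0, 9.2)],
    [(2.5, 7.4), (0.5, 1.6)],
]

w = 0.0
for _ in range(10):            # ten communication rounds
    w = federated_round(w, clients)
# w converges near 3.0 without any client ever sharing its raw data
```

Real systems (FedAvg and its descendants) do the same thing with deep-network weight tensors, weighted by client dataset size, but the round structure is this simple.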

Our policymakers are putting a lot of energy into thinking about what they'll do if the AI bubble doesn't pop — wrangling about "AI ethics" and "AI safety." But — as with all the previous tech bubbles — very few people are talking about what we'll be able to salvage when the bubble is over.

Thanks to long-time Slashdot reader mspohr for sharing the article.
AI

Meta's New Rule: If Your Political Ad Uses AI Trickery, You Must Confess (techxplore.com) 110

Press2ToContinue writes: Starting next year, Meta will play the role of a strict schoolteacher for political ads, making them fess up if they've used AI to tweak images or sounds. This new 'honesty policy' will kick in worldwide on Facebook and Instagram, aiming to prevent voters from being duped by digitally doctored candidates or made-up events. Meanwhile, Microsoft is jumping on the integrity bandwagon, rolling out anti-tampering tech and a support squad to shield elections from AI mischief.
Privacy

TSA Expands Controversial Facial Recognition Program (cbsnews.com) 70

SonicSpike shares a report from CBS News: As possible record-setting crowds fill airports nationwide, passengers may encounter new technology at the security line. At 25 airports in the U.S. and Puerto Rico, the TSA is expanding a controversial digital identification program that uses facial recognition. This comes as the TSA and other divisions of Homeland Security are under pressure from lawmakers to update technology and cybersecurity. "We view this as better for security, much more efficient, because the image capture is fast and you'll save several seconds, if not a minute," said TSA Administrator David Pekoske.

At the world's busiest airport in Atlanta, the TSA checkpoint uses a facial recognition camera system to compare a flyer's face to the picture on their ID in seconds. If there's not a match, the TSA officer is alerted for further review. "Facial recognition, first and foremost, is much, much more accurate," Pekoske said. "And we've tested this extensively. So we know that it brings the accuracy level close to 100% from mid-80% with just a human looking at a facial match." The program has been rolled out to more than two dozen airports nationwide since 2020 and the TSA plans to add the technology, which is currently voluntary for flyers, to at least three more airports by the end of the year. There are skeptics. Five U.S. senators sent a letter demanding that TSA halt the program.
