Submission + - Bioethicists Want to Infect People With a Disease That Makes You Allergic to Meat (hotair.com)

An anonymous reader writes: You would think that the development of a discipline called "bioethics" would be a good thing, but you would be wrong.

The history of science and medicine is filled with appalling instances of scientists and doctors cruelly abusing people and animals: forced sterilizations, lobotomies, experiments on unwitting victims... the list goes on.

The problem, though, is that the people doing the bioethics are the same people, in many cases, who dream up these nightmares. Their goal is not to "do no harm," but to justify it.

The Almighty Buck

Top AI Salaries Dwarf Those of the Manhattan Project and the Space Race 49

An anonymous reader quotes a report from Ars Technica: Silicon Valley's AI talent war just reached a compensation milestone that makes even the most legendary scientific achievements of the past look financially modest. When Meta recently offered AI researcher Matt Deitke $250 million over four years (an average of $62.5 million per year) -- with potentially $100 million in the first year alone -- it shattered every historical precedent for scientific and technical compensation we can find on record. [Meta CEO Mark Zuckerberg reportedly also offered an unnamed AI engineer $1 billion in compensation to be paid out over several years.] That includes salaries during the development of major scientific milestones of the 20th century. [...]

To put these salaries in a historical perspective: J. Robert Oppenheimer, who led the Manhattan Project that ended World War II, earned approximately $10,000 per year in 1943. Adjusted for inflation using the US Government's CPI Inflation Calculator, that's about $190,865 in today's dollars -- roughly what a senior software engineer makes today. The 24-year-old Deitke, who recently dropped out of a PhD program, will earn approximately 327 times what Oppenheimer made while developing the atomic bomb. [...] The Apollo program offers another striking comparison. Neil Armstrong, the first human to walk on the moon, earned about $27,000 annually -- roughly $244,639 in today's money. His crewmates Buzz Aldrin and Michael Collins made even less, earning the equivalent of $168,737 and $155,373, respectively, in today's dollars. Current NASA astronauts earn between $104,898 and $161,141 per year. Meta's AI researcher will make more in three days than Armstrong made in a year for taking "one giant leap for mankind."
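As a sanity check on the arithmetic above, here is a minimal Python sketch reproducing the comparisons using only the figures quoted in the report (the CPI multiplier is the one implied by the article's $10,000-to-$190,865 conversion):

```python
# Sanity check of the salary comparisons, using only figures quoted above.

OPPENHEIMER_1943 = 10_000          # USD/year, 1943
OPPENHEIMER_TODAY = 190_865        # USD/year, CPI-adjusted per the article
DEITKE_PER_YEAR = 250_000_000 / 4  # USD/year, averaged over the 4-year deal

cpi_multiplier = OPPENHEIMER_TODAY / OPPENHEIMER_1943
ratio = DEITKE_PER_YEAR / OPPENHEIMER_TODAY

print(f"Implied CPI multiplier, 1943 -> today: {cpi_multiplier:.1f}x")
print(f"Deitke vs. Oppenheimer (inflation-adjusted): {ratio:.0f}x")  # ~327x

# Armstrong comparison: three days of the Meta deal vs. his annual salary.
ARMSTRONG_TODAY = 244_639          # USD/year, CPI-adjusted per the article
three_days = DEITKE_PER_YEAR / 365 * 3
print(f"Three days of the Meta deal: ${three_days:,.0f} "
      f"(Armstrong's year: ${ARMSTRONG_TODAY:,})")
```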
The report notes that the sums being offered to some of these AI researchers exceed the pay of even the most popular professional athletes. "The New York Times noted that Steph Curry's most recent four-year contract with the Golden State Warriors was $35 million less than Deitke's Meta deal (although soccer superstar Cristiano Ronaldo will make $275 million this year as the highest-paid professional athlete in the world)," reports Ars.
Power

Researchers Map Where Solar Energy Delivers the Biggest Climate Payoff (rutgers.edu) 54

A Rutgers-led study using advanced computational modeling reveals that expanding solar power by just 15% could reduce U.S. carbon emissions by over 8.5 million metric tons annually, with the greatest benefits concentrated in specific regions like California, Texas, and the Southwest. The study has been published in Science Advances. From the report: The study quantified both immediate and delayed emissions reductions resulting from added solar generation. For example, the researchers found that in California, a 15% increase in solar power at noon was associated with a reduction of 147.18 metric tons of CO2 in the region in the first hour and 16.08 metric tons eight hours later.

The researchers said their methods provide a more nuanced understanding of system-level impacts from solar expansion than previous studies, pinpointing where the benefits of increased solar energy adoption could best be realized. In some areas, such as California, Florida, the mid-Atlantic, the Midwest, Texas and the Southwest, small increases in solar were estimated to deliver large CO2 reductions, while in others, such as New England, the central U.S., and Tennessee, impacts were found to be minimal -- even at much larger increases in solar generation.

In addition, the researchers said their study demonstrates the significant spillover effects solar adoption has on neighboring regions, highlighting the value of coordinated clean energy efforts. For example, a 15% increase in solar capacity in California was associated with a reduction of 913 and 1,942 metric tons of CO2 emissions per day in the northwest and southwest regions, respectively.
"It was rewarding to see how advanced computational modeling can uncover not just the immediate, but also the delayed and far-reaching spillover effects of solar energy adoption," said the lead author Arpita Biswas, an assistant professor with the Department of Computer Science at the Rutgers School of Arts and Sciences. "From a computer science perspective, this study demonstrates the power of harnessing large-scale, high-resolution energy data to generate actionable insights. For policymakers and investors, it offers a roadmap for targeting solar investments where emissions reductions are most impactful and where solar energy infrastructure can yield the highest returns."
Education

Lying Increases Trust In Science, Study Finds (phys.org) 153

A new paper from Bangor University outlines the "bizarre phenomenon" known as the transparency paradox: transparency is needed to foster public trust in science, yet being transparent about science, medicine and government can also reduce that trust. The paper argues that while openness in science is intended to build trust, it can backfire when it reveals uncomfortable truths. Philosopher Byron Hyde, the study's author, suggests that public trust could be improved not by sugarcoating reality, but by educating people to expect imperfection and to understand how science actually works. Phys.org reports: The study revealed that, while transparency about good news increases trust, transparency about bad news, such as conflicts of interest or failed experiments, decreases it. One possible solution to the paradox, then, and a way to increase public trust, is to lie, for example by hiding bad news so that there is only ever good news to report (which Hyde points out is unethical and ultimately unsustainable).

Instead, he suggests that a better way forward would be to tackle the root cause of the problem, which he argues is the public's overidealisation of science. People still overwhelmingly believe in the 'storybook image' of a scientist who makes no mistakes, which creates unrealistic expectations. Hyde is calling for a renewed effort to teach the public about scientific norms, through science education and communication, to dispel the "naive" view of science as infallible.
"... most people know that global temperatures are rising, but very few people know how we know that," says Hyde. "Not enough people know that science 'infers to the best explanation' and doesn't definitively 'prove' anything. Too many people think that scientists should be free from biases or conflicts of interest when, in fact, neither of these are possible. If we want the public to trust science to the extent that it's trustworthy, we need to make sure they understand it first."

The study has been published in the journal Theory and Society.
AI

Anthropic Revokes OpenAI's Access To Claude Over Terms of Service Violation 10

An anonymous reader quotes a report from Wired: Anthropic revoked OpenAI's API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service. "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," Anthropic spokesperson Christopher Nulty said in a statement to WIRED. "Unfortunately, this is a direct violation of our terms of service." According to Anthropic's commercial terms of service, customers are barred from using the service to "build a competing product or service, including to train competing AI models" or "reverse engineer or duplicate" the services. This change in OpenAI's access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.

OpenAI was plugging Claude into its own internal tools using special developer access (APIs), instead of using the regular chat interface, according to sources. This allowed the company to run tests to evaluate Claude's capabilities in things like coding and creative writing against its own AI models, and check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed. "It's industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," OpenAI's chief communications officer Hannah Wong said in a statement to WIRED. Nulty says that Anthropic will "continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry."
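For context, the kind of cross-vendor evaluation described here generally amounts to sending a fixed prompt set to each vendor's public API and scoring the replies. The sketch below shows that pattern against Anthropic's documented Python SDK; the model name and prompts are placeholders, and this illustrates the general practice, not OpenAI's internal tooling:

```python
# Minimal cross-model benchmarking pattern: same prompts, collect outputs to score.
# Requires `pip install anthropic` and ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY automatically

PROMPTS = [  # placeholder evaluation set
    "Write a Python function that reverses a singly linked list.",
    "Summarize the plot of Hamlet in two sentences.",
]

def run_claude(prompt: str, model: str = "claude-sonnet-4-20250514") -> str:
    """Send one prompt to Claude and return the text of the first reply block."""
    msg = client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

for prompt in PROMPTS:
    print(f"--- {prompt[:60]}")
    print(run_claude(prompt)[:200], "\n")
```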
Power

Peak Energy Ships America's First Grid-Scale Sodium-Ion Battery (electrek.co) 97

Longtime Slashdot reader AmiMoJo shares a report from Electrek: Peak Energy shipped out its first sodium-ion battery energy storage system, and the New York-based company says it's achieved a first in three ways: the US's first grid-scale sodium-ion battery storage system; the largest sodium iron phosphate pyrophosphate (NFPP) battery system in the world; and the first megawatt-hour scale battery to run entirely on passive cooling -- no fans, pumps, or vents. That's significant because removing moving parts and ditching active cooling systems sharply reduces fire risk.

According to the Electric Power Research Institute, 89% of battery fires in the US trace back to thermal management issues. Peak's design doesn't have those issues because it doesn't have those systems. Instead, the 3.5 MWh system uses a patent-pending passive cooling architecture that's simpler, more reliable, and cheaper to run and maintain. The company says its technology slashes auxiliary power needs by up to 90%, saves about $1 million annually per gigawatt hour of storage, and cuts battery degradation by 33% over a 20-year lifespan. [...]
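To see roughly where a figure like $1 million per gigawatt-hour per year could come from, here is a back-of-the-envelope sketch. Every input below (auxiliary load fraction, cycling rate, electricity price) is an illustrative assumption, not a figure from Peak Energy:

```python
# Back-of-envelope: annual auxiliary-power cost of cooling per GWh of storage.
# All inputs are illustrative assumptions, not Peak Energy's figures.

storage_gwh = 1.0                   # storage fleet under consideration
cycles_per_year = 365               # assume one full cycle per day
aux_active = 0.05                   # assumed HVAC auxiliary load: 5% of throughput
aux_passive = aux_active * 0.10     # the claimed "up to 90%" reduction
price_per_mwh = 60.0                # assumed electricity price, USD/MWh

throughput_mwh = storage_gwh * 1000 * cycles_per_year
cost_active = throughput_mwh * aux_active * price_per_mwh
cost_passive = throughput_mwh * aux_passive * price_per_mwh

print(f"Active-cooling auxiliary cost:  ${cost_active:,.0f}/yr")    # ~$1.1M
print(f"Passive-cooling auxiliary cost: ${cost_passive:,.0f}/yr")   # ~$0.1M
print(f"Savings per GWh:                ${cost_active - cost_passive:,.0f}/yr")
```

Under these assumed inputs the savings land near the $1 million per GWh-year the company claims, which suggests the figure is at least plausible.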

Peak is working with nine utility and independent power producer (IPP) customers on a shared pilot this summer. That deployment unlocks nearly 1 GWh of future commercial contracts now under negotiation. The company plans to ship hundreds of megawatt hours of its new system over the next two years, and it's building its first US cell factory, which is set to start production in 2026.

Transportation

Aurora's Self-Driving Trucks Are Now Driving At Night (freightwaves.com) 34

Aurora Innovation has expanded its autonomous trucking operations with nighttime driverless runs between Dallas and Houston and a new Phoenix terminal. "Efficiency, uptime, and reliability are important for our customers, and Aurora is showing we can deliver," said Chris Urmson, co-founder and CEO of Aurora, in a press release. "Just three months after launch, we're running driverless operations day and night and we've expanded our terminal network to Phoenix. Our rapid progress is beginning to unlock the full value of self-driving trucks for our customers, which has the potential to transform the trillion-dollar trucking industry." FreightWaves reports: The expansion allows for continuous utilization, shortening delivery times and serving as part of its path to autonomous trucking profitability. Aurora notes that the unlocking of nighttime autonomous operations can also improve road safety. It cited a 2021 Federal Motor Carrier Safety Administration report on large truck and bus crashes that noted a disproportionate 37% of fatal crashes involving large trucks occurred at night. This comes despite trucks traveling fewer miles during those hours.

Aurora's SAE L4 autonomous driving system, called the Aurora Driver, can detect objects in the dark more than 450 meters away via its proprietary, long-range FirstLight Lidar. The lidar can identify pedestrians, vehicles, and debris up to 11 seconds sooner than a traditional driver, according to the company. In addition to the fleet and operations expansion, the new terminal in Phoenix, which opened in June, is part of an infrastructure-light approach. Aurora notes this design will closely resemble how the company plans to integrate with future customer endpoints, optimized for speed to market.
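The "11 seconds sooner" claim is easy to sanity-check: at highway speed, extra detection range converts directly into extra warning time. A quick sketch, where the conventional nighttime detection distance is an assumed value chosen for illustration:

```python
# Extra warning time from longer detection range at highway speed.
# The headlight-range figure is an illustrative assumption.

MPH_TO_MS = 0.44704
speed = 65 * MPH_TO_MS        # ~29 m/s at 65 mph

lidar_range = 450.0           # meters, per Aurora's FirstLight Lidar
headlight_range = 130.0       # meters, assumed low-beam nighttime visibility

advantage = (lidar_range - headlight_range) / speed
print(f"Extra warning time: {advantage:.1f} s")  # ~11 s under these assumptions
```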

This expansion of the more than 15-hour Fort Worth-to-Phoenix route opens up opportunities to showcase the autonomous truck's ability to cut transit time roughly in half compared with a single human driver, who is bound by the 11-hour hours-of-service limit. Aurora is piloting the autonomous trucking Phoenix lane with two customers, Hirschbach and Werner.

Transportation

Skipping Over-The-Air Car Updates Could Be Costly (autoblog.com) 78

Longtime Slashdot reader Mr_Blank shares a report from Autoblog: Once a new OTA update becomes available, owners of GM vehicles have 45 days to install it. After that window, the company will not cover any damage or issues caused by ignoring the update. "Damage resulting from failure to install over-the-air software updates is not covered," states the warranty booklet for 2025 and 2026 models.

The same rule applies to all of GM's brands in the US: Chevrolet, Buick, Cadillac, and GMC. However, if the software update itself causes any component damage, that will be covered by the warranty. Owners coming from older GM vehicles will have to adapt as the company continues to roll out its Global B electronic architecture, which relies heavily on OTA updates, on newer models. Similar policies appear in Tesla's owner's manuals. Software-defined vehicles are here to stay, even if some of them have far more tech glitches than they should -- just ask Volvo.

Bug

A Luggage Service's Web Bugs Exposed the Travel Plans of Every User (wired.com) 1

An anonymous reader quotes a report from Wired: An airline leaving all of its passengers' travel records vulnerable to hackers would make an attractive target for espionage. Less obvious, but perhaps even more useful for those spies, would be access to a premium travel service that spans 10 different airlines, left its own detailed flight information accessible to data thieves, and seems to be favored by international diplomats. That's what one team of cybersecurity researchers found in the form of Airportr, a UK-based luggage service that partners with airlines to let its largely UK- and Europe-based users pay to have their bags picked up, checked, and delivered to their destination. Researchers at the firm CyberX9 found that simple bugs in Airportr's website allowed them to access virtually all of those users' personal information, including travel plans, or even gain administrator privileges that would have allowed a hacker to redirect or steal luggage in transit. Among even the small sample of user data that the researchers reviewed and shared with WIRED they found what appear to be the personal information and travel records of multiple government officials and diplomats from the UK, Switzerland, and the US.

Airportr's CEO Randel Darby confirmed CyberX9's findings in a written statement provided to WIRED but noted that Airportr had disabled the vulnerable part of its site's backend shortly after the researchers made the company aware of the issues last April and fixed the problems within a few days. "The data was accessed solely by the ethical hackers for the purpose of recommending improvements to Airportr's security, and our prompt response and mitigation ensured no further risk," Darby wrote in a statement. "We take our responsibilities to protect customer data very seriously." CyberX9's researchers, for their part, counter that the simplicity of the vulnerabilities they found means there's no guarantee other hackers didn't access Airportr's data first. They found that a relatively basic web vulnerability allowed them to change the password of any user and gain access to their account if they had just the user's email address -- and because the site imposed no rate limits, they were also able to brute-force guess email addresses. As a result, they could access data including all customers' names, phone numbers, home addresses, detailed travel plans and history, airline tickets, boarding passes and flight details, passport images, and signatures.

By gaining access to an administrator account, CyberX9's researchers say, a hacker could also have used the vulnerabilities it found to redirect luggage, steal luggage, or even cancel flights on airline websites by using Airportr's data to gain access to customer accounts on those sites. The researchers say they could also have used their access to send emails and text messages as Airportr, a potential phishing risk. Airportr tells WIRED that it has 92,000 users and claims on its website that it has handled more than 800,000 bags for customers. [...] The researchers found that they could monitor their browser's communications as they signed up for Airportr and created a new password, and then reuse an API key intercepted from those communications to instead change another user's password to anything they chose. The site also lacked a "rate limiting" security measure that would prevent automated guesses of email addresses to rapidly change the password of every user's account. And the researchers were also able to find email addresses of Airportr administrators that allowed them to take over their accounts and gain their privileges over the company's data and operations.
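The two failures described above, a reusable signup API key that authorized password changes on arbitrary accounts and the absence of rate limiting, both have standard mitigations: single-use, expiring reset tokens bound to one account, and throttling repeated attempts per source. A minimal framework-free sketch of that pattern (all names here are illustrative, not Airportr's actual code):

```python
# Sketch of standard password-reset hardening: single-use, expiring, per-account
# tokens plus per-source rate limiting. Illustrative only; in-memory storage.
import secrets
import time

RESET_TOKENS: dict[str, tuple[str, float]] = {}  # token -> (email, expiry time)
ATTEMPTS: dict[str, list[float]] = {}            # source IP -> attempt timestamps

def issue_reset_token(email: str, ttl: float = 900.0) -> str:
    """Create a single-use token bound to one account, valid for ttl seconds."""
    token = secrets.token_urlsafe(32)
    RESET_TOKENS[token] = (email, time.time() + ttl)
    return token  # delivered out of band, e.g. emailed to the account owner

def rate_limited(ip: str, limit: int = 5, window: float = 60.0) -> bool:
    """Allow at most `limit` attempts per `window` seconds from one source."""
    now = time.time()
    recent = [t for t in ATTEMPTS.get(ip, []) if now - t < window]
    recent.append(now)
    ATTEMPTS[ip] = recent
    return len(recent) > limit

def store_new_password(email: str, new_password: str) -> None:
    """Stand-in for real persistence (hash the password, then store it)."""

def reset_password(ip: str, token: str, email: str, new_password: str) -> bool:
    if rate_limited(ip):
        return False                        # throttles brute-force guessing
    entry = RESET_TOKENS.pop(token, None)   # single-use: consumed on first try
    if entry is None:
        return False
    bound_email, expiry = entry
    if bound_email != email or time.time() > expiry:
        return False                        # token must match its own account
    store_new_password(email, new_password)
    return True
```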
"Anyone would have been able to gain or might have gained absolute super-admin access to all the operations and data of this company," says Himanshu Pathak, CyberX9's founder and CEO. "The vulnerabilities resulted in complete confidential private information exposure of all airline customers in all countries who used the service of this company, including full control over all the bookings and baggage. Because once you are the super-admin of their most sensitive systems, you have have the ability to do anything."
The Military

Palantir Lands $10 Billion Army Software and Data Contract (cnbc.com) 23

Palantir has secured a massive $10 billion contract with the U.S. Army to unify 75 contracts into a single AI-focused enterprise framework, streamlining procurement and enhancing military readiness. CNBC reports: The agreement creates a "comprehensive framework for the Army's future software and data needs" that provides the government with purchasing flexibility and removes contract-related fees and procurement timelines, according to a release. Palantir co-founder and CEO Alex Karp has been a vocal proponent of protecting U.S. interests and joining forces on AI to fend off adversaries.

Earlier this year, Palantir delivered the first two AI-powered systems under its $178 million contract with the U.S. Army. In May, the Department of Defense boosted its Maven Smart System contract by $795 million to beef up AI capabilities.

Businesses

Atlassian Terminates 150 Staff With Pre-Recorded Video (cyberdaily.au) 41

Atlassian laid off 150 employees via a pre-recorded video. "While not specifically outlined, the affected staff seem to be from the company's European operations, with The Australian reporting that Cannon-Brookes had shared it would be difficult to axe its European staff due to contract arrangements, but that the company had already begun moving in that direction," reports CyberDaily. While the company claims the cuts weren't directly caused by AI, it has simultaneously rolled out AI-enhanced customer service tools and emphasized automation as a key part of its digital transformation strategy. From the report: Atlassian CEO and co-founder Mike Cannon-Brookes sent the video, titled "Restructuring the CSS Team: A Difficult Decision for Our Future," to staff on Wednesday morning (30 July), informing them that 150 staff had been made redundant. The video reportedly did little to convey that the decision was difficult, instead framing it as allowing staff "to say goodbye." The video itself did not announce who was leaving; instead, employees were told to wait 15 minutes for an email about their employment status. Those who were terminated had their laptops blocked immediately. They reportedly will receive six months' pay.

"AI is going to change Australia," [said former co-CEO and co-founder Scott Farquhar]. "Every person should be using AI daily for as many things as they can. Like any new technology, it will feel awkward to start with, but every business person, every business leader, every government leader, and every bureaucrat should be using it." He also said that governments should be implementing AI more broadly. [...] Commenting on the termination, Farquhar said the mass termination was due to the customer service team no longer being needed in the same capacity, as larger clients required less complex support following a move to the cloud.

Advertising

Amazon CEO Wants To Put Ads In Your Alexa+ Conversations (techcrunch.com) 48

An anonymous reader quotes a report from TechCrunch: Amazon CEO Andy Jassy sees an opportunity to deliver ads to users during their conversations with the company's AI-powered digital assistant, Alexa+, he said during Amazon's second-quarter earnings call Thursday. "People are excited about the devices that they can buy from us that has Alexa+ enabled in it. People do a lot of shopping [with Alexa+]; it's a delightful shopping experience that will keep getting better," said Jassy on the call with investors and Wall Street analysts. "I think over time, there will be opportunities, as people are engaging in more multi-turn conversations, to have advertising play a role to help people find discovery, and also as a lever to drive revenue."

[...] Amazon has made Alexa+ free for Prime customers (who pay $14.99 a month) and added a $20-a-month subscription tier for Alexa+ on its own. Jassy suggested on Thursday that Alexa+ could eventually include subscription tiers beyond what's available today -- perhaps an ad-free tier. Up until now, ads have only appeared in Alexa in limited ways. Users may occasionally see a visual ad on Amazon's smart display device, the Echo Show, or hear a pre-recorded ad in between songs on one of Alexa's smart speakers. But Jassy's description of an AI-generated ad that Alexa+ delivers in a multistep conversation, which could help users find new products, is uncharted territory for Amazon and the broader tech industry. Marketers have expressed interest in advertising in AI chatbots, and specifically Alexa+, but exactly how remains unclear. [...] Jassy is betting that users will talk to Alexa+ more than Alexa, which could drive more advertising and more shopping on Amazon.com. However, early reviews of Alexa+ have been mixed. Amazon has reportedly struggled to ship some of Alexa+'s more complicated features, and the rollout has been slower than many expected.

There's a lot to figure out before Amazon puts ads in Alexa+. Like most AI models, Alexa+ is not immune to hallucinations. Before advertisers agree to make Alexa+ a spokesperson for their products, Amazon may have to come up with ways to ensure that its AI will not produce false advertising for a product. Jassy seems enthusiastic about making advertising a larger part of Amazon's business. Amazon's advertising revenue rose 22% in the second quarter compared to the same period last year. Delivering ads in AI chatbot conversations may also raise privacy concerns. People tend to talk more with AI chatbots than with deterministic assistants like the traditional Alexa and Siri products. As a result, generative AI chatbots tend to collect more information on users. Some users might be unsettled by having that information sold to advertisers and by seeing ads appear in their natural-language conversations with AI.
