177243091
submission
tedlistens writes:
Eight years after creating the Cybersecurity and Infrastructure Security Agency, Trump's second administration is ripping up parts of the country’s cyber playbook and taking many of its best players off the field, from threat hunters and election defenders at CISA to the leader of the NSA and Cyber Command. Amid a barrage of severe attacks like those from Volt Typhoon and rising trade tensions, lawmakers, former officials, and cyber professionals say that sweeping and confusing cuts are making the country more vulnerable and emboldening its adversaries.
“There are intrusions happening now that we either will never know about or won’t see for years because our adversaries are undoubtedly stepping up their activity, and we have a shrinking, distracted workforce,” says Jeff Greene, a cybersecurity expert who has held top roles at CISA and the White House.
174238283
submission
tedlistens writes:
A trip down memory lane to 1994, thirty years ago—the year CERN dubbed "the Year of the Web"—offers some telling glimpses of the future, and some lessons for it too.
173141458
submission
tedlistens writes:
A wave of scrutiny and sanctions has helped expose the secretive, quasi-legal industry behind spyware tools, and put financial strain on firms like Israel’s NSO Group, which builds Pegasus. And yet business is booming. New research published this month by Google and Meta suggests that despite new restrictions, the cyberattack market is growing, and growing more dangerous, aiding government violence and repression and eroding democracy around the globe.
“The industry is thriving,” says Maddie Stone, a researcher at Google’s Threat Analysis Group (TAG) who hunts zero-day exploits, the software bugs that have yet to be fixed and are worth potentially hundreds of millions to spyware sellers. “More companies keep popping up, and their government customers are determined to buy from them, and want these capabilities, and are using them.”
For the first time, half of known zero-days against Google and Android products now come from private companies, according to a report published this month by Stone’s team at Google. Beyond prominent firms like NSO and Candiru, Google’s researchers say they are tracking about 40 companies involved in the creation of hacking tools that have been deployed against “high risk individuals.” “If governments ever had a monopoly on the most sophisticated capabilities, that era is certainly over,” reads the report.
The Google findings and a spyware-focused threat report published by Meta a week later reflect an increasingly tough response by Big Tech to an industry that profits from breaking into its systems. The reports also put new pressure on the US and others to take action against the mostly unregulated industry.
171991177
submission
tedlistens writes:
Via Fast Company:
To meet the world’s growing hunger for chips, a startup wants to upend the costly semiconductor fabrication plant with a nimbler, cheaper idea, one it believes can more quickly spread the manufacturing of the chips inside nearly everything we use: an AI-enabled chip factory that can be assembled and expanded modularly with prefab pieces, like high-tech Lego bricks.
“We’re democratizing the ownership of semiconductor fabs,” says Matthew Putman... the founder and CEO of Nanotronics, a New York City-based industrial AI company that deploys advanced optical solutions for detecting defects in manufacturing procedures. Its new system, called Cubefabs, combines its modular inspection tools and other equipment with AI, allowing the proposed chip factories to monitor themselves and adapt accordingly—part of what Putman calls an “autonomous factory.” The bulk of the facility can be preassembled, flat-packed and put in shipping containers so that the facilities can be built “in 80% of the world,” says Putman.
Eventually, the company envisions hundreds of the flower-shaped fabs around the world, starting with a prototype in New York or Kuwait that it hopes to start building by the end of the year...
The world’s chip dependence has never been in sharper relief, with a spike in fab construction in the U.S. alone to the tune of hundreds of billions of dollars—a boost from the CHIPS and Science Act of 2022 and similar incentives around the globe, not to mention the growing geopolitical anxieties over the most advanced chips. (New chip restrictions on China by the Biden administration have raised the temperature of a global chip war, frustrating, among others, American chip makers.) Despite a recent deceleration, global demand for semiconductors is expected to double by 2030. Meanwhile, the costs of fabricating them are skyrocketing. Moore’s law says that the number of transistors in an integrated circuit doubles every two years; Rock’s law says that the cost of a chip fabrication plant doubles every four.
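To put those two doubling laws side by side, here is a rough back-of-the-envelope sketch in Python; the 12-year horizon is an illustrative assumption, not a figure from the article.

# Rough comparison of Moore's law vs. Rock's law compounding.
# The 12-year horizon below is an illustrative assumption.

def growth_factor(years, doubling_period):
    """How many times a quantity multiplies if it doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

years = 12
transistors = growth_factor(years, 2)  # Moore's law: transistor counts double every 2 years
fab_cost = growth_factor(years, 4)     # Rock's law: fab cost doubles every 4 years

print(f"Over {years} years: ~{transistors:.0f}x the transistors, ~{fab_cost:.0f}x the fab cost")
# Over 12 years: ~64x the transistors, ~8x the fab cost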
171830862
submission
tedlistens writes:
Two centuries ago, the Luddites revolted against an unjust technological disruption. In an excerpt from his forthcoming history of the movement, “Blood in the Machine,” Brian Merchant describes how tech uprisings begin.
171511382
submission
tedlistens writes:
All large language models are liable to produce toxic and other unwanted outputs, either by themselves or at the encouragement of users. To evaluate and "detoxify" their LLMs, OpenAI, Meta, Anthropic, and others are using Perspective API—a free tool from Google's Jigsaw unit designed to flag toxic human speech on social media platforms and comment sections. But, as Alex Pasternack reports at Fast Company, researchers and Jigsaw itself acknowledge problems with Perspective and other AI classifiers, and worry that AI developers using them to build LLMs could be inheriting their failures, false positives, and biases. That could, in turn, make the language models more biased or less knowledgeable about minority groups, harming some of the same people the classifiers are meant to help. “Our goal is really around humans talking to humans,” says Jigsaw's Lucy Vasserman, “so [using Perspective to police AI] is something we kind of have to be a little bit careful about.”
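For context on the tool those LLM developers are plugging into, here is a minimal sketch of how a client might ask Perspective API to score a string for toxicity. The endpoint and request/response shapes follow Google's published REST interface as I understand it; the key is a placeholder and error handling is omitted.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder; requires a Google Cloud project with Perspective enabled
URL = f"https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key={API_KEY}"

payload = {
    "comment": {"text": "You are a terrible person."},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},  # the attribute most developers rely on
}

resp = requests.post(URL, json=payload, timeout=10)
resp.raise_for_status()
score = resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"Toxicity: {score:.2f}")  # probability-like score from 0.0 (benign) to 1.0 (toxic)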
166744013
submission
tedlistens writes:
Two corners of Brooklyn’s historic Navy Yard will be connected by a small test bed for quantum networking, a first step toward a future “quantum internet” that promises to transform computing and make communications untappable. The effort, by a startup company called Qunnect, will join dozens of experiments around the U.S., Europe, and China, but would be the first commercial quantum network in the country, and the first to use only small, room-temperature devices. Such tools could make it easier to link quantum computers across the planet, opening the door to more practical uses of the technology in research, defense, finance, and other yet-to-be-determined applications.
“We can have these networks go all the way from here, coast to coast, and eventually global,” says Dr. Noel Goddard, the CEO of Qunnect. In addition to testing a protocol for sharing quantum information across conventional fiber-optic lines, the 12-person startup will use the network to test a suite of quantum networking hardware that can fit into the server racks of existing telecom buildings. Its flagship product, spun out of research at SUNY Stony Brook, is a quantum memory, a type of device thought to be crucial to establishing the “magic” of quantum entanglement across a fiber line... unlike many quantum machines—often sprawling tabletop contraptions that rely on cryogenic cooling, vacuums, and other delicate equipment—Qunnect’s memory machine operates at room temperature and fits inside a box the size of a large desk drawer.
165368223
submission
tedlistens writes:
Nuclear is booming again. And with a serious pile of private and public funding behind them—and physics on their side (see the recent breakthrough at Livermore National Lab)—fusion startups say they’re getting closer to fusion.
147359102
submission
tedlistens writes:
A series of recent studies finds the Everything Store’s algorithms promoting falsehoods about the coronavirus, vaccines, AIDS, the election, and more.
In one audit first published in January, researchers at the University of Washington surveyed Amazon’s search results for four dozen terms related to vaccines. Among 38,000 search results and over 16,000 recommendations, they counted nearly 5,000 unique products containing misinformation, or 10.47% of the total. For books, they found that titles deemed misinformative appeared higher in search results than books that debunked their theories. “Overall, our audits suggest that Amazon has a severe vaccine/health misinformation problem exacerbated by its search and recommendation algorithms,” write Prerna Juneja and Tanushree Mitra in their paper, presented last month at the CHI Conference on Human Factors in Computing Systems. “Just a single click on an anti-vaccine book could fill your homepage with several other similar anti-vaccine books.”
Juneja and other researchers say Amazon needn’t remove conspiracy theory books, but it could keep them out of its search results and recommendations. “There is a need for Amazon to treat vaccine-related searches as searches of higher importance and ensure higher quality content for them,” she tells Fast Company.
138194724
submission
tedlistens writes:
Whoever wins, the digital political wars aren't going anywhere, and one group says it's built the most powerful weapon for persuading voters with Facebook ads, backed by a hundred experiments, an unparalleled dataset, and a Silicon Valley "dream team" of data and social scientists. This spring, it says its ads lowered Trump's approval rating by 3.6%.
The group's leader, James Barnes, helped pioneer some of its cutting-edge techniques in 2016 while working for the other side: he was the Facebook "embed" who taught the Trump campaign how to use Facebook and Instagram with tremendous success. His new effort—targeting low-information voters on all sides of the spectrum, as part of an aggressive $75 million campaign by a well-funded Democratic super PAC—isn't just an implicit attempt to make up for his work for Trump; it's a response to the mess his former employer has helped make, and an election marred by waves of misinformation. “One of the reasons that the work that we’re doing is so important, just reminding low-information voters—folks whom we have a lot of evidence that their opinions are easier to shift, and putting mainstream, fact-checked news in front of folks that help them understand what’s actually going on in the world—is because all of the decisions that Facebook has made, especially in the past few years."
136114208
submission
tedlistens writes:
The CEO of Taser maker Axon, Rick Smith, has a lot of high-tech ideas for fixing policing. One idea for identifying potentially abusive behavior is AI, integrated with the company's increasingly ubiquitous body cameras and the footage they produce. In a patent application filed last month, Axon describes the ability to search video not only for words and locations but also for clothing, weapons, buildings, and other objects. AI could also tag footage to enable searches for things such as “the characteristics [of] the sounds or words of the audio,” including “the volume (e.g., intensity), tone (e.g., menacing, threatening, helpful, kind), frequency range, or emotions (e.g., anger, elation) of a word or a sound.”
Building that kind of software is a difficult task, and in the realm of law enforcement, one with particularly high stakes. But Smith also faces a more low-tech challenge, he tells Fast Company: making his ideas acceptable both to intransigent police unions and to the communities those police serve. Of course, right now many of those communities aren’t calling for more technology for their police but for deep reform, if not deep budget cuts. And police officers aren't exactly clamoring for more scrutiny, especially if it's being done by a computer.
135167201
submission
tedlistens writes:
Facing questions about a mysterious series of changes to some fact-check labels, Facebook recently wrote to a group of senators with an assurance: its fact checkers can and do label "opinion" content if it crosses the line into falsehood.
What Facebook didn't tell the senators: the company draws that line, and can pressure changes to fact checks & misinformation penalties. And it does. Facebook acknowledged to me that it may ask fact checkers to change their ratings, and that it exercises control over pages' internal misinformation strikes.
In one case—a video containing misinformation about climate change published by PragerU—Facebook downgraded a fact-check label from "false" to "partly false," and removed the page's misinformation strikes.
Was the change warranted? "Let me put it this way," says Scott Johnson, an editor at Climate Feedback, one of Facebook's third-party fact checking organizations. "Our reviewers gave it a -2 rating on our +2 to -2 scale and our summary describes it as 'incorrect and misleading to viewers,' so we had selected the 'false' label accordingly."
In some cases the video now carries no apparent label at all. After an update that Facebook announced last week, the company is using what it calls a "lighter-weight warning label" for "partly false" content in the U.S.: an unobtrusive box below the video under "related articles" that says "fact check," with a link. Meanwhile, older versions of the video appeared to evade labels completely: A handful of other PragerU posts containing the video appear without any labeling, a review by Fast Company found. Versions of the labeled and unlabeled video have now racked up millions of views since April 2016, when it was first published.
132259860
submission
tedlistens writes:
Without cellphone video, George Floyd’s death might have been what the Minneapolis police initially described in a statement as simply a “medical incident during a police interaction.” Fortunately, the officers were also filming the entire encounter on their body cameras, the result of a previous round of reforms aimed at reducing force and enhancing transparency. And yet, the public still hasn’t seen those videos: Like many states, Minnesota gives police wide discretion about when and how to release the footage, if at all.
It’s a pattern repeated at police departments across the country, and it adds to a growing chorus of questions about the actual impact of police video.
“We spent a king’s ransom on body cameras in this country, for accountability,” says Barry Friedman, a professor at New York University School of Law and director of the Policing Project. But research shows that the cameras aren't having their intended effects. And the devices raise other concerns about police reform: policies and laws keep videos from the public and allow cops to manipulate what gets captured on camera, while new technologies like live-streaming and face recognition are turning cameras into powerful street-level surveillance tools.
118236676
submission
tedlistens writes:
Social Science One, an unprecedented, Mark Zuckerberg-backed plan to open up Facebook's data to outside researchers—with the aim of fighting disinformation and propaganda ahead of elections in 2020—has run up against privacy concerns at Facebook. A month after the funders' deadline, Facebook continues to work on treating the data with differential privacy techniques and says it hopes to publish more datasets soon. But researchers are frustrated and confused, and the backers are reconsidering their support. And lawmakers like Sen. Mark Warner, the vice chair of the Senate Intelligence Committee, are growing impatient too.
“In Congress, we need to require greater accountability from social media platforms on everything from the transparency of political ad funding, to the legitimacy of content, to the authenticity of user accounts,” Warner tells Alex Pasternack at Fast Company. “And if platforms refuse to comply, we need to be able to hold them responsible.”
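For readers unfamiliar with the differential privacy techniques Facebook says it is applying to the data, here is a minimal illustrative sketch of the core idea, the Laplace mechanism: adding calibrated noise to aggregate counts so no single user's presence can be inferred. This is a textbook example under stated assumptions, not Facebook's actual pipeline, and the numbers are made up.

import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Return a noisy count satisfying epsilon-differential privacy (Laplace mechanism)."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

url_shares = 1532                          # hypothetical raw count of shares of one URL
print(dp_count(url_shares, epsilon=0.5))   # smaller epsilon = more noise, stronger privacy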
108063204
submission
tedlistens writes:
"One of the most common techniques people think can help hide their activity is the use of an "incognito" mode in a browser," writes Michael Grothaus at Fast Company. But "despite what most people assume, incognito modes are primarily built to block traces of your online activity being left on your computer--not the web. Just because you are using incognito mode, that doesn't mean your ISP and sites like Google, Facebook, and Amazon can't track your activity."
However, there's still a way to brew your own, safer "incognito mode." It's called browser compartmentalization. Grothaus writes: "The technique sees users using two or even three browsers on the same computer. However, instead of switching between browsers at random, users of browser compartmentalization dedicate one browser to one type of internet activity, and another browser to another type of internet activity."