131419520
submission
Lasrick writes:
Twelve years (and billions of rubles) after skirmishes between pro-Russian separatists and government forces in Georgia, and the subsequent invasion of the former Soviet republic by Russian forces, Russia has heeded the lessons of that conflict: The Russian military had gone to war using World War II-era compasses for navigation and outmoded equipment for weapons targeting, a far cry from the capabilities of the US military. But Russia is now challenging the United States’ long-standing supremacy in space, working to exploit the US military’s dependence on space systems for communications, navigation, intelligence, and targeting.
Aaron Bateman of Johns Hopkins, a former US Air Force intelligence officer who has published on technology and military strategy, Cold War history, and European security affairs, writes about a coming space arms race, with Moscow’s aggressive behavior in space potentially inducing the United States to pursue more assertive policies, like the reinvigoration of Cold War-era anti-satellite weapons programs.
124064628
submission
Lasrick writes:
Scientists with only the pursuit of truth in mind have proven—through meticulous radiocarbon dating and NO TASTING AT ALL—that half the bottles of expensive aged Scotch whisky they tested weren’t as old and valuable as purported.
122924310
submission
Lasrick writes:
On the surface, who could disagree with quashing the idea of supposed killer robots? Dr. Larry Lewis, who spearheaded the first data-based approach to protecting civilians in conflict, wants us to look a bit closer.
121161812
submission
Lasrick writes:
In a stunning change, India has been aggressively pivoting away from coal-fired power plants and towards electricity generated by solar, wind, and hydroelectric power. "The reasons for this change are complex and interlocking, but one aspect in particular seems to stand out: The price for solar electricity has been in freefall, to levels so low they were once thought impossible." This is a piece of exceptionally good news, as it follows on the heels of the general chaos and weakening of goals that seem to have come out of last week's UN climate conference in Madrid, where the United States, Australia, and Brazil pushed for carbon loopholes, sending the conference into overtime and diluting the call for countries to strengthen their commitments under the Paris Agreement.
António Guterres, the UN Secretary-General, had been pushing for the world's biggest emitters to do much more. Guterres took to Twitter over the weekend: "I am disappointed with the results of #COP25. The international community lost an important opportunity to show increased ambition on mitigation, adaptation & finance to tackle the climate crisis. But we must not give up, and I will not give up."
119999386
submission
Lasrick writes:
You may remember the explosion at VECTOR, once a center of Soviet biological warfare research. Filippa Lentzos, senior research fellow jointly appointed in the Departments of War Studies and of Global Health and Social Medicine at King’s College London, just posted an update on what happened after the explosion. Her research focuses on biological threats and on the security and governance of emerging technologies in the life sciences, and she's been covering the accident since it first happened in September.
115861280
submission
Lasrick writes:
Two Canadian climate researchers had each calculated their personal carbon budgets and long believed that a single transatlantic flight would blow a year's allotment. Then they spoke to a mathematics colleague, who helped them crunch the numbers.
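The article walks through the researchers' own figures; a back-of-the-envelope version of that kind of arithmetic might look like the sketch below. Every number here is an illustrative assumption for the sketch, not a figure from the article:

```python
# Rough flight-vs-budget arithmetic. All inputs are illustrative assumptions.

FLIGHT_KM = 2 * 5_500               # round-trip transatlantic distance, km (assumed)
KG_CO2_PER_PASSENGER_KM = 0.15      # rough economy-class emission factor (assumed)
RADIATIVE_FORCING_MULTIPLIER = 2.0  # extra warming from high-altitude effects,
                                    # a commonly used but debated factor (assumed)
ANNUAL_BUDGET_T = 2.5               # example personal carbon budget, tonnes CO2/yr (assumed)

flight_t = FLIGHT_KM * KG_CO2_PER_PASSENGER_KM * RADIATIVE_FORCING_MULTIPLIER / 1000

print(f"One round trip: {flight_t:.1f} t CO2-equivalent")
print(f"Share of a {ANNUAL_BUDGET_T} t annual budget: {flight_t / ANNUAL_BUDGET_T:.0%}")
```

With these assumed inputs, a single round trip comes to roughly 3.3 tonnes CO2-equivalent, well over the example annual budget, which is the shape of the result the researchers describe.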
109834126
submission
Lasrick writes:
This is the first installment in a new series at the New York Times, “Op-Eds From the Future,” in which science fiction authors, futurists, philosophers and scientists write op-eds that they imagine we might read 10, 20 or even 100 years in the future. The challenges they predict are imaginary — for now — but their arguments illuminate the urgent questions of today.
108328724
submission
Lasrick writes:
In an era of unceasing cyberattacks, including cases of state-sponsored hacking, insurance companies are beginning to re-interpret an old line in their contracts known as the “war exclusion.” Take the case of snack company Mondelez International, hit in the so-called “NotPetya” attack of 2017. Zurich Insurance rejected a $100 million claim from the company after the White House, in January 2018, attributed the NotPetya attack to the Russian military. Mondelez filed a lawsuit last fall, so the question now before the court is whether the chain of events that led to NotPetya taking down Mondelez’s network qualifies as warfare. A ruling in favor of Zurich could make cyberwar much more real, and costly.
105595116
submission
Lasrick writes:
Whether bots have First Amendment free speech rights remains in the realm of conjecture, but a court ruling on the question may be coming soon: a new law in California will force bots that engage in electioneering or marketing to declare their non-human identity. Laurent Sacharoff, a law professor at the University of Arkansas, thinks the people programming bots may want US courts to answer the free speech question in the affirmative. Take a hypothetical bot that engages a voter around a shared concern like motherhood, for instance. “If it has to say, ‘Well look, I’m not really a mother, I’m a chatbot mother, a mother of other chatbots. And when I say I feel your pain, I don’t actually have feelings.’ That’s just not going to be very effective,” Sacharoff says.
96572189
submission
Lasrick writes:
A former Army Ranger—who happens to have led the team that established Defense Department policy on autonomous weapons—explains in an interview with the Bulletin of the Atomic Scientists just what these weapons are good for, what they’re bad at, and why banning them is going to be a very difficult challenge.
96251887
submission
Lasrick writes:
In 2017, the cyber threat finally began to seem real to the general public. Advances in biotech over the year could enable the deliberate spread of disease, among a host of other dangers. And then there were the leaps forward in AI. Here's a roundup of the Bulletin of the Atomic Scientists' coverage of emerging technological threats over the past year.
88259189
submission
Lasrick writes:
'For more than three years, rather than rely on military officers working out of isolated bunkers, Russian government recruiters have scouted a wide range of programmers, placing prominent ads on social media sites, offering jobs to college students and professional coders, and even speaking openly about looking in Russia’s criminal underworld for potential talent.' Important read.
86673965
submission
Lasrick writes:
Blockchain technology has been slow to gain adoption in non-financial contexts, but it could turn out to have invaluable military applications. DARPA, the storied research unit of the US Department of Defense, is currently funding efforts to find out if blockchains could help secure highly sensitive data, with potential applications for everything from nuclear weapons to military satellites.
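One way to see the appeal for securing sensitive data is the tamper evidence a hash chain provides. The sketch below is a toy illustration of that primitive, not DARPA's design or any real system: each record commits to the hash of its predecessor, so altering any earlier entry invalidates everything after it.

```python
import hashlib
import json

def chain_records(records):
    """Link records into a hash chain: each block stores the hash of its predecessor."""
    chain, prev_hash = [], "0" * 64  # the first block points at an all-zero hash
    for payload in records:
        body = {"payload": payload, "prev_hash": prev_hash}
        block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": block_hash})
        prev_hash = block_hash
    return chain

def verify(chain):
    """Recompute every hash; tampering with any earlier block breaks all later links."""
    prev_hash = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev_hash": prev_hash}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

ledger = chain_records(["telemetry-update-1", "key-rotation", "telemetry-update-2"])
assert verify(ledger)
ledger[0]["payload"] = "tampered"  # alter an early record...
assert not verify(ledger)          # ...and verification fails downstream
```

A real deployment would add distributed consensus and signatures on top, but the core integrity guarantee is this chaining of hashes.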
81855003
submission
Lasrick writes:
Ray Weiss of the Scripps Institution of Oceanography describes how countries report greenhouse gas emissions: a 'bottom-up' approach that can result in inventories that differ from those determined by measuring the actual increases of emitted gases in the atmosphere. Weiss proposes a 'top-down' atmospheric monitoring system for greenhouse gases and describes the technology that already exists for building one. Fascinating stuff.
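As a toy illustration of the top-down idea (a crude one-box estimate, not Weiss's actual methodology), the global emission rate of a long-lived gas can be inferred from its measured atmospheric rise. All inputs below are rough assumptions:

```python
# Toy 'top-down' estimate: infer global CO2 emissions from the measured
# concentration rise in the atmosphere. Inputs are rough assumptions.

PPM_RISE_PER_YEAR = 2.5   # observed annual CO2 increase, ppm (assumed)
GT_CO2_PER_PPM = 7.8      # mass of CO2 corresponding to 1 ppm in the atmosphere
AIRBORNE_FRACTION = 0.45  # share of emissions that stays in the air; the rest
                          # is absorbed by ocean and land sinks (assumed)

implied_emissions = PPM_RISE_PER_YEAR * GT_CO2_PER_PPM / AIRBORNE_FRACTION
print(f"Implied global emissions: ~{implied_emissions:.0f} Gt CO2 per year")
# Comparing this top-down number against the sum of national bottom-up
# inventories is the basic consistency check such a monitoring system offers.
```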
81121129
submission
Lasrick writes:
A pretty informative debate on banning autonomous weapons has just closed at the Bulletin of the Atomic Scientists. The debate centers on an open letter, published in July 2015 and endorsed by high-profile individuals such as Stephen Hawking, in which researchers in artificial intelligence and robotics called for 'a ban on offensive autonomous weapons beyond meaningful human control.' The letter echoes arguments made since 2013 by the Campaign to Stop Killer Robots, which views autonomous weapons as 'a fundamental challenge to the protection of civilians and to international human rights and humanitarian law.'
But support for a ban is not unanimous. Some researchers argue that autonomous weapons would commit fewer battlefield atrocities than human beings—and that their development might even be considered morally imperative. The authors in this debate focus on two questions: Would deployed autonomous weapons promote or detract from civilian safety? And is an outright ban the proper response to the development of autonomous weapons?