Re:The problem is not AI but who owns AI
Did you even actually read the report? This is Slashdot, so my money is on "no". Do you even know who the authors are? For example, Friends of the Earth is highly anti-tech: famous for demanding the closure of nuclear plants, opposing mining (in general), opposing logging (in general), protesting wind farms on NIMBY grounds, etc. Basically anti-growth.
The report is a long series of bait and switches.
They talk about how "data centres" consume 1.5% of global electricity, but AI is only a small fraction of that (Bitcoin is the largest fraction).
They make a distinction between "generative AI" and "traditional AI". But what they're calling "traditional AI" today by and large incorporates Transformers (the architecture behind e.g. ChatGPT), and even where it doesn't (for example, non-ViT image recognition models) it tends to approximate Transformers; some outright use multimodal models anyway. It's a dumb argument and dumb terminology in general; all of these models are "generating" results. Their "traditional" AI generally involves generative pipelines, was enabled by the same tech that enabled things like ChatGPT, and advances from the same architectural advancements that advance things like ChatGPT (as well as advancements in inference, servers, etc). Would power per unit of compute have dropped as much as 33x year over year (in certain cases at Google) if not for the demand for things like ChatGPT driving hardware and inference advancements? Of course not.
They use rhetoric that doesn't actually match their findings, like "hoax", "bait and switch", etc. They present zero evidence of coordination, zero evidence of fraud or attempt to deceive, and most of their examples are of projects that haven't yet had an impact, not projects that have in any way failed. Indeed, they use a notable double standard: anything they see as a harm is presented as "clear, proven and growing", even if it's not actually a harm today; but any benefit that hasn't yet scaled is a "hoax", "bait and switch", etc.
One thing they call a "bait and switch" is that the infrastructure is being built for what they class as "generative AI", so supposedly you can't attribute any of it to "non-generative AI". But it's the same infrastructure - literally the same hardware. And companies *do* separate out use cases in their sustainability reports.
They extensively handpick which papers they consider and which they don't. For example, they excluded the famous "Tackling Climate Change with Machine Learning" paper "on the grounds that it does not claim a ‘net benefit’ from the deployment of AI, and pre-dates the onset of consumer generative AI". Except they also classify the vast majority of what they review as "non-generative", so what sort of argument is that? Most of the papers they do include are recent (e.g. from 2025) and thus discuss projects that have not yet been implemented, whereas the Rolnick paper is from 2019, and many of the things it covers have since been implemented.
They have a terrible metric for measuring impacts: counts of claims and citation quality, rather than the magnitude and verifiability of individual impacts. Yet their report claims to be about impacts. They neither model nor attempt to refute the IEA's or Stern et al.'s net-benefit impact studies.
They cite but downplay efficiency gains. Efficiency in general comes from (A) better control and (B) better design. Yet they just handwave away software-driven gains on both fronts: control efficiency gains and improved design software (up from molecular to macroscopic modeling). They handwave away e.g. a study of GitHub Copilot increasing software development speed by 55%, ignoring that this also applies to all the software that boosts efficiency.
They routinely classify claims as "weak" that demonstrably aren't - for example, Google Solar API's solar mapping demonstrably accelerates deployment - but that's "weak" because it comes from a "corporate study". Yet if a corporate study talks about harms (for example, gas turbines at datacentres), they're more than happy to cite it.
It's just bad. Wind forecasting value uplift of 20%? Nope. A 71% decrease in rainforest deforestation in areas monitored by Rainforest Connection? Nope. AI methane leak detection? Nope. AI real-time balancing of renewables (particularly in controlling when grid-scale batteries charge and discharge)? Nope. These are real things, in use today, making real impacts. They don't care. And they're based on the same technological advances that have boosted the rest of the AI field. Transformers boost audio detection. They boost weather forecasting. They boost general data analysis. And indeed, the path forward for power systems modeling involves more human data, like text. It benefits a model to know that, say, an eclipse is coming to area X soon, and not only will this affect solar and wind output, but many people will drive to see it, which will increase EV charging needs along their routes (which requires understanding where people will be coming from, where they're going, and which roads they're likely to choose), while decreasing consumption at those people's homes in the meantime. The path forward for energy management is multimodality. Same with self-driving and all sorts of other things.
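For anyone who thinks "AI real-time balancing" is hand-waving: here's a toy sketch of the core idea - a controller deciding when a battery charges or discharges off a net-load forecast. Every number and name below is invented for illustration (this is not from the report or any real system), and production controllers are far more sophisticated; the point is only that forecast quality, including contextual signals like the eclipse example above, feeds directly into these decisions.

```python
# Toy sketch (all numbers invented, 1-hour timesteps assumed) of
# forecast-driven battery dispatch: charge when forecast net load is low
# (renewable surplus), discharge when it's high (expected shortfall).

from dataclasses import dataclass

@dataclass
class BatteryState:
    capacity_mwh: float
    charge_mwh: float
    max_rate_mw: float

def dispatch(battery: BatteryState, net_load_forecast_mw: list[float],
             low: float, high: float) -> list[float]:
    """Return MW setpoints per hour: negative = charge, positive = discharge."""
    setpoints = []
    for net_load in net_load_forecast_mw:
        if net_load < low and battery.charge_mwh < battery.capacity_mwh:
            # Renewable surplus expected: soak it up.
            rate = min(battery.max_rate_mw, battery.capacity_mwh - battery.charge_mwh)
            battery.charge_mwh += rate
            setpoints.append(-rate)
        elif net_load > high and battery.charge_mwh > 0:
            # Expected shortfall: discharge instead of firing up a peaker.
            rate = min(battery.max_rate_mw, battery.charge_mwh)
            battery.charge_mwh -= rate
            setpoints.append(rate)
        else:
            setpoints.append(0.0)
    return setpoints

# Hypothetical day: midday solar surplus, evening peak. An eclipse or an
# EV-charging surge would show up as changes to this forecast, which is
# exactly where multimodal/contextual models help.
forecast = [900, 850, 400, 200, 150, 300, 700, 1100, 1200, 1000]
battery = BatteryState(capacity_mwh=400, charge_mwh=100, max_rate_mw=100)
print(dispatch(battery, forecast, low=350, high=950))
```

The better the forecast, the less the battery sits idle or gets caught empty at the peak - that's the whole value proposition.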
If you're forecasting AI causing ~1% of global emissions by 2030 - and that's a forecast that assumes a lot of growth - you really don't need much in the way of efficiency gains to offset it. The real concern is not what they focus on here: it's Jevons' Paradox. It's not what the AI itself consumes, but what happens if global productivity increases greatly because of AI. Then it doesn't have to offset a 1% increase in emissions - it has to offset a far larger growth of emissions in the broader economy.
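To make that offsetting arithmetic concrete, here's a back-of-envelope sketch with deliberately round, illustrative numbers of my own (the ~50 Gt baseline and the 2%-extra-growth assumption are mine, not from the report); it just shows the shape of the argument, not a forecast.

```python
# Back-of-envelope arithmetic with invented round numbers.

global_emissions_gt = 50.0          # rough order of magnitude, GtCO2e/yr
ai_share = 0.01                     # the ~1%-by-2030 figure from the comment

ai_emissions = global_emissions_gt * ai_share           # ~0.5 Gt
# An AI-driven efficiency gain of a bit over 1% economy-wide offsets that.
offset_needed_fraction = ai_emissions / global_emissions_gt  # 1%

# The Jevons-style worry: suppose AI adds ~2 percentage points of extra
# economic growth per year through 2030 with no change in emissions
# intensity. The induced emissions growth dwarfs AI's own ~1%.
extra_growth = 1.02 ** 5 - 1                             # ~10.4% over 5 years
induced_emissions = global_emissions_gt * extra_growth   # ~5.2 Gt

print(f"AI's own emissions: ~{ai_emissions:.1f} Gt")
print(f"Offset needed for that alone: ~{offset_needed_fraction:.0%} economy-wide")
print(f"Induced emissions if growth outpaces decarbonization: ~{induced_emissions:.1f} Gt")
```

That order-of-magnitude gap is why the productivity rebound, not the datacentre power draw, is the number worth arguing about.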