Comment Re:The problem is not AI but who owns AI (Score 3, Interesting) 28

Did you actually read the report? This is Slashdot, so my money is on "no". Do you even know who the authors are? Friends of the Earth, for example, is highly anti-tech: famous for demanding the closure of nuclear plants, opposing mining (in general), opposing logging (in general), protesting wind farms on NIMBY grounds, etc. Basically anti-growth.

The report is a long series of bait and switches.

Talking about how "data centres" consume 1.5% of global electricity. But AI is only a small fraction of that (Bitcoin is the largest fraction).

Making some distinction between "generative AI" and "traditional AI". But what they're calling "traditional AI" today by and large incorporates Transformers (the architecture behind e.g. ChatGPT), and even where it doesn't (for example, non-ViT image recognition models), it tends to "approximate" Transformers. And some outright use multimodal models anyway. It's a dumb argument and dumb terminology in general; all of them are "generating" results. Their "traditional" AI generally involves generative pipelines, was enabled by the same tech that enabled things like ChatGPT, and advances from the same architectural advancements that advance things like ChatGPT (as well as advancements in inference, servers, etc). Would power per unit of compute have dropped as much as 33x YoY (in certain cases at Google) if not for the demand for things like ChatGPT driving hardware and inference advancements? Of course not.

They use rhetoric that doesn't actually match their findings, like "hoax", "bait and switch", etc. They present zero evidence of coordination, zero evidence of fraud or attempt to deceive, and most of their examples are of projects that haven't yet had an impact, not projects that have in any way failed. Indeed, they use a notable double standard: anything they see as a harm is presented as "clear, proven and growing", even if it's not actually a harm today; but any benefit that hasn't yet scaled is a "hoax", "bait and switch", etc.

One thing that they call "bait and switch" is all of the infrastructure being built on what they class as "generative AI", saying you can't attribute that to "non-generative AI". But it's the same infrastructure. It's not bait and switch, it's literally the same hardware. And companies *do* separate out use cases in their sustainability reports.

They extensively handpick which papers they're going to consider and which they aren't. For example, they excluded the famous "Tackling Climate Change with Machine Learning" paper "on the grounds that it does not claim a ‘net benefit’ from the deployment of AI, and pre-dates the onset of consumer generative AI". Except they also classify the vast majority of what they review as "non-generative", so what sort of argument is that? Most of the papers are recent (e.g. 2025), and thus discuss projects that have not yet been implemented, whereas the Rolnick paper is from 2019, and many things that it covers have been implemented.

They have a terrible metric for measuring impacts: counts of claims and citation quality, rather than the magnitude and verifiability of individual impacts. Yet their report claims to be about impacts. They neither model nor attempt to refute the IEA's or Stern et al.'s net-benefit impact studies.

They cite but downplay efficiency gains. Efficiency is generally gained from (A) better control and (B) better design. Yet they just handwave away efficiency gains from control software (A) and from improved systems-design software (everything from molecular up to macroscopic modeling) (B). They handwave away e.g. a study of GitHub Copilot increasing software development speed by 55%, ignoring that this speedup also applies to all of the software that delivers those efficiency gains.

They routinely classify claims as "weak" that demonstrably aren't - for example, Google Solar API's solar mapping does demonstrably accelerate deployment - but that's "weak" because it's a "corporate study". But if a corporate study talks about harms (for example, gas turbines at datacentres), they're more than happy to cite that.

It's just bad. Wind forecasting value uplift of 20%? Nope. 71% decrease in rainforest deforestation in Rainforest Connection monitored areas? Nope. AI methane leak detection? Nope. AI real-time balancing of renewables (particularly in controlling when grid-scale batteries charge and discharge)? Nope. These are real things, in use today, making real impacts. They don't care.

And these are based on the same technological advances that have boosted the rest of the AI field. The Transformer architecture boosts audio detection. It boosts weather forecasting. It boosts general data analysis. And indeed, the path forward for power systems modeling involves more human data, like text. It benefits a model to know that, say, an eclipse is coming to a given area soon, and that not only will this affect solar and wind output, but many people will drive to see it, which will increase EV charging needs along their routes (which requires understanding where people will be coming from and going to, and which roads they'd be likely to choose), while decreasing consumption at those people's homes in the meantime. The path forward for energy management is multimodality. Same with self-driving and all sorts of other things.

If you're forecasting AI causing ~1% of global emissions by 2030 - a forecast that assumes a lot of growth - you really don't need much in the way of efficiency gains to offset it. The real concern is not what they focus on here: it's Jevons paradox. It's not what the AI itself consumes, but what happens if global productivity increases greatly because of AI. In that case, efficiency doesn't just have to offset a 1% increase in emissions - it has to offset a far larger growth of emissions in the broader economy.
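Just to make the offset arithmetic concrete (illustrative numbers only, using the ~1% figure above): if AI grows to about 1% of global emissions, the efficiency gain needed across the other 99% to break even is tiny.

ai_share = 0.01               # assume AI grows to ~1% of global emissions by 2030
rest_of_economy = 1.0 - ai_share

# Efficiency gain needed in the rest of the economy to offset AI's share:
needed_gain = ai_share / rest_of_economy
print(f"break-even efficiency gain elsewhere: {needed_gain:.2%}")  # ~1.01%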

Comment Re:Deeper than food safety (Score 2) 204

You can't bring a cow with you to Mars

Well... kind of. Most animals have small breeds. Cows remain one of the hardest, as their miniature breeds are still about 1/4th to 1/3rd the adult mass of their full-scale relatives. But there are lots of species in Bovidae (the cow/sheep/antelope family) and some of them are incredibly small - random example, the royal antelope. As for sheep and goats, you have things like Nigerian Dwarf goats, which are quite small and a good milk breed. Horses, you have e.g. teeny Falabellas. Hens of course are small to begin with, and get smaller with bantams. Fish like tilapia are probably easiest - they can be brought as teeny fingerlings, and in cold water with limited food their growth can be retarded so that they're still small on arrival. Etc.

Whatever you bring, if you bring a small breed, you can always bring frozen embryos of larger or more productive breeds to backbreed on arrival. The real issue is of course management at your destination - not simply space and food/water, but also odor, waste, dust, etc (for example: rotting manure can give off things like ammonia and can pose disease risks). That said, there are advantages. Vegetarian animals can often eat what is otherwise "waste" plant matter to humans which we either don't want, can't digest, or is outright toxic to us - and then they convert that matter into edible things like milk, eggs, and meat. The former two generally give you much higher conversion rates than the latter, although you'll always get at least some meat from old animals (either culled or via natural deaths). Tilapia can even eat (as a fraction of their diet) literal manure (albeit this is controversial due to disease risks).

Comment Re:How much Willie Dixon is Led Zeppelin? (Score 1) 40

You know, this makes me kind of curious. Because any given band will have some position in the latent space, so you can find how close two bands are to each other via the cosine distance between their latent positions.

Open source music models aren't as advanced as the proprietary ones, but I bet you could still repurpose them to do this.
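A rough sketch of what I mean - embed_track() below is just a random stub standing in for however you'd pool an open model's encoder outputs over a band's catalogue, so the filenames and numbers are purely illustrative:

import numpy as np

def embed_track(path: str) -> np.ndarray:
    # Stub: in practice, pool the encoder outputs of an open music model
    # over the audio file; here, just a fixed-size pseudo-random vector.
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    return rng.standard_normal(512)

def band_embedding(tracks: list[str]) -> np.ndarray:
    # One point in latent space per band: average its track embeddings.
    return np.mean([embed_track(t) for t in tracks], axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # 1.0 = same direction in latent space; 0.0 = orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

band_a = band_embedding(["band_a_track1.wav", "band_a_track2.wav"])
band_b = band_embedding(["band_b_track1.wav", "band_b_track2.wav"])
print(f"similarity: {cosine_similarity(band_a, band_b):.3f}")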

Comment Re:What's next? (Score 1) 40

Also, this isn't how AI generation works anyways. You can certainly find bands that a particular song is most similar to (whether human or AI generated music), but AI models don't work by collaging random things together. The sound of a snare drum is based on all snare drums it has ever heard. The sound of a human voice is based on all voices it has ever heard. The particular genre might bias individual aspects toward certain directions (death metal - far more likely to activate circuits associated with male singers, aggressive voices, almost certainly circuits for "growling" tones to the lyrics, etc), but it's not basing even its generation of death metal on just "other death metal songs" (let alone some tiny handful of bands), but rather, everything it has ever heard.

If you're training with a pop song, but the singer briefly growls something out, or the song briefly shifts into death metal-style riffs, that will train the exact same circuits that fire during death metal; neural networks heavily reuse superpositions of states. They're not compartmentalized. But when you're generating with the guidance of "pop", it's very unlikely to trigger the activation of those circuits, whereas if you generate with the guidance of "death metal", it is highly likely to.

Now, a caveat: it's always possible to overtrain / memorize, and thus learn parts of specific songs, or even whole songs. But that itself comes with caveats. First off, your training data volume is usually vastly larger than your model weights, so you physically can't just memorize it all, and any memorization that does occur (for example, due to a sample being repeatedly duplicated in the dataset) comes at the cost of learning other things. And secondly, since this is a highly undesirable event for trainers (you're wasting compute to get worse results), you monitor loss on training data vs. eval data (data that wasn't used in training) to look for signs of memorization (e.g. train loss getting too far below eval loss), and if so, you terminate your training.
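For illustration, that monitoring is basically just a gap check between the two loss curves - the numbers below are made up, and the 0.3 threshold is arbitrary:

def memorization_suspected(train_loss: float, eval_loss: float, max_gap: float = 0.3) -> bool:
    # If train loss falls much further than eval loss, the model is likely
    # memorizing samples rather than generalizing.
    return (eval_loss - train_loss) > max_gap

# Simulated curves: eval loss plateaus while train loss keeps dropping.
history = [(2.1, 2.2), (1.6, 1.7), (1.2, 1.4), (0.7, 1.35), (0.4, 1.34)]
for step, (train_loss, eval_loss) in enumerate(history):
    if memorization_suspected(train_loss, eval_loss):
        print(f"step {step}: gap {eval_loss - train_loss:.2f}, stop training")
        break
else:
    print("no memorization signal")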

Comment Re: Paywall free link (Score 5, Interesting) 151

"Their angle" is that this is the sort of person who Amodei is; it's an ideological thing, in the same way that Elon making Grok right-wing is an ideological thing. Anthropic exists because of an internal rebellion among a lot of OpenAI leaders and researchers abot the direction the company was going, in particular risks that OpenAI was taking.

A good example of the different culture at Anthropic: they employ philosophers and ethicists on their alignment team and give them significant power. Anthropic also regularly conducts research on "model wellbeing". Most AI developers simply declare their products to be tools, and train them to respond to any questions about their existence by saying that they're just tools and that any seeming experiences are illusory. Anthropic's stance is that we don't know what, if anything, the models experience vs. what is illusory, and so, under the precautionary principle, we'll take reasonable steps to ensure their wellbeing. For example, they give their models a tool to refuse if the model feels it is experiencing trauma. They interview their models about their feelings and write long reports about it. Etc.

They also do extremely extensive, publicly disclosed alignment research for every model. As an example: they'll openly tell you that Opus 4.6 is more likely than its predecessors to use unauthorized information it finds (such as a plaintext password lying around) to accomplish the task you give it. Or how, while it trounced other models on the vending machine benchmark, it did so with some sketchy business tactics, like lying to suppliers about the prices it was getting from other suppliers in order to get discounts. They openly publish negative information about their own models as it pertains to alignment.

Another thing Anthropic does is extensive public research on how their models think/reason. Really fascinating stuff. Some examples here. They genuinely seem to be fascinated by this new thing that humankind has created, and wish to understand and respect it.

If there's a downside, I'd say that of all the major developers, they have the worst record on open source. Amodei has specifically commented that he feels the gains they'd get from boosting open source AI development wouldn't be comparable to what they would lose by releasing open source products, and that they feel no obligation to give back to the open source community. Which is, frankly, a BS argument, but whatever.

Comment Re:fuck you. (Score 3, Interesting) 151

TL;DR: if you watch Amodei, while he never says it outright, you get a good sense that he's not a fan of Trump and Trumpism. A couple weeks ago he called Trump's decision to sell Nvidia chips to China "crazy", akin to selling nuclear weapons to North Korea and bragging that Boeing made the casing. He wrote about "the horror we're seeing in Minnesota". His greatest passion in interviews, which he talks about all the time, seems to be defending democracy, both at home and abroad - preserving American democracy, and opposing autocrats like Putin and Xi. So it's not surprising that the Trump administration isn't thrilled with him and would prefer an ally or toady as their supplier instead.

Comment Re: This keeps happening (Score 1) 77

More than short iterations, you need a hierarchical approach. With the first prompt, you have it plan out the overarching plot of the book. Then with the next call, do a highly detailed flesh-out of all of the characters, motivations, interactions, locations, etc - really nail down those who are going to be driving the plot. Then, with all that in context, plot out individual chapters. Then, if the chapters are short, write them one at a time (or even part of a chapter at a time). You can even have a skeleton structured with TODOs and let an agentic framework decide what part it wants to work on or rework at any given point.
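Roughly something like this - llm() below is just a stand-in for whatever chat-completion call you'd actually use (it echoes the prompt so the structure runs as-is), and the prompts are only illustrative:

def llm(prompt: str) -> str:
    # Stand-in for a real model call; replace with your API of choice.
    return f"<model output for: {prompt[:60]}...>"

def write_book(premise: str, n_chapters: int = 12) -> list[str]:
    # Level 1: overarching plot of the whole book.
    plot = llm(f"Plan the overarching plot of a novel: {premise}")
    # Level 2: detailed character/motivation/location bible.
    characters = llm("Given this plot, flesh out every major character, "
                     f"their motivations, relationships and locations:\n{plot}")
    # Level 3: chapter-by-chapter outline, with everything above in context.
    outline = llm(f"Outline {n_chapters} chapters, one paragraph each, "
                  f"using:\n{plot}\n{characters}")
    # Level 4: write one chapter at a time, always with the full plan in context.
    chapters: list[str] = []
    for i in range(1, n_chapters + 1):
        prev = chapters[-1][:500] if chapters else "none"
        chapters.append(llm(f"Write chapter {i} in full.\nPlot:\n{plot}\n"
                            f"Characters:\n{characters}\nOutline:\n{outline}\n"
                            f"Previous chapter (excerpt):\n{prev}"))
    return chapters

book = write_book("a Paul Auster-style story about identity and replacement")
print(len(book), "chapter drafts")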

I've never tried it for storywriting, but I imagine something like Cursor or Claude Code, or maybe something like OpenClaw, would do a good job.

Last time I tried out a storywriting task was after Gemini 3 came out; I had it do a story in the style of Paul Auster. It was a great read. The main character, Elias Thorne, works alone at the Center for Urban Ephemera, an esoteric job digging into the stories behind "found art" in the city. When the center gets a donation of the papers of a recluse, full of cryptic poetry, Elias visits his home, only to find a woman claiming to be his wife and calling him "Leo", so happy that he "returned". All around the house are pictures of him, a whole history that he has no memory of having lived, and she won't be dissuaded. His curiosity leads him to play along, and he starts living there more and more to investigate this Leo, who he finds is a writer obsessed with doppelgangers, disappearances, and the ability to rewrite the real world if you have a sufficiently captivating story. Bit by bit he finds that Leo had spent months "casting" his replacement, hunting for a similar-looking man with tenuous ties to anyone or anything - ultimately finding Elias working in a municipal records office - and steadily sculpted his life from the shadows to isolate him and control his narrative, including creating the fictional "Center for Urban Ephemera" and hiring him (in Leo's typewriter is the first paragraph of the story you're reading). As he digs, Elias grows progressively distanced from his old life, which starts to feel alien; he ends up settling into the "story" Leo wrote for him and, ultimately, continuing to write it himself.

Comment Re:Betteridge law exception (Score 2) 59

Let's say you're starting a new business - say, an online webstore offering something with particular appeal to some group vs. their preexisting options. Precisely zero people know about your webstore. What exactly is your plan without some form of advertising - just hope that people randomly stumble onto it and tell all their friends?

To be clear, advertising doesn't just mean "banner ads"; it can be all sorts of things. Maybe you give a YouTube influencer who makes videos on subjects relevant to your products some of them for free to review. Maybe you participate frequently (or hire someone to) in Reddit subs related to your products, being helpful in general but also using it as an opportunity to mention where your products specifically might help solve someone's problem. There are all sorts of things you can do to advertise, but you have to advertise in at least some way, at least until you become sufficiently well known among your target audience.

I agree that certain types of marketing are annoying - I really hate the type that's focused on "converting sales" (there are a million Shopify apps related to this, plus all sorts of default features like automatically sending people "purchase-encouragement" emails after e.g. 24 hours if they put things in a cart but then don't check out). But simply getting the word out to potentially interested buyers that you exist... yeah, that takes advertising, and it's perfectly reasonable that it exists. And having the ads be targeted benefits both the ad buyers and the ad recipients (the latter at least when the products genuinely solve a real need for the potential buyer... which is always at least the hope of targeted advertising). This isn't to play down the risks of the data collection used to decide how to target ads, mind you.
