Comment Re:fake news!!! (Score 1) 79

CBP and the government have been collecting data directly from the airlines ever since the aftermath of 9/11 through a number of programs, for example to check passengers against watch lists and to verify the identities of travelers on international flights.

What has changed is that by buying data from a commercial broker instead of through a congressionally instituted program, CBP bypasses judicial review and the limits Congress set on data collected through those programs -- for example, it can track passengers on domestic flights even if they're not on a watch list.

Comment Re: It's not a decline... (Score 1) 168

Fascism isn't an ideology; it's more like a disease of ideology. The main characteristic of fascist leaders is that they're unprincipled; they use ideology to control others, they're not bound by it themselves. It's not just that some fascists are left-wing and others are right-wing. Any given fascist leader is left-wing when it suits his purposes and right-wing when that works better for him. The Nazis were socialists until they got their hands on power and into bed with industry leaders, but it wasn't a turn to the right. The wealthy industrialists thought they were using Hitler, but it was the other way around. The same with Mussolini. He was socialist when he was a nobody but turned away from that when he lost his job at a socialist newspaper for advocating militarism and nationalism.

In any case, you should read Umberto Eco's essay on "Ur-Fascism", which tackles the extreme difficulties in characterizing fascism as an ideology (which as I stated I don't think it is). He actually lived under Mussolini.

Comment Midjourney lawsuit - both necessary and inevitable (Score 1) 76

It was only a matter of time. With Disney and NBCUniversal now suing Midjourney for training on their IP and outputting near-replicas of characters like Aladdin and the Minions, we’ve officially entered the next phase of the AI copyright wars. This isn't a fan-fiction dispute or a YouTube takedown. This is major-league litigation—backed by companies who understand copyright law better than anyone because they’ve weaponized it for decades.

And you know what? On this point, I’m with them.

I’ve argued before—and still believe—that creators, whether they’re indie artists or billion-dollar studios, deserve compensation when their work is harvested as fuel for someone else’s generative model. This applies to Midjourney, just as it does to Meta and OpenAI. Remember, Meta’s LLaMa has also come under fire for training on copyrighted books. The “but it was on the internet” defense doesn’t hold up when your model learns to replicate the style, structure, and soul of other people’s work. You’re not building from scratch—you’re remixing without consent or credit.

Yes, copyright law needs modernization. Yes, fair use is important. But let’s not pretend this is fair use. If you train a model on The Lion King, then ask it to draw a lion with big eyes in a sunset and get something that’s 95% Simba, you’ve crossed a line. That’s not transformative—that’s substitution.

To be clear: Hollywood wordsmiths are already using Midjourney, Sora, and open-source Hugging Face models to generate visuals for the shows they are hired to create. When it comes to generating locations, atmospheres, and character sketches, these tools are astonishingly good. Being able to see the scene, or generate a beat sheet for the emotional arc they are trying to capture, is beyond useful. I think generative models have real power to augment human creativity.

But augmentation doesn’t mean expropriation. And that power doesn’t excuse how these models were built. If your model can conjure a close approximation of a Disney character, that’s not fair use—it’s mimicry at scale. If it can generate Minions on demand because it was trained on millions of frames of Minions without paying Universal, you’re not in a legal gray zone. You’re in infringement territory.

Studios suing AI platforms doesn’t automatically make the studios the good guys. (Disney crying foul over creative overreach is rich.) But that doesn’t make them wrong. If you're going to claim your model learns like a human, then it’s time to follow the human rules: using someone else’s work without permission or payment isn’t innovation—it’s theft.

Comment Re:Weird (Score 1) 95

It's so weird that so many people are ignoring the massive accuracy issues of LLMs and have this misguided idea that you can just trust the output of a computer because... well, it's a computer.

What’s actually weird is pretending anyone in AI development is saying “just trust the computer.” Nobody is advocating blind trust—we’re advocating tool use. You know, like how compilers don’t write perfect code, but we still use them. Or how your IDE doesn’t understand your architecture, but it still catches your syntax errors.

Even weirder? Watching people whose jobs are 40% boilerplate and 60% Googling suddenly develop deep philosophical concerns about epistemology. Everyone who isn’t rage-posting their insecurity over imminent obsolescence is treating LLMs like any other fallible-but-useful tool—except this one just happens to be better at cranking out working code than half of GitHub.

You're not warning us about trust. You're panicking because the tool is starting to do your job—and it doesn’t need caffeine, compliments, or a pull request approval.

It's literally using random numbers in its text generation algorithm.

Translation: randomness isn’t the problem. It’s your discomfort with why it still works anyway that has your knickers in a twist.

That sentence is doing a lot of work to sound like it understands probabilistic modeling. Spoiler: it doesn’t. Claiming LLMs are invalid because they use randomness is like claiming Monte Carlo methods in physics or finance are junk science. Randomness isn’t failure—it’s how we explore probability spaces, discover novel solutions, and generate diverse, coherent outputs.
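The Monte Carlo comparison is easy to make concrete. Here is a minimal, self-contained sketch (the function name and sample count are my own choices, not anything from the article): estimating pi by throwing random points at the unit square. The randomness is the method, not a defect -- more samples, better answer.

```python
import random

def estimate_pi(n_samples: int, seed: int = 42) -> float:
    """Estimate pi by sampling random points in the unit square
    and counting how many land inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Area of quarter circle / area of square = pi/4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # converges toward 3.14159 as n grows
```

Nobody calls this "junk science"; the error shrinks like 1/sqrt(n), entirely by design.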

If you actually understood beam search, temperature, or top-k sampling, you’d know “random” here means controlled variation, not “magic 8-ball with delusions of grammar.” Controlled randomness is what lets LLMs generate plausible alternatives—something you’d know if you’d ever tuned a sampler instead of just rage-posting from the shallow end of the AI pool.
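For anyone who hasn't tuned a sampler: "controlled variation" fits in a few lines. This is a hand-rolled illustration of top-k sampling with temperature (the toy logits and function name are mine, not from any real model's API): keep only the k most likely tokens, sharpen or flatten their scores with temperature, then draw one at random from that restricted distribution.

```python
import math
import random

def sample_top_k(logits: dict[str, float], k: int = 3,
                 temperature: float = 0.8, seed: int = 0) -> str:
    """Keep the k highest-scoring tokens, rescale by temperature,
    apply softmax, and draw one token from the result."""
    rng = random.Random(seed)
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    scaled = [score / temperature for _, score in top]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    r = rng.random()
    acc = 0.0
    for (token, _), p in zip(top, probs):
        acc += p
        if r < acc:
            return token
    return top[-1][0]

logits = {"cat": 2.1, "dog": 1.9, "car": 0.2, "the": -1.0}
print(sample_top_k(logits, k=2))  # only ever "cat" or "dog"
```

Low temperature collapses toward the single best token; high temperature spreads probability across the candidates. That dial is the whole point: variation you can bound, not a magic 8-ball.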

If your job is threatened by a model that uses weighted randomness, I have bad news: your stackoverflow-to-clipboard ratio was higher than you thought. Time to read the GPT-text on the wall and start plotting your next career pivot.

Why not just use astrology?

Because astrology never passed the bar exam, defeated a world-class Go champion, debugged a microservice, or explained value vs. reference semantics in C without making it worse. LLMs have. Hell, you probably leaned on one the last time your regex failed and you didn’t want to ask in the group chat.

But sure—let’s pretend astrology is the same as a transformer architecture trained on hundreds of billions of tokens and fine-tuned across dozens of domains to produce results you can’t replicate without five open tabs and a panic attack.

Want to know the real difference? Nobody ever replaced a software engineer because they wanted a Capricorn instead of an Aquarius.

You’re not mad because LLMs are inaccurate. You’re mad because they’re accurate enough, cheap enough, and scalable enough for management to finally put a price tag on your replaceability.

The AI doomsday clock for coders is ticking. And you just realized it's set to UTC.

Comment MAGA FDA: Deregulation Disguised as Innovation (Score 1) 95

The FDA just unveiled a sweeping set of policy shifts—faster drug approvals, tighter industry "partnerships," AI-assisted review pipelines, and a renewed focus on processed food additives. On the surface, it reads like a long-overdue modernization push. But dig a little, and it starts to reek of MAGA. When an administration this allergic to science starts promising "gold-standard science and common sense," what they usually mean is less science, more business. Replacing randomized trials with curated real-world data (who is doing the curation, I wonder...surely not Big Pharma?), cutting pre-market testing, and shaving safety review timelines down to national-public-health-emergency levels? That’s not reform; that’s regulatory cosplay. And when these policy proposals are coming from a MAGA-approved physician who is on the public record denouncing school closures during the COVID-19 pandemic, you have to be...skeptical. Makary was approved on a largely party-line vote, with three Democratic senators defecting. This isn't public health policy -- it's just more MAGA dumb-fuckery, hiding in plain sight.

That said, I’m not completely dismissive—particularly when it comes to the FDA’s use of AI. If there’s a defensible, low-risk entry point for generative AI in public health, it’s exactly where the agency is putting it: first-pass reviews of half-million-page submissions, table generation, and low-level document triage. Nobody’s pretending this replaces human judgment (yet), and unlike autonomous vehicles or predictive policing, the harm of a hallucinated table of contents is... manageable.

Still, this policy bundle isn’t just about AI. It’s about redefining what constitutes a sufficient standard of proof, under the guise of efficiency. And in that broader context, even the good ideas—like using causal inference from big datasets to monitor post-market outcomes—risk being co-opted as excuses to approve products faster and cheaper, not better. If the food dye bans and ultraprocessed food warnings survive this policy wave, great. But I wouldn’t count on it. The rest feels like an industry wishlist endorsed by MAGA and fed to their pet FDA chairman.

Comment Re:Don't forget Starlink (Score 1) 105

Back in the days of the Rainbow Series, the Orange Book required that data marked as secure could not be transferred to any location or user that (a) was not authorised to access it or (b) did not have the security permissions, regardless of any other authorisation. There was an additional protocol listed in those manuals -- I don't know if it was ever applied -- which stated that data could not be transferred to any device or network that did not enforce the same security rules or was not authorised to access that data.

Regardless, in more modern times, these protocols were all abolished.

Had they not been, and had all protocols been put in place and enforced, then you could install all the unsecured connections and unsecured servers you liked, without limit. It wouldn't have made the slightest difference to actual security, because the full set of protocols would have required the system as a whole to not place sensitive data on such systems.
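The rule being described is simple enough to sketch in a few lines. This is a hypothetical illustration in the spirit of Orange Book mandatory access control (the level names, function, and flags are my own shorthand, not the actual specification): a transfer is allowed only if the destination is cleared to at least the data's level, is authorised for that data, and itself enforces the same policy.

```python
# Hypothetical sketch of mandatory access control in the Orange Book spirit.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def may_transfer(data_level: str, dest_level: str,
                 dest_enforces_policy: bool, dest_authorised: bool) -> bool:
    """Allow a transfer only if the destination is cleared to at least
    the data's level, is authorised for this data, and itself enforces
    the same mandatory access control rules."""
    return (dest_authorised
            and dest_enforces_policy
            and LEVELS[dest_level] >= LEVELS[data_level])

# A secret file never reaches an unclassified phone, no matter whose
# password was used -- the check is on the system, not the login.
assert not may_transfer("secret", "unclassified", True, True)
assert may_transfer("secret", "top_secret", True, True)
```

The point of mandatory (as opposed to discretionary) control is exactly the one made above: no password or clearance lets a user route data around the check, because the system itself refuses the transfer.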

After the Clinton email server scandal, the Manning leaks, and the Snowden leaks, I'm astonished this wasn't done. I am dubious the Clinton scandal was actually anything like as bad as the claimants said, but it doesn't really matter. If these protocols were all in place, then it would be absolutely impossible for secure data to be transferred to unsecured devices, and absolutely impossible for secure data to be copied to machines that had no "need to know", regardless of any passwords obtained and any clearance obtained.

If people are using unsecured phones, unsecured protocols, unsecured satellite links, etc, it is not because we don't know how to enforce good policy, the documents on how to do this are old and could do with being updated but do in fact exist, as does the software that is capable of enforcing those rules. It is because a choice has been made, by some idiot or other, to consider the risks and consequences perfectly reasonable costs of doing business with companies like Microsoft, because companies like Microsoft simply aren't capable of producing systems that can achieve that kind of level of security and everyone knows it.

Comment Re:Honestly this is small potatoes (Score 1) 105

In and of itself, that's actually the worrying part.

In the 1930s, and even the first few years of the 1940s, a lot of normal (and relatively sane) people agreed completely with what the fascists were doing. In the Rhythm 0 "endurance art" by Marina Abramović, normal (and relatively sane) people openly abused their right to do whatever they liked to her, at least up to the point where one tried to kill her with a gun that had been supplied as part of the installation, at which point the people realised they may have gone a little OTT.

Normal (and relatively sane) people will agree with, and support, all kinds of things most societies would regard as utterly evil, so long as (relative to some aspirational ideal) the evil is incremental, with each step in itself banal.

There are various (now-disputed) psychology experiments that attempted to study this phenomenon, but regardless of the credibility of those experiments, there's never really been much of an effort by any society to actually stop, think, and consider the possibility that maybe they're a little too willing to agree to stuff that maybe they shouldn't. People are very keen to assume that it's only other people who can fall into that trap.

"Normal and sane" is, sadly, as Rhythm 0 showed extremely well, not as impressive as we'd all like to think it is. The veneer of civilisation is beautiful to behold, but runs awfully thin and chips easily. Normal and sane adults are not as distant from chimpanzees as our five million years of divergence would encourage us to think. Which is rather worrying, when you get right down to it.

Comment Re:Honestly this is small potatoes (Score 0) 105

Pretty much agree. I'd also add that we don't have a clear impression of who actually did the supposed rioting; the media were too busy being shot by the National Guard to get a clear picture.

(We know that during the BLM "riots" a suspiciously large number of the "rioters" were later identified as white nationalists, and we know that in the British police spy scandal the spies often advocated or led actions more violent than those espoused by the groups they had infiltrated, so I'd be wary of making any assumptions in the heat of the moment as to exactly who did what, until that is clearly and definitively known. If this had been a popular uprising, I would not have expected such small-scale disturbances -- the race riots of the '60s, the Rodney King riots, the British riots in Brixton or Toxteth in the '80s dwarfed the minor events we're seeing in California, which are on a very much smaller scale than the protest marches that have been taking place.)

This is different from the Jan 6th attempted coup, when those involved made it very clear they were indeed involved and were very clearly affiliated with domestic terrorist groups such as the Proud Boys. Let's get some clear answers as to exactly what scale this was and who it involved, because, yes, this has a VERY Reichstag-fire vibe to it.

Comment Re:Honestly this is small potatoes (Score 2) 105

I would have to agree. There is no obvious end-goal of developing an America that is favourable to the global economy, to Americans, or even to himself, unless we assume that he meant what he said about ending elections and becoming a national dictator. The actions favour destabilisation, fragmentation, and the furthering of the goals of anyone with the power to become a global dictator.

Exactly who is pulling the strings is, I think, not quite so important. The Chechen leader has made it clear he sees himself as a future leader of the Russian Federation, and he wouldn't be the first tyrant to try and seize absolute power in the last few years. (Remember Wagner?) We can assume that there's plenty lurking in the shadows, guiding things subtly in the hopes that Putin will slip.

Comment Did he rename his preferred existing parts? (Score 1) 105

The Trump administration has been largely a copy-paste production. When they initially wanted to "replace" the ACA back during his first term, their plan was to replace the ACA with the ACA -- made better by putting his signature at the end instead of the signature of President Obama. When they were finally called out on that, they quietly dropped their efforts to repeal the ACA, instead focusing on various things they can do in the name of "border security" (never mind that no effort has been made this term on the wall he used to talk about nonstop).

Comment Re:It's not a decline... (Score 4, Interesting) 168

I think people expect commercial social media networks to be something they can't be -- a kind of commons where you are exposed to the range of views that exist in your community. But that's not what makes social networks money, what makes them money is engagement, and consuming a variety of opinions is tiresome for users and bad for profits. When did you ever see social media trying to engage you with opinions you don't agree with or inform you about the breadth of opinion out there? It has never done that.

The old management of Twitter had a strategy of making it a big tent, comfortable for centrist views and centrist-adjacent views. This enabled it to function as a kind of limited town common for people who either weren't interested in politics, like authors or celebrities promoting their work, or who wanted to reach a large number of mainly apolitical people. This meant drawing lines on both sides of the political spectrum, and naturally people near the line on either side were continually furious with them.

It was an unnatural and unstable situation. As soon as Musk tried to broaden one side of the tent, polarization was inevitable. This means neither X nor Bluesky can be what Twitter was for advertisers and public figures looking for a broad audience.

At present I'm using Mastodon. For users of old Twitter, it must seem like an empty wasteland, but it's a non-commercial network, it has no business imperative to suck up every last free moment of my attention. I follow major news organizations who dutifully post major stories. I follow some interest groups which are active to a modest degree, some local groups who post on local issues, and a few celebrities like George Takei. *Everybody's* not on it, but that's OK; I don't want to spend more than a few minutes a day on the thing so I don't have time to follow everyone I might be interested in. Oh, and moderation is on a per-server basis, so you can choose a server where the admins have a policy you're OK with.

Comment Re:whatever happened to transparent government? (Score 3, Insightful) 39

No, there are all kinds of information the government has that are legitimately not available. Sensitive data on private citizens, for example, which is why people are worried about unvetted DOGE employees getting unfettered access to federal systems. Information that would put witnesses in ongoing criminal investigations at risk. Military operations in progress and intelligence assets in use.

The problem is that ever since there has been a legal means to keep that information secret, it has also been used to cover up government mistakes and misconduct. It's perfectly reasonable for a government to keep things from its citizens *if there is a specific and articulable justification* that can withstand critical examination.

And sometimes those justifications are overridden by public interest concerns -- specifically when officials really want to bury something like the Pentagon Papers because they are embarrassing to the government. "Embarrassing to the government" should be an argument against secrecy, because of the public interest in knowing the government is doing embarrassing things. In the end, the embarrassment caused by the Pentagon Papers was *good* for the country.
