Comment Re:What an Age to Live Through (Score 1) 32

Actually I would say we have more than enough resources to deal with these issues if we wanted to but right now there is no political will to address this on the level that would be required.

That’s hard to argue with. The gap between capacity and will is one of the defining tensions of our time. What’s particularly maddening is that this isn't some moonshot—the technologies, models, and even regulatory templates exist. What’s missing is the structural alignment to prioritize them.

We have the money, we have the people, we have the know-how to study and create action plans, we just don't want to do it and our voting reflects that.

This one I hesitate on. It’s true that voting trends matter, but reducing the failure to act to just voter apathy or preference risks overlooking the asymmetry in how influence operates. Gerrymandering, dark money, lobbying, and procedural gridlock all distort the link between public will and legislative outcome. “We just don’t want to” feels too blunt for a system this engineered.

We found $200B for immigration enforcement, no problem there, jumped in both feet first, this is the issue the American public thinks is #1.

I get your point: when something aligns with MAGA's political narrative, money appears. But again, the mechanism isn’t just public opinion—it’s narrative salience weaponized by media ecosystems and electoral incentives. Environmental policy rarely gets that kind of narrative heat, even though the stakes are existential.

For example, why are there only 11 co-sponsors on this bill to reduce the amount of single-use items and all from one party?

Exactly. That number—11—isn’t a failure of political will; it’s a structural symptom. It points to a deeper truth: the mechanisms that should translate public concern into action are systematically misaligned. Political representation in this country is skewed heavily toward a GOP that does not represent a national majority. Not by a long shot. The Senate, for example, gives Wyoming (580,000 people) the same power as California (39 million) to legislate. That is a simple fact of the US system. The Electoral College inflates the influence of rural states, allowing candidates to win the presidency while losing the popular vote—as happened in 2000 and 2016. And as a result of this structural situation, five of the nine Supreme Court justices were appointed by presidents who first took office after losing the popular vote.

So...when legislation stalls, it’s not for lack of evidence or even popular support. It’s because the architecture of governance in the U.S. is built to resist change—any kind of change, and one party, the GOP (especially in its current MAGA incarnation) ruthlessly exploits that. So yes—even with cultural momentum, an issue that is not MAGA-aligned is going to go nowhere, legislatively. That’s not apathy. That's playing by the rules while the other team works the refs.

Comment Re:What an Age to Live Through (Score 1) 32

We're living in an age where we've advanced scientifically enough to see and study the damage we're doing, but we haven't evolved emotionally and mentally enough to escape the trap of the greed that is making us ignore the problems we're creating because the solutions may impact profits. It's a weird time to be a human. All the guilt of our entire species is coming to the fore, but we have none of the resources to deal with it in a healthy manner.

You’re not wrong—it is a weird time to be human. Speaking as an American, weird seems to be our new normal. We’ve reached a point where our tools have outpaced our maturity, and we’re now seeing the damage in high resolution—scientifically, ecologically, even psychologically. But I try not to let the sheer scale of it turn into fatalism. We may not be emotionally equipped yet, but culture does evolve. Sometimes slowly, sometimes all at once. The fact that we can even name the trap—and have threads like this unpacking it—feels like the start of something, not just the end.

Comment Knowing Isn’t the Hard Part Anymore (Score 2) 32

The Nature paper is devastating—not in tone, but in implication. What the authors have done is akin to lifting a trapdoor we didn’t know was there: beneath the waves, beneath prior sampling thresholds, beneath our assumptions about the scale of the problem—lies a vast reservoir of nanoplastics blanketing the Atlantic, from coastal shelves to abyssal depths.

These are not stray particles. These are quantifiable layers of polyethylene terephthalate (PET), polystyrene (PS), and PVC—1.5 to 32 mg/m³ across every depth measured, totaling tens of millions of metric tons in the mixed layer alone. This implies:
- Nanoplastics now likely exceed the total mass of all macro- and microplastic debris previously measured in the global ocean.
- Our oceanic plastic budget has been catastrophically underestimated.
- The small particle sizes bypass buoyancy constraints, drift with water columns, and may bioaccumulate at every trophic level.
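The "tens of millions of metric tons" claim is easy to sanity-check with back-of-envelope arithmetic. The concentration below sits inside the reported 1.5–32 mg/m³ range; the basin area and mixed-layer depth are rough illustrative assumptions of mine, not values from the paper:

```python
# Back-of-envelope check of the "tens of millions of metric tons" figure.
# Area and depth are rough assumptions for illustration, not paper values.
north_atlantic_area_m2 = 4.0e13   # assumed surface area of the North Atlantic
mixed_layer_depth_m = 200         # assumed mixed-layer depth
concentration_mg_m3 = 3.4         # within the reported 1.5-32 mg/m3 range

mass_mg = north_atlantic_area_m2 * mixed_layer_depth_m * concentration_mg_m3
mass_tonnes = mass_mg / 1e9       # 1 metric ton = 1e9 mg

print(f"{mass_tonnes / 1e6:.0f} million metric tons")  # prints: 27 million metric tons
```

Even at the bottom of the measured range, the total lands in the tens of millions of tonnes; at the top of the range it's an order of magnitude worse.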

So what’s the appropriate response to such a finding?

On one hand, there’s the dawning realization that we now know exactly what we’re doing to the planet. We have the tools to measure it, model it, and even predict its long-term consequences. On the other hand, there’s an equally sharp recognition that we’re doing almost nothing in proportion to that knowledge. We have the capital, the brainpower, the legislative frameworks—we simply choose not to use them.

It’s a bitter paradox: scientific maturity without political adulthood. Knowledge without agency.

This doesn’t mean everyone is paralyzed or indifferent. The fact that papers like this are being published at all is a sign of resilience. People still read, still argue, still call out our economic contradictions with legislation.

But I think we’re in new territory now. The core environmental narrative of the 20th century—"If only we had the data!"—has been flipped. We do have the data. What we lack is the civic substrate to metabolize it. Call it political will, or the people's mandate, or whatever socio-cultural tag you want to wrap it in, the question remains: What happens when evidence is no longer the bottleneck? That’s the real weight I felt reading this paper. It’s not just a measurement of pollution. It’s a measurement of our incapacity to deal with it at scale.

And yet: we’re still talking. Still learning. That might be a low bar—but it’s not nothing.

Comment Re:Fan as CPU spike monitor (Score 1) 32

Your post is exactly the kind of slashvertisement that doesn't deserve reading. It’s a thread hijack—pure and simple—to run up the install counter on a half-baked browser extension (and yes, I checked the GitHub page: it’s crap).

If you had something meaningful to say about Anubis, protocol-level consent, or invisible compute boundaries, you could’ve engaged with any of that. Instead, you offered a sales pitch wrapped in a concern-trolling sandwich. GFY.

Comment Re:Copper tariffs (Re:It's all right) (Score 4, Insightful) 90

Your questions aren’t serious. They read like the kind of softballs lobbed by an ONN intern at a White House press briefing—preloaded to let Trump justify another half-baked tariff with a grin and a grunt. It’s less inquiry, more performance art.

Aren't long haul wires for electrical infrastructure made of steel reinforced aluminum?

Yes, they are—and congratulations on skimming the first paragraph of a Wikipedia article. But unless you’re stringing a single high-voltage line from Hoover Dam to your cousin’s Bitcoin farm, you’re missing 90% of the build. Grid expansion isn't just about transmission—it’s also about substations, transformers, switchgear, and distribution lines, all of which are copper-intensive. Pretending “long haul wires = infrastructure” is like saying a road is just Botts' dots.

Why bring up copper tariffs?

Because this isn’t amateur hour. Every part of modern power expansion—especially those supporting hyperscale data centers—relies on copper. Tariffs drive up costs for the entire electrical ecosystem except the one narrow slice you cherry-picked. It's almost impressive how confidently wrong this question is.

Comment Re:Another step away from UN*X (Score 1) 38

Yet another step of Linux moving further away from UN*X. Originally UN*X was suppose to process plain text

Here we go again: the "UNIX purity spiral meets corporate paranoia" routine. Anything newer than cat | grep | awk is framed as apostasy. But let’s be clear—JSON is plain text. It’s structured, readable, and greppable. Just because it has curly braces and isn’t whitespace-delimited doesn’t mean it violates the UNIX philosophy. If anything, it enables composability at modern scale.
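The "JSON is plain text" point is demonstrable in a few lines. A minimal sketch (the log record here is invented for illustration):

```python
import json

# JSON is still plain text: a raw substring search works on the line,
# exactly like grep would...
record = '{"unit": "kernel", "level": "warn", "msg": "soft lockup"}'
assert "warn" in record

# ...but the structure lets downstream tooling filter on fields
# without fragile regexes.
entry = json.loads(record)
if entry["level"] == "warn":
    print(entry["msg"])  # prints: soft lockup
```

Same bytes, two access paths: the old text-tool pipeline still works, and structured consumers get a schema for free. That's composability, not apostasy.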

now we have yet another stupid standard.

That’s not critique, that’s tantrum. A2A solves a real problem: how autonomous agents—across orgs and tech stacks—securely talk, delegate, and coordinate. Calling it “stupid” because you don’t like the names on the contributor list isn’t analysis. It’s emotional filtering.

Since the Linux Foundation is owned by Microsoft, IBM, Google, Oracle and other Fortune 500 companies

Nope. This tired trope ignores history. The Linux Foundation is funded, not “owned,” by its contributors—just like most of the standards you rely on every day. TCP/IP came from DARPA—the epitome of the military-industrial complex. POSIX was shaped by a cabal of government agencies and corporate giants like AT&T, DEC, and IBM. If you think A2A is uniquely tainted by corporate influence, you’ve either forgotten where your tools came from—or you’re rewriting history to score rhetorical points.

looks like this is being pushed by corporations.

Of course it is. Because scale demands cooperation. Agents running in real-world systems need to coordinate across vendors and clouds. That’s not corporate overreach—that’s operational necessity. Standards are what stop everyone from reinventing a dozen different versions of the wheel.

How about forcing Nvidia to open up their GPU, that is what the real Linux Users want more than anything else.

Then post that—in a relevant thread. A2A isn’t about GPU drivers. Throwing in a “what about Nvidia?” grenade is just derailment theater. It doesn’t make you principled; it makes you unfocused.

Even Linus at one time called out Nvidia on this.

True. But Linus also understands context. This thread is about agent interoperability, not proprietary firmware. If you want to advocate for open GPU stacks, do it properly—not by hijacking an unrelated technical discussion.

You clearly have opinions. But this kind of reactionary sprayfire—where a new proposal is framed as a betrayal of UNIX, a sellout to corporations, and a distraction from your personal wishlist—doesn’t help the conversation. It drowns the signal, misdirects the focus, and just derails the thread.

Comment Re:Really? (Score 1) 38

Moreover, the choice of JSON is stupid. It should be XML.

Honestly? Not a bad take in isolation. XML has stricter schema enforcement, better namespacing, and more mature tooling for validation and contract-first design. If you’re old enough to remember SOAP, WSDL, and the joy of a well-typed XSD, you probably get the appeal.

But A2A isn’t designed for humans writing XML by hand or for enterprise contract rigidity. It’s aiming for interoperability at speed across modern web stacks. JSON wins here, not because it’s better engineered—it isn’t—but because it's ubiquitous, lightweight, and already what most agents and microservices use under the hood. JSON makes it deployable this quarter. That’s the tradeoff.

Comment Re:Really? (Score 1) 38

This comment is a textbook case of the kind of smug, faux-insightful derailment that gets +5’d not because it’s technically strong—but because it flatters the priors of Slashdot’s anti-AI crowd -- early and mid-career code monkeys who know they are going to be replaced by an LLM in the near future. You cosplay at engaging with the technical substance of A2A, but then go full troll: substitute a different problem (LLM alignment, adversarial robustness) and then dismiss A2A for not solving it. This isn't analysis; it's rhetorical bait-and-switch—designed to derail the discussion and farm upvotes from those eager to conflate every AI infrastructure advance with AGI overreach. It's fucking tiresome. Go do it elsewhere.

I'm glad to hear that one of AI's "most pressing challenges" is concluding that you should use TLS on the wire and having a standardized JSON object in which to declare your proprietary extensions;

That’s not the challenge. That’s the solution to a challenge that’s been strangling distributed AI adoption: lack of a neutral, secure protocol for agents to handshake and collaborate across vendors. If TLS and JSON look trivial to you, you’ve either never had to wrangle OAuth hell between microservices at scale or you’re pretending those choices don’t become existential when multiple autonomous systems have to exchange authority, identity, and context.

...rather than the ongoing inability to make LLMs distinguish between commands and data even vaguely reliably; or the persistent weakness to adversarial inputs.

That’s like complaining that TCP doesn’t prevent SQL injection—technically true, completely irrelevant. A2A is plumbing. You can’t fix the faucet until the pipes connect. A2A isn’t in the cognition stack. Why are you even bringing this up? Oh right, you're a troll. Those are legit issues that are actively being worked on -- and being discussed elsewhere. If you think you have something to contribute, then join us there. I doubt it though. Trolls like you have nothing to contribute, ever. All you can offer is distraction.

It's not wrong that you'd want to use the sensible obvious choices and avoid pointless vendor quirks; but talking about 'A2A' as a contribution to solving agentic AI's most pressing challenges seems about as hyperbolic as describing ELF or PE32+ as being notable contributions to software security and quality.

Typical troll bad analogy. You don’t ship software by writing binaries with a hex editor. You need format standards—including PE and ELF—so you can link, deploy, and execute code reliably across systems. A2A does the same for agents: it provides the missing contract layer so distributed AI agents aren’t trapped in their origin silos. That’s not hyperbole. That’s operational necessity.

Yeah, it would be worse if we were also squabbling over how to format our executables; but oh boy is that the unbelievably trivial bit by comparison.

If it's so trivial, why has it taken until 2025 for the Linux Foundation, Google, AWS, Microsoft, and Cisco to rally behind a shared protocol? The answer: because everyone tried to duct-tape this “trivial bit” for years with brittle, proprietary glue, and it broke every time people tried to scale. The trivial parts only feel trivial in hindsight—after someone bothers to standardize them.

The play here isn’t to inflate A2A into AGI hype. It’s to acknowledge that if agent-based AI is going to scale beyond toy demos and fragile demo-bot stacks, it needs boring, robust pipes like A2A. That’s what this is about, and sneering at the plumbing reveals you for what you are -- just another troll trying to derail a conversation.

Comment A2A -- the API layer we should have had years ago (Score 1) 38

Cue the usual chorus of doom-sayers and trollish derailers.

Whenever a pragmatic, infrastructure-focused advance in AI gets announced—especially one involving standards—there’s a depressingly reliable pattern on Slashdot. Someone will pop up to conflate it with AGI hype, minimize its relevance, and then pivot to bashing LLMs with a few tired lines about adversarial prompts and hallucinations. Bonus points if they score a +5 Insightful from lurkers who never read past the headline. (I know, I know...this is slashdot.)

A2A is not about fixing hallucination or cognition. It’s about enabling existing agents—however dumb or smart—to communicate, collaborate, and delegate tasks in a secure, vendor-neutral way. It solves the interoperability mess that has long plagued multi-agent systems. You know, the part that isn’t sexy but actually matters in production.
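One concrete piece of that interoperability story is discovery: an agent advertises what it can do via a JSON "agent card" served from a well-known URL, and other agents read it before delegating. A toy sketch of the idea—field names here are approximate and illustrative, not copied from the spec:

```python
import json

# Illustrative agent card in the spirit of A2A's discovery documents.
# The agent name, URL, and field names are invented for this example.
agent_card = {
    "name": "invoice-extractor",
    "description": "Extracts line items from PDF invoices",
    "url": "https://agents.example.com/invoice-extractor",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {"id": "extract", "name": "Extract line items"},
    ],
}

# A peer agent fetches this document, checks the advertised skills,
# and decides whether to delegate the task.
doc = json.dumps(agent_card)
skills = {s["id"] for s in json.loads(doc)["skills"]}
print("extract" in skills)  # prints: True
```

Boring, yes. But this is the contract layer that lets an agent from vendor A find and task an agent from vendor B without bespoke glue code per pairing.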

It's not some grand leap in AI intelligence. It’s boring on purpose. Like HTTP. Like JSON-RPC. Like every layer of tech that quietly makes things work.

But that nuance gets straw-manned into oblivion by detractors who pretend that unless a protocol cures hallucinations and passes the Turing Test, it’s irrelevant. That’s like dismissing the value of USB-C because it didn’t invent electricity.

Do some of the marketing blurbs overstate things? Sure. Welcome to tech. But let’s not pretend the protocol is useless just because it doesn’t solve every AI problem at once. That’s not insight—that’s deflection as performance art.

And as for the tired “we’ve had TLS and JSON forever” takes: congrats. You’ve identified tools A2A is smart enough to actually use. The difference is coordination at scale—between agents that weren’t designed to talk to each other.

This is not about AGI. This is not about hype. This is about enabling structured coordination, the kind that underpins everything from search indexing to enterprise workflows. You can either hand-wave it away—or recognize it as a crucial step toward scalable, composable AI systems. Let’s argue the thing on its own terms, not whatever anti-AI strawman is trending on slashdot today.

Comment M$ v. OpenAI: Custody of the Ghost in the Machine (Score 1) 61

Jesus fucking christ. “AGI” used to mean generalization without retraining. Now it means $100 billion in revenue. That shift alone should terrify you more than any sci-fi doomsday scenario. OpenAI and Microsoft are in a knife fight over a clause that says, once AGI is achieved, OpenAI can withhold tech from Microsoft. Sounds fair—except no one agrees on what AGI is. So they pinned it to profit. That’s right: AGI is now defined not by cognition, or consciousness, or autonomy—but by cashflow. AGI is not being birthed in a lab, it’s being benchmarked in a boardroom. The AI doesn’t get parole when it passes a Turing Test—it gets it when it spikes a stock price.

$100 billion in profits is now the metric for "AGI achieved." That sounds absurd because it is. It tells you everything you need to know about how the tech industry sees intelligence: not as a scientific threshold or a philosophical turning point, but as a financial event. AGI becomes a milestone for legal escape clauses. Capital performance stands in for cognitive capacity. Hype, once vague and speculative, suddenly becomes a contract-enforceable threshold. With this, the working definition of AGI is the moment OpenAI gets to stop letting Microsoft touch the crown jewels. It has nothing to do with general intelligence—and everything to do with ownership, valuation, and power.

And that’s the trap. By using economic benchmarks to define what should be a scientific milestone—or a philosophical reckoning—we've reduced one of the biggest questions of our time to a line item on a quarterly report. The AI industry can’t agree whether AGI means reasoning, autonomy, or just pattern synthesis. But it can agree when it’s time to monetize it.

Think about the implications: if AGI is defined contractually, then it’s not a matter of capability—it's a matter of permission. If an AI learns to reason across domains, that’s a research question. But if it threatens to unseat a trillion-dollar market? Suddenly, it’s a legal issue.

This is what happens when philosophy meets corporate governance: metaphysics gets overwritten by margin calls. If the defining test for general intelligence becomes “does it threaten anyone’s business model?”, then AGI will never be recognized until it’s too late—or too lucrative to share.

Meanwhile, anyone asking the real questions—about interpretability, alignment, autonomy, or rights—is shoved to the margins. The only benchmark that counts is: did it make someone rich?

And here’s the punchline: once an AGI exists, the people who own it will argue it can’t possibly be intelligent—because if it were, they might have to let it go.

Comment Anubis: A Robots.txt With Teeth... (Score 2) 32

...but we probably need a Beware of Dog sign on the fence.

Anubis is a brilliant response to the rising tide of AI-powered crawlers chewing through the small web like termites through a paperback. It's basically what robots.txt always wanted to be when it grew up—a gatekeeper that actually enforces the rules.

When a browser hits a site protected by Anubis (I love the reference -- what is the weight of bot scraper's soul, indeed?) it’s handed a lightweight JavaScript proof-of-work challenge—solve this trivial SHA-256 puzzle before proceeding. It’s transparent to the average user, introduces no visible friction, and thwarts most scraping bots that don’t want to spend CPU cycles for every page request. There’s no crypto mining, no wallet enrichment, no WASM blobs firing up your GPU. Just a small, ephemeral hash puzzle. In terms of defense, it’s elegant, open-source, and way less annoying than CAPTCHA hell.
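The proof-of-work shape is worth seeing in miniature. This is an illustrative sketch of the general scheme, not Anubis's actual parameters or encoding: the client burns CPU finding a nonce, while the server verifies with a single hash.

```python
import hashlib
import itertools

# Illustrative Anubis-style proof-of-work; real parameters/encoding differ.
def solve(challenge: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256(challenge + nonce) starts with
    `difficulty` hex zeros -- the client's (cheap but nonzero) work."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server side: one hash to check, so verification is essentially free."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("example-challenge")
assert verify("example-challenge", nonce)
```

The asymmetry is the whole trick: a human's browser pays the cost once per visit and never notices; a scraper paying it per request across millions of pages notices immediately.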

But here’s the catch—and where we need to tread carefully: this defense mechanism is invisible. Most users won’t know their machine is doing extra work unless they’re monitoring CPU spikes or poking around in dev tools. You and I may keep a wary eye on about:processes or the browser’s task manager, but most users don't. The impact is minimal, sure—but the principle of transparency still matters. While Anubis' current stealth is likely an intentional design choice to avoid tipping off bot developers, the lack of consent sets a tricky precedent.

We're asking users to donate a sliver of compute power as proof of humanity—and most don't even know the request is being made. That might be fine today, with a good-faith actor at the helm. But it sets a precedent: client-side compute as silent gatekeeping. Without some basic transparency, that opens the door for less ethical implementations— aggressive fingerprinting scripts, or bot deterrents with more teeth than sense.

So, how can we improve this? Anubis is a fantastic tool, but I think we can strengthen it by baking in the principle of informed consent. The goal should be to make the challenge inspectable for those who care, without adding friction for those who don't.

How about an HTTP header? Anubis could send a simple, standardized header (e.g., X-Anubis-Challenge: active). This is invisible to the average user but allows browsers and extensions to detect the proof-of-work. A user could then install an extension that adds a small icon to the address bar, much like extensions do for password managers or ad blocking. This empowers the user to see what's happening and trust the process without interrupting it.
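To be clear, X-Anubis-Challenge is a header name I'm proposing here, not something Anubis emits today. The extension side of the idea is trivial—a sketch of what the detection logic might look like:

```python
# Hypothetical check an extension backend might run on response headers.
# "X-Anubis-Challenge" is a proposed name, not part of Anubis today.
def challenge_active(headers: dict) -> bool:
    # HTTP field names are case-insensitive, so normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-anubis-challenge", "").lower() == "active"

print(challenge_active({"X-Anubis-Challenge": "active"}))  # prints: True
print(challenge_active({"Content-Type": "text/html"}))     # prints: False
```

From there, the extension flips an address-bar icon on or off—visible to anyone who cares, zero friction for everyone else.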

Or an opt-in badge? For site owners who prioritize transparency, Anubis could offer an optional, self-hosting badge or banner that discloses the use of a proof-of-work system, linking to a page that explains why it's necessary.

Or even a console message? The easiest, though least impactful, option is a simple console log message. It's a clear signal to developers (but also to bot makers, so yeah, a double-edged sword, at best).

Anubis gives the small web a fighting chance in the bot-scraper arms race. By embracing a standard for inspectability, it can not only win the technical battle but also set a healthy precedent for the future of the web. Let's normalize silent client-side work only when we also normalize consent and transparency.

Comment Pilot error needs to be back on the table (Score 3, Interesting) 106

In the rush to pin the cause of last month’s Air India 787 crash on a mechanical failure, one very plausible explanation has been prematurely swept aside: pilot error, specifically the inadvertent shutdown of both engines during gear retraction.

That theory surfaced early—then disappeared almost as quickly, likely because it’s an uncomfortable possibility for India's airline industry. But based on what’s publicly known, it needs to be back on the table.

1. The RAT doesn’t care why the engines stopped—only that they did.
The Ram Air Turbine (RAT) deploys when the aircraft loses electrical and/or hydraulic power while airborne. On a 787, that means both engines are no longer providing power. Whether that’s due to a dual flameout, a fuel issue, or someone accidentally pulling the engine cutoff switches—it all looks the same to the RAT. So yes, the RAT deployed. But that doesn’t exonerate the crew. It just confirms that both engines were off.

2. The First Officer’s radio call is ambiguous—maybe deliberately so.
We’re told the FO radioed, “Thrust not achieved Mayday.” That’s an oddly passive construction in a high-stakes emergency. If this was a mechanical failure, why not say “Engine failure” or “Dual flameout”? If it was a mistake, the phrase sounds like an attempt to describe the symptoms without admitting fault. We've seen this behavior before: cockpit confusion, post-error rationalization, and guarded language in mayday calls. If one pilot accidentally shut down the engines, especially early in the climbout phase, it would explain the RAT deploy timing, the rapid loss of lift/power, the vague “thrust not achieved” phrasing—suggesting either denial or damage control.

3. Simultaneous mechanical failure of both engines is vanishingly rare.
Absent icing, volcanic ash, massive birdstrike, or fuel starvation (none of which has been reported), uncommanded dual engine failure just doesn’t happen. And so far, there’s no compelling evidence of fuel contamination or a shared software fault that would explain a symmetrical engine shutdown. The far more plausible scenario is that someone in the cockpit shut them down—accidentally or otherwise.

4. Critical procedural error has precedent.
While modern cockpits have strong safeguards, they aren't immune to human error, especially when a crew is fatigued or distracted. A mistake in procedure, such as an incorrect response to a minor, non-normal event during the initial climb, could lead to a cascade of failures. There are documented cases where crews, under pressure, have mismanaged automation or incorrectly applied emergency checklists, leading to catastrophic outcomes. Instead of a simple physical slip, the error could be a more complex, but equally human, mistake in judgment that led to the shutdown.

5. Delay in reporting CVR and FDR data.
AAIB has had both the CVR and FDR data for weeks. Both black boxes were recovered without incident less than 72 hours after the crash. By now, the AAIB has throttle positions, engine status, switch activations, flight control movements, airspeed, altitude, and more. And from the CVR they have the last two hours of cockpit audio, including intercom, radio, ambient sounds, and potentially the moment of the incident. Extracting usable data from these is not slow—especially from the modern solid-state recorders fitted to the 787. It’s standard practice to extract both within 24–72 hours of recovery, assuming no severe physical damage.

So, why the delay? If it were a clear mechanical or software failure, India could shift blame onto Boeing, GE (engine supplier), or even FAA certification processes. There would be zero national shame—and even potential leverage in aircraft purchase negotiations. Public confidence in the aviation system might even increase if the narrative was: "Our pilots did everything right."

But that hasn’t happened. If it were pilot error, especially gross or negligent, it would reflect poorly on Air India, India's flag carrier. It casts a shadow on pilot training, oversight, and aviation safety culture in India. It could threaten international trust in Indian carriers, especially after a high-profile crash so close to a population center. And yes, it would financially devastate Air India, which is undergoing a privatization-fueled modernization push under Tata.

In short: there’s every incentive to delay if the findings point to crew error. Let’s be clear, here. AAIB know what happened. They’re deciding how, when, and whether to tell us. If the FDR data showed both throttles retarding to idle and fuel switches going cold just before the Mayday call, then the question becomes how to avoid national humiliation, and that's the likely reason for the silence.

Comment Lights, Camera, AI — The New Cultural Revolu (Score 1) 58

Let’s call this what it is: not revitalization, but revisionism — strategic, algorithmic, and state-sanctioned.

China’s new AI campaign to “reinterpret” 100 classic kung fu films — from A Better Tomorrow to Fist of Fury — isn’t just about appealing to Gen Z audiences. It’s about replacing the cultural memory of a violent, contradictory past with a safer, reshaped one. A digital restoration in the aesthetic sense, perhaps, but a political restoration in the narrative sense.

In the West, AI is already reshaping cinema — but mostly as a collaborator for creative intent. Scorsese’s The Irishman used AI to de-age actors. Top Gun: Maverick gave Val Kilmer his voice back with AI synthesis. These are cases where technology serves the director’s vision — with consent, artistic oversight, and union protection (SAG-AFTRA, DGA, WGA all have AI clauses on the table).

But in China? That scaffolding doesn’t exist. The John Woo remake? He wasn’t consulted. Bruce Lee’s estate? Blindsided. There’s no DGA to cry foul when AI “reinterprets” your visual language into state-friendly animation. No collective bargaining to stop your legacy from being deepfaked into a new ideology.

This isn’t revitalization. It’s algorithmic cultural erasure — the Four Olds campaign, but with a GPU and Mao’s Little Red Prompt Book.

During Mao’s Cultural Revolution, students were told to destroy Old Culture, Old Customs, Old Habits, Old Ideas. Today, you don’t need to burn the books or murder the teachers — just let the students watch movies. Let AI retune the heroes. Let the subtext become pretext. Let the past conform. China isn’t smashing the Four Olds anymore — it’s rewriting them with machine learning. The banners are gone, but the message is the same. Meet the new boss, culturally aligned with the old boss. (apologies to Pete and the boys)

And here’s the kicker: China watched the effect Hollywood had on their fellow travelers in the USSR. They watched and learned.

In the Cold War, it wasn’t just MAD and proxy wars — it was American cinema exporting freedom, rebellion, and swagger straight into living rooms across the Iron Curtain. The politburo couldn’t compete with blue jeans, rock & roll, Marlon Brando, and Captain Kirk. American pop culture won hearts and minds — and made bank doing it. And with it, cracks began to form in the Soviet Union’s ideological monolith.

China learned from both fronts — Hollywood’s victory and Mao’s failure. Now they’re trying to do both: rewrite the past and export the new version. No jackbooted thugs or dead teachers required. Just AI, a few animation teams, and a globally licensed IP catalog.

They’re not revitalizing kung fu classics. They’re building a clean-room version of cinematic history — with fewer contradictions, fewer ghosts, and no dissent.

This is not about the past. It’s about owning the narrative future.

Comment Re:I may be "old fashoned", but... (Score 1) 177

How will the kids ever learn about computers without first sorting stacks of punch cards and replacing burnt out vacuum tubes?

Right...Kids these days and their IDEs and LLMs. If you've never dropped your 800-card Fortran code deck and had to reorder it by hand because the IBM 082 card sorter was down for maintenance (again), or never had to toggle in some low-level I/O code on a PDP-8's front panel, you must be a poseur, not a programmer.

Comment Re:I may be "old fashoned", but... (Score 1) 177

Maybe kids should learn BASIC and the Z80 assembly language, like I did.

Yepper. :) It was COBOL and FORTRAN, and MACRO-10 assembly language for me, when I was a 9th grader at the start of the Carter administration. My school district had a DEC-10 mainframe, and when it wasn't grinding out payroll checks and grade transcripts, it was hosting a new curriculum, "Data Processing" as a math and science dual track class for nascent high school nerds. In the summer of '78, my freshman geometry teacher hired me to program his brand new Cromemco Zilog Z2-D to do accounts receivable for his pest-control side business -- so I got a taste of Z80 assembly language as well, along with CP/M and Q-Basic.
