Comment Re:Accreditation Will Soon Matter (Score 1) 99

Due to changes like this, I foresee universities more loudly advertising that their CS programs are accredited, because I'm pretty damn sure that using GPT to create a program will not be worthy of a CS degree in most people's eyes.

Hmmm...I am going to assume you are really talking about software engineering, and not computer science. They are related, but the article is about changes in the software engineering curricula at UW, and not so much the CS side of the house. Here's a direct quote from the article:

“We have never graduated coders. We have always graduated software engineers.”

With that said, I actually have a CS degree from the University of Arizona, but I spent thirty-odd years as a sysadmin, riding herd on software engineers whose default position was to reject anything that moved them out of their comfort zone. You're right that accreditation will matter more than ever, but accreditation bodies don't exist to preserve the past; they exist to ensure that graduates are prepared for the professional demands of the present and future, and UW's direction is clear. Here's another quote from the article:

"Coding, or the translation of a precise design into software instructions, is dead. AI can do that."

So let's be very clear about this -- academia doesn't create coders. It creates software engineers -- people who can use code to solve complex problems. LLMs shoulder some of the burden. Not all of it, but enough that the future holds exactly two paths for software engineers -- the path where people leverage LLMs, and the one where they don't. Guess which path defines a successful career in software engineering.

UW's goal is not to teach students how to prompt GPT to spit out a finished program. The goal is to focus on the actual work of software engineering: the "creative and conceptually challenging work" of figuring out precisely what the computer needs to do.

Think of the evolution of software engineering tools.

In the 1960s and 1970s, a "real" programmer might have said that anyone using a FORTRAN or COBOL compiler instead of writing assembly code wasn't doing "real" programming. In the 1990s, a "real" programmer might have said that anyone using an IDE with syntax highlighting and code completion instead of vi and make was taking a shortcut. Today, you're suggesting that using an AI assistant to handle boilerplate code, debug a tricky API call, or translate a Python algorithm into Rust is somehow not worthy.

In every era, the tool—whether compiler, text editor, or IDE—abstracted away tedium and repetition to free the engineer to engage at a higher level of complexity. GPT and other LLMs aren’t cheat codes; they’re the next rung on that ladder. They are the compiler's compiler. LLMs aren't replacing thought; they’re upgrading the thinker. Was Michelangelo less of an artist because he used a scaffold to reach the ceiling of the Sistine Chapel—instead of a brush with a really long handle?

So, let's talk about accreditation. In a few years, which program do you think ABET will accredit?
        1. The one that ignores industry-standard tools and produces graduates who are experts in solving problems that no longer exist?
        2. The one that teaches students how to leverage AI assistants to build more complex, robust, and innovative systems faster than before, while ensuring they have the deep fundamental knowledge to know when the AI is wrong?

Institutions that don't teach their students how to collaborate with AI will be the ones that lose their credibility. They'll be the new ITT Tech or University of Phoenix -- diploma mills churning out graduates unprepared for the modern workplace. The accredited, top-tier universities will be the ones, like UW, that see LLMs and AI in general for what they are -- new tools -- and prepare their students to embrace them.

Comment UW groks it -- collaboration, not competition (Score 1) 99

UW announced a major revamp of its computer science curriculum to embrace large language models (LLMs) as collaborative tools. Not just for ethics discussions or one-off assignments—LLMs are being structurally integrated into how students learn to code, reason, and debug. In short: the assumption going forward is you won’t be competing with LLMs—you’ll be building with them.

Fwiw, UW was in my top three when I was looking for a college after I left the USAF in 1989. For anyone who did their undergrad work during the early 90s, this feels like a homecoming. UW was a beacon in the pre-web academic world. Their FTP servers were pilgrimage sites, along with WUSTL and CWRU. If there was something you needed, it was to be found in that academic FTP triangle. And yes, this is the same UW that gave us PINE, and later IMAP.

Back then, tools like PINE weren’t flashy. They were designed to assist, not replace—to be partners in how you worked and thought. It’s fitting that UW, which once helped define what computer-assisted work could feel like in the age of VT100 terminals, is now helping redefine what it looks like in the era of transformer models and semantic autocompletion.

When it comes to LLMs, I've been pushing the “collaborators, not competitors” message from day one. I'm glad to see that UW groks it this way, too. And if the next generation learns to treat LLMs the way we once treated PINE—customizable, helpful, and occasionally insightful (if your sysadmin implemented the .sig and MOTD hooks with random quotes from the Jargon File)—then we might actually end up OK.

Comment Re:Knowing Isn’t the Hard Part Anymore (Score 1) 42

I hear you—and I don’t think we’re actually in disagreement, though I might frame it differently. You're right to flag the tension—I probably should have said apparent paradox, or better yet, frustrating duality. That’s on me, and thanks for calling it out.

When I said we have the "capital, brainpower, and legislative frameworks," I wasn’t suggesting we have a turnkey solution to the nanoplastics problem sitting in a lab somewhere. I meant that we’re no longer operating in the dark. We have the diagnostics, the modeling capacity, the regulatory and economic levers—but we haven’t mobilized them at scale, for any number of reasons.

And yes, scaling solutions has real costs. I absolutely agree: ripping plastics out of every supply chain overnight would trigger cascading harm, especially for the most vulnerable. But that’s not the only option. The “incapacity” I’m talking about isn’t just technical—it’s political, cultural, ethical. It’s our inability to even start meaningful transitions without waiting for a catastrophe to force our hand.

So not a contradiction—just a frustration at how narrow our action window seems to remain, even as our knowledge expands.

Comment Re:What an Age to Live Through (Score 1) 42

Actually I would say we have more than enough resources to deal with these issues if we wanted to but right now there is no political will to address this on the level that would be required.

That’s hard to argue with. The gap between capacity and will is one of the defining tensions of our time. What’s particularly maddening is that this isn't some moonshot—the technologies, models, and even regulatory templates exist. What’s missing is the structural alignment to prioritize them.

We have the money, we have the people, we have the know-how to study and create action plans, we just don't want to do it and our voting reflects that.

This one I hesitate on. It’s true that voting trends matter, but reducing the failure to act to just voter apathy or preference risks overlooking the asymmetry in how influence operates. Gerrymandering, dark money, lobbying, and procedural gridlock all distort the link between public will and legislative outcome. “We just don’t want to” feels too blunt for a system this engineered.

We found $200B for immigration enforcement, no problem there, jumped in both feet first, this is the issue the American public thinks is #1.

I get your point: when something aligns with MAGA's political narrative, money appears. But again, the mechanism isn’t just public opinion—it’s narrative salience weaponized by media ecosystems and electoral incentives. Environmental policy rarely gets that kind of narrative heat, even though the stakes are existential.

For example, why are there only 11 co-sponsors on this bill to reduce the amount of single-use items and all from one party?

Exactly. That number—11—isn’t a failure of political will; it’s more of a structural issue. It points to a deeper truth: the mechanisms that should translate public concern into action are systematically misaligned. Political representation in this country is skewed heavily toward a GOP that does not represent a national majority. Not by a long shot. The Senate, for example, gives Wyoming (580,000 people) the same legislative power as California (39 million). That is a simple fact of the US system. The Electoral College inflates the influence of rural states, allowing candidates to win the presidency while losing the popular vote—as happened in 2000 and 2016. And as a result of this structural situation, five of the nine Supreme Court justices represent the preferences of presidents who lost the popular vote.

So...when legislation stalls, it’s not for lack of evidence or even popular support. It’s because the architecture of governance in the U.S. is built to resist change—any kind of change—and one party, the GOP (especially in its current MAGA incarnation), ruthlessly exploits that. So yes—even with cultural momentum, an issue that is not MAGA-aligned is going to go nowhere, legislatively. That’s not apathy. That's playing by the rules while the other team works the refs.

Comment Re:What an Age to Live Through (Score 1) 42

We're living in an age where we've advanced scientifically enough to see and study the damage we're doing, but we haven't evolved emotionally and mentally enough to escape the trap of the greed that is making us ignore the problems we're creating because the solutions may impact profits. It's a weird time to be a human. All the guilt of our entire species is coming to the fore, but we have none of the resources to deal with it in a healthy manner.

You’re not wrong—it is a weird time to be human. Speaking as an American, weird seems to be our new normal. We’ve reached a point where our tools have outpaced our maturity, and we’re now seeing the damage in high resolution—scientifically, ecologically, even psychologically. But I try not to let the sheer scale of it turn into fatalism. We may not be emotionally equipped yet, but culture does evolve. Sometimes slowly, sometimes all at once. The fact that we can even name the trap—and have threads like this unpacking it—feels like the start of something, not just the end.

Comment Knowing Isn’t the Hard Part Anymore (Score 2) 42

The Nature paper is devastating—not in tone, but in implication. What the authors have done is akin to lifting a trapdoor we didn’t know was there: beneath the waves, beneath prior sampling thresholds, beneath our assumptions about the scale of the problem—lies a vast reservoir of nanoplastics blanketing the Atlantic, from coastal shelves to abyssal depths.

These are not stray particles. These are quantifiable layers of polyethylene terephthalate (PET), polystyrene (PS), and PVC—1.5 to 32 mg/m³ across every depth measured, totaling tens of millions of metric tons in the mixed layer alone (a rough sanity check of that total follows the list below). This implies:
- Nanoplastics now likely exceed the total mass of all macro- and microplastic debris previously measured in the global ocean.
- Our oceanic plastic budget has been catastrophically underestimated.
- The small particle sizes bypass buoyancy constraints, drift with water columns, and may bioaccumulate at every trophic level.
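
For a sense of scale, here's a quick back-of-envelope check. The concentration, mixed-layer depth, and surface area below are round numbers I'm assuming for the sketch -- they are not the paper's figures -- but they show how a low single-digit mg/m³ average over the mixed layer lands squarely in the "tens of millions of metric tons" range.

    # Back-of-envelope check with ILLUSTRATIVE numbers (not the paper's own figures).
    concentration_mg_per_m3 = 5.0      # assumed average, within the reported 1.5-32 mg/m3 span
    mixed_layer_depth_m = 100.0        # assumed typical mixed-layer depth
    north_atlantic_area_m2 = 4.0e13    # rough order-of-magnitude surface area

    volume_m3 = mixed_layer_depth_m * north_atlantic_area_m2
    mass_mg = concentration_mg_per_m3 * volume_m3
    mass_metric_tons = mass_mg / 1e9   # 1 metric ton = 1e9 mg

    print(f"~{mass_metric_tons / 1e6:.0f} million metric tons")   # ~20 million metric tons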

So what’s the appropriate response to such a finding?

On one hand, there’s the dawning realization that we now know exactly what we’re doing to the planet. We have the tools to measure it, model it, and even predict its long-term consequences. On the other hand, there’s an equally sharp recognition that we’re doing almost nothing in proportion to that knowledge. We have the capital, the brainpower, the legislative frameworks—we simply choose not to use them.

It’s a bitter paradox: scientific maturity without political adulthood. Knowledge without agency.

This doesn’t mean everyone is paralyzed or indifferent. The fact that papers like this are being published at all is a sign of resilience. People still read, still argue, still push for legislation that confronts our economic contradictions.

But I think we’re in new territory now. The core environmental narrative of the 20th century—"If only we had the data!"—has been flipped. We do have the data. What we lack is the civic substrate to metabolize it. Call it political will, or the people's mandate, or whatever socio-cultural tag you want to wrap it in, the question remains: What happens when evidence is no longer the bottleneck? That’s the real weight I felt reading this paper. It’s not just a measurement of pollution. It’s a measurement of our incapacity to deal with it at scale.

And yet: we’re still talking. Still learning. That might be a low bar—but it’s not nothing.

Comment Re:Fan as CPU spike monitor (Score 1) 33

Your post is exactly the kind of slashvertisement that doesn't deserve reading. It’s a thread hijack—pure and simple—to run up the install counter on a half-baked browser extension (and yes, I checked the GitHub page: it’s crap).

If you had something meaningful to say about Anubis, protocol-level consent, or invisible compute boundaries, you could’ve engaged with any of that. Instead, you offered a sales pitch wrapped in a concern-trolling sandwich. GFY.

Comment Re:Copper tariffs (Re:It's all right) (Score 5, Insightful) 108

Your questions aren’t serious. They read like the kind of softballs lobbed by an ONN intern at a White House press briefing—preloaded to let Trump justify another half-baked tariff with a grin and a grunt. It’s less inquiry, more performance art.

Aren't long haul wires for electrical infrastructure made of steel reinforced aluminum?

Yes, they are—and congratulations on skimming the first paragraph of a Wikipedia article. But unless you’re stringing a single high-voltage line from Hoover Dam to your cousin’s Bitcoin farm, you’re missing 90% of the build. Grid expansion isn't just about transmission—it’s also about substations, transformers, switchgear, and distribution lines, all of which are copper-intensive. Pretending “long haul wires = infrastructure” is like saying a road is just Botts dots.

Why bring up copper tariffs?

Because this isn’t amateur hour. Every part of modern power expansion—especially those supporting hyperscale data centers—relies on copper. Tariffs drive up costs for the entire electrical ecosystem except the one narrow slice you cherry-picked. It's almost impressive how confidently wrong this question is.

Comment Re:Another step away from UN*X (Score 1) 38

Yet another step of Linux moving further away from UN*X. Originally UN*X was supposed to process plain text

Here we go again: the "UNIX purity spiral meets corporate paranoia" routine. Anything newer than cat | grep | awk is framed as apostasy. But let’s be clear—JSON is plain text. It’s structured, readable, and greppable. Just because it has curly braces and isn’t whitespace-delimited doesn’t mean it violates the UNIX philosophy. If anything, it enables composability at modern scale.
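
To make that concrete, here's a toy filter in Python -- field names invented for the example, nothing A2A-specific -- showing structure-aware grep: plain text in, plain text out.

    # Toy example: JSON is plain text, so it drops straight into the filter-and-pipe
    # style of working. Reads newline-delimited JSON on stdin, emits plain text.
    import json
    import sys

    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if record.get("level") == "error":   # the "grep" step, but structure-aware
            print(record["message"])         # plain text out, ready for the next tool

Pipe anything that emits newline-delimited JSON into it and it behaves like any other UNIX filter.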

now we have yet another stupid standard.

That’s not critique, that’s tantrum. A2A solves a real problem: how autonomous agents—across orgs and tech stacks—securely talk, delegate, and coordinate. Calling it “stupid” because you don’t like the names on the contributor list isn’t analysis. It’s emotional filtering.

Since the Linux Foundation is owned by Microsoft, IBM, Google, Oracle and other Fortune 500 companies

Nope. This tired trope ignores history. The Linux Foundation is funded, not “owned,” by its contributors—just like most of the standards you rely on every day. TCP/IP came from DARPA—the epitome of the military-industrial complex. POSIX was shaped by a cabal of government agencies and corporate giants like AT&T, DEC, and IBM. If you think A2A is uniquely tainted by corporate influence, you’ve either forgotten where your tools came from—or you’re rewriting history to score rhetorical points.

looks like this is being pushed by corporations.

Of course it is. Because scale demands cooperation. Agents running in real-world systems need to coordinate across vendors and clouds. That’s not corporate overreach—that’s operational necessity. Standards are what stop everyone from reinventing a dozen different versions of the wheel.

How about forcing Nvidia to open up their GPU, that is what the real Linux Users want more than anything else.

Then post that—in a relevant thread. A2A isn’t about GPU drivers. Throwing in a “what about Nvidia?” grenade is just derailment theater. It doesn’t make you principled; it makes you unfocused.

Even Linus at one time called out Nvidia on this.

True. But Linus also understands context. This thread is about agent interoperability, not proprietary firmware. If you want to advocate for open GPU stacks, do it properly—not by hijacking an unrelated technical discussion.

You clearly have opinions. But this kind of reactionary sprayfire—where a new proposal is framed as a betrayal of UNIX, a sellout to corporations, and a distraction from your personal wishlist—doesn’t help the conversation. It drowns the signal, misdirects the focus, and just derails the thread.

Comment Re:Really? (Score 1) 38

Moreover, the choice of JSON is stupid. It should be XML.

Honestly? Not a bad take in isolation. XML has stricter schema enforcement, better namespacing, and more mature tooling for validation and contract-first design. If you’re old enough to remember SOAP, WSDL, and the joy of a well-typed XSD, you probably get the appeal.

But A2A isn’t designed for humans writing XML by hand or for enterprise contract rigidity. It’s aiming for interoperability at speed across modern web stacks. JSON wins here, not because it’s better engineered—it isn’t—but because it's ubiquitous, lightweight, and already what most agents and microservices use under the hood. JSON makes it deployable this quarter. That’s the tradeoff.
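
For illustration only, here's the general flavor of payload we're talking about. The field names are invented for this sketch; they are not the actual A2A schema.

    # A made-up example of a minimal agent-to-agent task handoff as JSON.
    # Field names are hypothetical, not taken from the A2A spec.
    import json

    task = {
        "task_id": "weather-lookup-42",
        "from_agent": "planner.example.com",
        "to_agent": "weather.example.net",
        "input": {"city": "Seattle", "units": "metric"},
        "auth": {"scheme": "bearer", "token": "<redacted>"},
    }

    print(json.dumps(task))   # one compact line on the wire, parseable by practically anything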

Comment Re:Really? (Score 1) 38

This comment is a textbook case of the kind of smug, faux-insightful derailment that gets +5’d not because it’s technically strong—but because it flatters the priors of Slashdot’s anti-AI crowd -- early and mid-career code monkeys who know they are going to be replaced by an LLM in the near future. You cosplay at engaging with the technical substance of A2A, but then go full troll: substitute a different problem (LLM alignment, adversarial robustness) and then dismiss A2A for not solving it. This isn't analysis; it's rhetorical bait-and-switch—designed to derail the discussion and farm upvotes from those eager to conflate every AI infrastructure advance with AGI overreach. It's fucking tiresome. Go do it elsewhere.

I'm glad to hear that one of AI's "most pressing challenges" is concluding that you should use TLS on the wire and having a standardized JSON object in which to declare your proprietary extensions;

That’s not the challenge. That’s the solution to a challenge that’s been strangling distributed AI adoption: lack of a neutral, secure protocol for agents to handshake and collaborate across vendors. If TLS and JSON look trivial to you, you’ve either never had to wrangle OAuth hell between microservices at scale or you’re pretending those choices don’t become existential when multiple autonomous systems have to exchange authority, identity, and context.

...rather than the ongoing inability to make LLMs distinguish between commands and data even vaguely reliably; or the persistent weakness to adversarial inputs.

That’s like complaining that TCP doesn’t prevent SQL injection—technically true, completely irrelevant. A2A is plumbing. You can’t fix the faucet until the pipes connect. A2A isn’t in the cognition stack. Why are you even bringing this up? Oh right, you're a troll. Those are legit issues that are actively being worked on -- and being discussed elsewhere. If you think you have something to contribute, then join us there. I doubt it though. Trolls like you have nothing to contribute, ever. All you can offer is distraction.

It's not wrong that you'd want to use the sensible obvious choices and avoid pointless vendor quirks; but talking about 'A2A' as a contribution to solving agentic AI's most pressing challenges seems about as hyperbolic as describing ELF or PE32+ as being notable contributions to software security and quality.

Typical troll bad analogy. You don’t ship software by writing binaries with a hex editor. You need format standards—including PE and ELF—so you can link, deploy, and execute code reliably across systems. A2A does the same for agents: it provides the missing contract layer so distributed AI agents aren’t trapped in their origin silos. That’s not hyperbole. That’s operational necessity.

Yeah, it would be worse if we were also squabbling over how to format our executables; but oh boy is that the unbelievably trivial bit by comparison.

If it's so trivial, why has it taken until 2025 for the Linux Foundation, Google, AWS, Microsoft, and Cisco to rally behind a shared protocol? The answer: because everyone tried to duct-tape this “trivial bit” for years with brittle, proprietary glue, and it broke every time people tried to scale. The trivial parts only feel trivial in hindsight—after someone bothers to standardize them.

The play here isn’t to inflate A2A into AGI hype. It’s to acknowledge that if agent-based AI is going to scale beyond toy demos and fragile demo-bot stacks, it needs boring, robust pipes like A2A. That’s what this is about, and sneering at the plumbing reveals you for what you are -- just another troll trying to derail a conversation.

Comment A2A -- the API layer we should have had years ago (Score 1) 38

Cue the usual chorus of doom-sayers and trollish derailers.

Whenever a pragmatic, infrastructure-focused advance in AI gets announced—especially one involving standards—there’s a depressingly reliable pattern on Slashdot. Someone will pop up to conflate it with AGI hype, minimize its relevance, and then pivot to bashing LLMs with a few tired lines about adversarial prompts and hallucinations. Bonus points if they score a +5 Insightful from lurkers who never read past the headline. (I know, I know...this is slashdot.)

A2A is not about fixing hallucination or cognition. It’s about enabling existing agents—however dumb or smart—to communicate, collaborate, and delegate tasks in a secure, vendor-neutral way. It solves the interoperability mess that has long plagued multi-agent systems. You know, the part that isn’t sexy but actually matters in production.

It's not some grand leap in AI intelligence. It’s boring on purpose. Like HTTP. Like JSON-RPC. Like every layer of tech that quietly makes things work.

But that nuance gets straw-manned into oblivion by detractors who pretend that unless a protocol cures hallucinations and passes the Turing Test, it’s irrelevant. That’s like dismissing the value of USB-C because it didn’t invent electricity.

Do some of the marketing blurbs overstate things? Sure. Welcome to tech. But let’s not pretend the protocol is useless just because it doesn’t solve every AI problem at once. That’s not insight—that’s deflection as performance art.

And as for the tired “we’ve had TLS and JSON forever” takes: congrats. You’ve identified tools A2A is smart enough to actually use. The difference is coordination at scale—between agents that weren’t designed to talk to each other.

This is not about AGI. This is not about hype. This is about enabling structured coordination, the kind that underpins everything from search indexing to enterprise workflows. You can either hand-wave it away—or recognize it as a crucial step toward scalable, composable AI systems. Let’s argue the thing on its own terms, not whatever anti-AI strawman is trending on slashdot today.

Comment M$ v. OpenAI: Custody of the Ghost in the Machine (Score 1) 61

Jesus fucking christ. “AGI” used to mean generalization without retraining. Now it means $100 billion in profits. That shift alone should terrify you more than any sci-fi doomsday scenario. OpenAI and Microsoft are in a knife fight over a clause that says, once AGI is achieved, OpenAI can withhold tech from Microsoft. Sounds fair—except no one agrees on what AGI is. So they pinned it to profit. That’s right: AGI is now defined not by cognition, or consciousness, or autonomy—but by cashflow. AGI is not being birthed in a lab, it’s being benchmarked in a boardroom. The AI doesn’t get parole when it passes a Turing Test—it gets it when it spikes a stock price.

$100 billion in profits is now the metric for "AGI achieved." That sounds absurd because it is. It tells you everything you need to know about how the tech industry sees intelligence: not as a scientific threshold or a philosophical turning point, but as a financial event. AGI becomes a milestone for legal escape clauses. Capital performance stands in for cognitive capacity. Hype, once vague and speculative, suddenly becomes a contract-enforceable threshold. With this, the working definition of AGI is the moment OpenAI gets to stop letting Microsoft touch the crown jewels. It has nothing to do with general intelligence—and everything to do with ownership, valuation, and power.

And that’s the trap. By using economic benchmarks to define what should be a scientific milestone—or a philosophical reckoning—we've reduced one of the biggest questions of our time to a line item on a quarterly report. The AI industry can’t agree whether AGI means reasoning, autonomy, or just pattern synthesis. But it can agree when it’s time to monetize it.

Think about the implications: if AGI is defined contractually, then it’s not a matter of capability—it's a matter of permission. If an AI learns to reason across domains, that’s a research question. But if it threatens to unseat a trillion-dollar market? Suddenly, it’s a legal issue.

This is what happens when philosophy meets corporate governance: metaphysics gets overwritten by margin calls. If the defining test for general intelligence becomes “does it threaten anyone’s business model?”, then AGI will never be recognized until it’s too late—or too lucrative to share.

Meanwhile, anyone asking the real questions—about interpretability, alignment, autonomy, or rights—is shoved to the margins. The only benchmark that counts is: did it make someone rich?

And here’s the punchline: once an AGI exists, the people who own it will argue it can’t possibly be intelligent—because if it were, they might have to let it go.

Comment Anubis: A Robots.txt With Teeth... (Score 2) 33

...but we probably need a Beware of Dog sign on the fence.

Anubis is a brilliant response to the rising tide of AI-powered crawlers chewing through the small web like termites through a paperback. It's basically what robots.txt always wanted to be when it grew up—a gatekeeper that actually enforces the rules.

When a browser hits a site protected by Anubis (I love the reference -- what is the weight of a bot scraper's soul, indeed?), it’s handed a lightweight JavaScript proof-of-work challenge—solve this trivial SHA-256 puzzle before proceeding. It’s transparent to the average user, introduces no visible friction, and thwarts most scraping bots that don’t want to spend CPU cycles for every page request. There’s no crypto mining, no wallet enrichment, no WASM blobs firing up your GPU. Just a small, ephemeral hash puzzle. In terms of defense, it’s elegant, open-source, and way less annoying than CAPTCHA hell.
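
To make the mechanism concrete, here's a minimal sketch of the general SHA-256 proof-of-work idea (in Python, for readability). This is not Anubis's actual code or challenge format -- just the shape of it: the client burns a little CPU hunting for a nonce, and the server verifies with a single hash.

    # Minimal proof-of-work sketch (NOT Anubis's actual implementation).
    import hashlib
    from itertools import count

    def solve(challenge: str, difficulty_bits: int = 16) -> int:
        """Client side: find a nonce so the hash starts with enough zero bits."""
        target = "0" * (difficulty_bits // 4)   # compare on the hex prefix for simplicity
        for nonce in count():
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce                    # sent back to the server with the request

    def verify(challenge: str, nonce: int, difficulty_bits: int = 16) -> bool:
        """Server side: one hash, essentially free."""
        target = "0" * (difficulty_bits // 4)
        return hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest().startswith(target)

    nonce = solve("example-challenge-token")
    print(nonce, verify("example-challenge-token", nonce))

Tune the difficulty and a single page load stays imperceptible, while a scraper fetching a hundred thousand pages pays for every one of them.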

But here’s the catch—and where we need to tread carefully: this defense mechanism is invisible. Most users won’t know their machine is doing extra work unless they’re monitoring CPU spikes or poking around in dev tools. You and I may keep a wary eye on about:processes or chrome://performance, but most users don't. The impact is minimal, sure—but the principle of transparency still matters. While Anubis' current stealth is likely an intentional design choice to avoid tipping off bot developers, the lack of consent sets a tricky precedent.

We're asking users to donate a sliver of compute power as proof of humanity—and most don't even know the request is being made. That might be fine today, with a good-faith actor at the helm. But it sets a precedent: client-side compute as silent gatekeeping. Without some basic transparency, that opens the door for less ethical implementations—aggressive fingerprinting scripts, or bot deterrents with more teeth than sense.

So, how can we improve this? Anubis is a fantastic tool, but I think we can strengthen it by baking in the principle of informed consent. The goal should be to make the challenge inspectable for those who care, without adding friction for those who don't.

How about an HTTP header? Anubis could send a simple, standardized header (e.g., X-Anubis-Challenge: active). This is invisible to the average user but allows browsers and extensions to detect the proof-of-work. A user could then install an extension that adds a small icon to the address bar, much like extensions do for password managers or ad blocking. This empowers the user to see what's happening and trust the process without interrupting it.
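
As a sketch of the detection side, something like this is all an extension (or a curious user) would need. Note that X-Anubis-Challenge is the hypothetical header proposed above, not an existing Anubis feature.

    # Probe a site for the (hypothetical) X-Anubis-Challenge header using requests.
    # A browser extension would do the same check on responses it already observes.
    import requests

    def advertises_challenge(url: str) -> bool:
        response = requests.get(url, timeout=10)
        return response.headers.get("X-Anubis-Challenge", "").lower() == "active"

    if advertises_challenge("https://example.org/"):
        print("Proof-of-work challenge advertised; show the address-bar icon.")
    else:
        print("No challenge header; nothing to surface to the user.")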

Or an opt-in badge? For site owners who prioritize transparency, Anubis could offer an optional, self-hosted badge or banner that discloses the use of a proof-of-work system, linking to a page that explains why it's necessary.

Or even a console message? The easiest, though least impactful, option is a simple console log message. It's a clear signal to developers (but also to bot makers, so yeah, a double-edged sword at best).

Anubis gives the small web a fighting chance in the bot-scraper arms race. By embracing a standard for inspectability, it can not only win the technical battle but also set a healthy precedent for the future of the web. Let's normalize silent client-side work only when we also normalize consent and transparency.

Comment Pilot error needs to be back on the table (Score 3, Interesting) 106

In the rush to pin the cause of last month’s Air India 787 crash on a mechanical failure, one very plausible explanation has been prematurely swept aside: pilot error, specifically the inadvertent shutdown of both engines during gear retraction.

That theory surfaced early—then disappeared almost as quickly, likely because it’s an uncomfortable possibility for India's airline industry. But based on what’s publicly known, it needs to be back on the table.

1. The RAT doesn’t care why the engines stopped—only that they did.
The Ram Air Turbine (RAT) deploys when the aircraft loses electrical and/or hydraulic power while airborne. On a 787, that means both engines are no longer providing power. Whether that’s due to a dual flameout, a fuel issue, or someone accidentally pulling the engine cutoff switches—it all looks the same to the RAT. So yes, the RAT deployed. But that doesn’t exonerate the crew. It just confirms that both engines were off.

2. The First Officer’s radio call is ambiguous—maybe deliberately so.
We’re told the FO radioed, “Thrust not achieved Mayday.” That’s an oddly passive construction in a high-stakes emergency. If this was a mechanical failure, why not say “Engine failure” or “Dual flameout”? If it was a mistake, the phrase sounds like an attempt to describe the symptoms without admitting fault. We've seen this behavior before: cockpit confusion, post-error rationalization, and guarded language in mayday calls. If one pilot accidentally shut down the engines, especially early in the climbout phase, it would explain the RAT deploy timing, the rapid loss of lift/power, and the vague “thrust not achieved” phrasing—suggesting either denial or damage control.

3. Simultaneous mechanical failure of both engines is vanishingly rare.
Absent icing, volcanic ash, massive birdstrike, or fuel starvation (none of which has been reported), uncommanded dual engine failure just doesn’t happen. And so far, there’s no compelling evidence of fuel contamination or a shared software fault that would explain a symmetrical engine shutdown. The far more plausible scenario is that someone in the cockpit shut them down—accidentally or otherwise.

4. Pilot error has precedent.
While modern cockpits have strong safeguards, they aren't immune to human error, especially when a crew is fatigued or distracted. A mistake in procedure, such as an incorrect response to a minor, non-normal event during the initial climb, could lead to a cascade of failures. There are documented cases where crews, under pressure, have mismanaged automation or incorrectly applied emergency checklists, leading to catastrophic outcomes. Instead of a simple physical slip, the error could be a more complex, but equally human, mistake in judgment that led to the shutdown.

5. Delay in reporting CVR and FDR data.
AAIB have had both the CVR and FDR data for weeks. Both black boxes were recovered without incident less than 72 hours after the crash. By now, the AAIB has throttle positions, engine status, switch activations, flight control movements, airspeed, altitude, and more. And from the CVR they have the last two hours of cockpit audio, including intercom, radio, ambient sounds, and potentially the moment of the incident. Extracting usable data from these is not slow—especially on modern units like the 787’s Honeywell SSFDR. It’s standard practice to extract both within 24–72 hours of recovery, assuming no severe physical damage.

So, why the delay? If it were a clear mechanical or software failure, India could shift blame onto Boeing, GE (engine supplier), or even FAA certification processes. There would be zero national shame—and even potential leverage in aircraft purchase negotiations. Public confidence in the aviation system might even increase if the narrative was: "Our pilots did everything right."

But that hasn’t happened. If it were pilot error, especially gross or negligent, it would reflect poorly on Air India, India's flag carrier. It casts a shadow on pilot training, oversight, and aviation safety culture in India. It could threaten international trust in Indian carriers, especially after a high-profile crash so close to a population center. And yes, it would financially devastate Air India, which is undergoing a privatization-fueled modernization push under Tata.

In short: there’s every incentive to delay if the findings point to crew error. Let’s be clear, here. AAIB know what happened. They’re deciding how, when, and whether to tell us. If the FDR data showed both throttles retarding to idle and fuel switches going cold just before the Mayday call, then the question becomes how to avoid national humiliation, and that's the likely reason for the silence.
