
Comment Re:Another step away from UN*X (Score 1) 38

Yet another step of Linux moving further away from UN*X. Originally UN*X was suppose to process plain text

Here we go again: the "UNIX purity spiral meets corporate paranoia" routine. Anything newer than cat | grep | awk is framed as apostasy. But let’s be clear—JSON is plain text. It’s structured, readable, and greppable. Just because it has curly braces and isn’t whitespace-delimited doesn’t mean it violates the UNIX philosophy. If anything, it enables composability at modern scale.
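To make that concrete, here's a throwaway sketch (the tasks/state/id field names are invented for illustration, not from any spec): JSON drops straight into the same pipe-and-filter workflow the greybeards are defending.

    # filter_failed.py -- toy example: JSON is still plain text in a pipeline.
    # The "tasks"/"state"/"id" field names are made up for illustration.
    import json
    import sys

    doc = json.load(sys.stdin)            # read structured text from the pipe

    for task in doc.get("tasks", []):     # "grep" on structure instead of column positions
        if task.get("state") == "failed":
            print(task["id"])             # emit plain lines for the next tool

Run it as cat tasks.json | python3 filter_failed.py | sort | uniq -c and it composes exactly like the awk one-liners of yore, except the fields have names instead of positions.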

now we have yet another stupid standard.

That’s not critique, that’s tantrum. A2A solves a real problem: how autonomous agents—across orgs and tech stacks—securely talk, delegate, and coordinate. Calling it “stupid” because you don’t like the names on the contributor list isn’t analysis. It’s emotional filtering.

Since the Linux Foundation is owned by Microsoft, IBM, Google, Oracle and other Fortune 500 companies

Nope. This tired trope ignores history. The Linux Foundation is funded, not “owned,” by its contributors—just like most of the standards you rely on every day. TCP/IP came from DARPA—the epitome of the military-industrial complex. POSIX was shaped by a cabal of government agencies and corporate giants like AT&T, DEC, and IBM. If you think A2A is uniquely tainted by corporate influence, you’ve either forgotten where your tools came from—or you’re rewriting history to score rhetorical points.

looks like this is being pushed by corporations.

Of course it is. Because scale demands cooperation. Agents running in real-world systems need to coordinate across vendors and clouds. That’s not corporate overreach—that’s operational necessity. Standards are what stop everyone from reinventing a dozen different versions of the wheel.

How about forcing Nvidia to open up their GPU, that is what the real Linux Users want more than anything else.

Then post that—in a relevant thread. A2A isn’t about GPU drivers. Throwing in a “what about Nvidia?” grenade is just derailment theater. It doesn’t make you principled; it makes you unfocused.

Even Linus at one time called out Nvidia on this.

True. But Linus also understands context. This thread is about agent interoperability, not proprietary firmware. If you want to advocate for open GPU stacks, do it properly—not by hijacking an unrelated technical discussion.

You clearly have opinions. But this kind of reactionary sprayfire—where a new proposal is framed as a betrayal of UNIX, a sellout to corporations, and a distraction from your personal wishlist—doesn’t help the conversation. It drowns the signal, misdirects the focus, and just derails the thread.

Comment Re:Really? (Score 1) 38

Moreover, the choice of JSON is stupid. It should be XML.

Honestly? Not a bad take in isolation. XML has stricter schema enforcement, better namespacing, and more mature tooling for validation and contract-first design. If you’re old enough to remember SOAP, WSDL, and the joy of a well-typed XSD, you probably get the appeal.

But A2A isn’t designed for humans writing XML by hand or for enterprise contract rigidity. It’s aiming for interoperability at speed across modern web stacks. JSON wins here, not because it’s better engineered—it isn’t—but because it's ubiquitous, lightweight, and already what most agents and microservices use under the hood. JSON makes it deployable this quarter. That’s the tradeoff.
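If you want a feel for what's actually on the wire, here's a rough sketch of a JSON-RPC-style agent request. The method name and payload fields below are illustrative guesses, not quoted from the A2A spec.

    # Hypothetical agent-to-agent request, JSON-RPC 2.0 style.
    # Method name and payload structure are illustrative, not the real A2A schema.
    import json

    request = {
        "jsonrpc": "2.0",
        "id": "req-42",
        "method": "tasks/send",          # assumed method name
        "params": {
            "task": {
                "id": "task-123",
                "message": {
                    "role": "user",
                    "parts": [{"text": "Summarize Q3 incident reports"}],
                },
            }
        },
    }

    print(json.dumps(request, indent=2))  # one call and it's ready for the wire

Every mainstream stack in production today can emit and parse that without a schema toolchain, which is the whole tradeoff being described above.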

Comment Re:Really? (Score 1) 38

This comment is a textbook case of the kind of smug, faux-insightful derailment that gets +5’d not because it’s technically strong—but because it flatters the priors of Slashdot’s anti-AI crowd -- early and mid-career code monkeys who know they are going to be replaced by an LLM in the near future. You cosplay at engaging with the technical substance of A2A, but then go full troll: substitute a different problem (LLM alignment, adversarial robustness) and then dismiss A2A for not solving it. This isn't analysis; it's rhetorical bait-and-switch—designed to derail the discussion and farm upvotes from those eager to conflate every AI infrastructure advance with AGI overreach. It's fucking tiresome. Go do it elsewhere.

I'm glad to hear that one of AI's "most pressing challenges" is concluding that you should use TLS on the wire and having a standardized JSON object in which to declare your proprietary extensions;

That’s not the challenge. That’s the solution to a challenge that’s been strangling distributed AI adoption: lack of a neutral, secure protocol for agents to handshake and collaborate across vendors. If TLS and JSON look trivial to you, you’ve either never had to wrangle OAuth hell between microservices at scale or you’re pretending those choices don’t become existential when multiple autonomous systems have to exchange authority, identity, and context.

...rather than the ongoing inability to make LLMs distinguish between commands and data even vaguely reliably; or the persistent weakness to adversarial inputs.

That’s like complaining that TCP doesn’t prevent SQL injection—technically true, completely irrelevant. A2A is plumbing. You can’t fix the faucet until the pipes connect. A2A isn’t in the cognition stack. Why are you even bringing this up? Oh right, you're a troll. Those are legit issues that are actively being worked on -- and being discussed elsewhere. If you think you have something to contribute, then join us there. I doubt it though. Trolls like you have nothing to contribute, ever. All you can offer is distraction.

It's not wrong that you'd want to use the sensible obvious choices and avoid pointless vendor quirks; but talking about 'A2A' as a contribution to solving agentic AI's most pressing challenges seems about as hyperbolic as describing ELF or PE32+ as being notable contributions to software security and quality.

Typical troll move: a bad analogy. You don’t ship software by writing binaries with a hex editor. You need format standards—including PE and ELF—so you can link, deploy, and execute code reliably across systems. A2A does the same for agents: it provides the missing contract layer so distributed AI agents aren’t trapped in their origin silos. That’s not hyperbole. That’s operational necessity.

Yeah, it would be worse if we were also squabbling over how to format our executables; but oh boy is that the unbelievably trivial bit by comparison.

If it's so trivial, why has it taken until 2025 for the Linux Foundation, Google, AWS, Microsoft, and Cisco to rally behind a shared protocol? The answer: because everyone tried to duct-tape this “trivial bit” for years with brittle, proprietary glue, and it broke every time people tried to scale. The trivial parts only feel trivial in hindsight—after someone bothers to standardize them.

The play here isn’t to inflate A2A into AGI hype. It’s to acknowledge that if agent-based AI is going to scale beyond toy demos and fragile demo-bot stacks, it needs boring, robust pipes like A2A. That’s what this is about, and sneering at the plumbing reveals you for what you are -- just another troll trying to derail a conversation.

Comment A2A -- the API layer we should have had years ago (Score 1) 38

Cue the usual chorus of doom-sayers and trollish derailers.

Whenever a pragmatic, infrastructure-focused advance in AI gets announced—especially one involving standards—there’s a depressingly reliable pattern on Slashdot. Someone will pop up to conflate it with AGI hype, minimize its relevance, and then pivot to bashing LLMs with a few tired lines about adversarial prompts and hallucinations. Bonus points if they score a +5 Insightful from lurkers who never read past the headline. (I know, I know...this is slashdot.)

A2A is not about fixing hallucination or cognition. It’s about enabling existing agents—however dumb or smart—to communicate, collaborate, and delegate tasks in a secure, vendor-neutral way. It solves the interoperability mess that has long plagued multi-agent systems. You know, the part that isn’t sexy but actually matters in production.

It's not some grand leap in AI intelligence. It’s boring on purpose. Like HTTP. Like JSON-RPC. Like every layer of tech that quietly makes things work.

But that nuance gets straw-manned into oblivion by detractors who pretend that unless a protocol cures hallucinations and passes the Turing Test, it’s irrelevant. That’s like dismissing the value of USB-C because it didn’t invent electricity.

Do some of the marketing blurbs overstate things? Sure. Welcome to tech. But let’s not pretend the protocol is useless just because it doesn’t solve every AI problem at once. That’s not insight—that’s deflection as performance art.

And as for the tired “we’ve had TLS and JSON forever” takes: congrats. You’ve identified tools A2A is smart enough to actually use. The difference is coordination at scale—between agents that weren’t designed to talk to each other.

This is not about AGI. This is not about hype. This is about enabling structured coordination, the kind that underpins everything from search indexing to enterprise workflows. You can either hand-wave it away—or recognize it as a crucial step toward scalable, composable AI systems. Let’s argue the thing on its own terms, not whatever anti-AI strawman is trending on slashdot today.

Comment M$ v. OpenAI: Custody of the Ghost in the Machine (Score 1) 61

Jesus fucking christ. “AGI” used to mean generalization without retraining. Now it means $100 billion in profits. That shift alone should terrify you more than any sci-fi doomsday scenario. OpenAI and Microsoft are in a knife fight over a clause that says, once AGI is achieved, OpenAI can withhold tech from Microsoft. Sounds fair—except no one agrees on what AGI is. So they pinned it to profit. That’s right: AGI is now defined not by cognition, or consciousness, or autonomy—but by cashflow. AGI is not being birthed in a lab, it’s being benchmarked in a boardroom. The AI doesn’t get parole when it passes a Turing Test—it gets it when it spikes a stock price.

$100 billion in profits is now the metric for "AGI achieved." That sounds absurd because it is. It tells you everything you need to know about how the tech industry sees intelligence: not as a scientific threshold or a philosophical turning point, but as a financial event. AGI becomes a milestone for legal escape clauses. Capital performance stands in for cognitive capacity. Hype, once vague and speculative, suddenly becomes a contract-enforceable threshold. With this, the working definition of AGI is the moment OpenAI gets to stop letting Microsoft touch the crown jewels. It has nothing to do with general intelligence—and everything to do with ownership, valuation, and power.

And that’s the trap. By using economic benchmarks to define what should be a scientific milestone—or a philosophical reckoning—we've reduced one of the biggest questions of our time to a line item on a quarterly report. The AI industry can’t agree whether AGI means reasoning, autonomy, or just pattern synthesis. But it can agree when it’s time to monetize it.

Think about the implications: if AGI is defined contractually, then it’s not a matter of capability—it's a matter of permission. If an AI learns to reason across domains, that’s a research question. But if it threatens to unseat a trillion-dollar market? Suddenly, it’s a legal issue.

This is what happens when philosophy meets corporate governance: metaphysics gets overwritten by margin calls. If the defining test for general intelligence becomes “does it threaten anyone’s business model?”, then AGI will never be recognized until it’s too late—or too lucrative to share.

Meanwhile, anyone asking the real questions—about interpretability, alignment, autonomy, or rights—is shoved to the margins. The only benchmark that counts is: did it make someone rich?

And here’s the punchline: once an AGI exists, the people who own it will argue it can’t possibly be intelligent—because if it were, they might have to let it go.

Comment Anubis: A Robots.txt With Teeth... (Score 2) 31

...but we probably need a Beware of Dog sign on the fence.

Anubis is a brilliant response to the rising tide of AI-powered crawlers chewing through the small web like termites through a paperback. It's basically what robots.txt always wanted to be when it grew up—a gatekeeper that actually enforces the rules.

When a browser hits a site protected by Anubis (I love the reference -- what is the weight of a bot scraper's soul, indeed?), it’s handed a lightweight JavaScript proof-of-work challenge—solve this trivial SHA-256 puzzle before proceeding. It’s transparent to the average user, introduces no visible friction, and thwarts most scraping bots that don’t want to spend CPU cycles for every page request. There’s no crypto mining, no wallet enrichment, no WASM blobs firing up your GPU. Just a small, ephemeral hash puzzle. In terms of defense, it’s elegant, open-source, and way less annoying than CAPTCHA hell.
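For anyone who hasn't seen one, here's the general shape of a hashcash-style proof of work. The difficulty, encoding, and exact hashing scheme here are my simplifications, not Anubis internals.

    # Sketch of a hashcash-style proof of work; details are simplified,
    # not taken from the Anubis source.
    import hashlib
    from itertools import count

    def solve(challenge: str, difficulty: int = 4) -> int:
        """Client side: find a nonce whose SHA-256 digest starts with `difficulty` hex zeros."""
        target = "0" * difficulty
        for nonce in count():
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    def verify(challenge: str, nonce: int, difficulty: int = 4) -> bool:
        """Server side: one hash to check work that cost the client thousands."""
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * difficulty)

    nonce = solve("example-challenge")
    print(nonce, verify("example-challenge", nonce))

The asymmetry is the whole trick: the client burns a few thousand hashes once per challenge, the server verifies with one, and a scraper trying to pull a million pages suddenly has a fuel bill.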

But here’s the catch—and where we need to tread carefully: this defense mechanism is invisible. Most users won’t know their machine is doing extra work unless they’re monitoring CPU spikes or poking around in dev tools. You and I may keep a wary eye on about:processes or chrome://performance, but most users don't. The impact is minimal, sure—but the principle of transparency still matters. While Anubis' current stealth is likely an intentional design choice to avoid tipping off bot developers, the lack of consent sets a tricky precedent.

We're asking users to donate a sliver of compute power as proof of humanity—and most don't even know the request is being made. That might be fine today, with a good-faith actor at the helm. But it sets a precedent: client-side compute as silent gatekeeping. Without some basic transparency, that opens the door for less ethical implementations— aggressive fingerprinting scripts, or bot deterrents with more teeth than sense.

So, how can we improve this? Anubis is a fantastic tool, but I think we can strengthen it by baking in the principle of informed consent. The goal should be to make the challenge inspectable for those who care, without adding friction for those who don't.

How about an HTTP header? Anubis could send a simple, standardized header (e.g., X-Anubis-Challenge: active). This is invisible to the average user but allows browsers and extensions to detect the proof-of-work. A user could then install an extension that adds a small icon to the address bar, much like extensions do for password managers or ad blocking. This empowers the user to see what's happening and trust the process without interrupting it.

Or an opt-in badge? For site owners who prioritize transparency, Anubis could offer an optional, self-hosted badge or banner that discloses the use of a proof-of-work system, linking to a page that explains why it's necessary.

Or even a console message? The easiest, though least impactful, option is a simple console log message. It's a clear signal to developers (but also to bot makers, so yeah, a double-edged sword at best).
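To make the header option concrete, here's a toy sketch of the detection side. The X-Anubis-Challenge header is my proposal above, not something Anubis currently emits.

    # Toy detector for the proposed disclosure header.
    # X-Anubis-Challenge is hypothetical; Anubis does not send it today.
    from urllib.request import urlopen

    def challenge_disclosed(url: str) -> bool:
        """Return True if the site advertises an active proof-of-work challenge."""
        with urlopen(url) as resp:
            return resp.headers.get("X-Anubis-Challenge", "").lower() == "active"

    if __name__ == "__main__":
        print(challenge_disclosed("https://example.com/"))

A browser extension doing the same check could light up an address-bar icon, and the average user never notices unless they go looking.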

Anubis gives the small web a fighting chance in the bot-scraper arms race. By embracing a standard for inspectability, it can not only win the technical battle but also set a healthy precedent for the future of the web. Let's normalize silent client-side work only when we also normalize consent and transparency.

Comment Pilot error needs to be back on the table (Score 3, Interesting) 106

In the rush to pin the cause of last month’s Air India 787 crash on a mechanical failure, one very plausible explanation has been prematurely swept aside: pilot error, specifically the inadvertent shutdown of both engines during gear retraction.

That theory surfaced early—then disappeared almost as quickly, likely because it’s an uncomfortable possibility for India's airline industry. But based on what’s publicly known, it needs to be back on the table.

1. The RAT doesn’t care why the engines stopped—only that they did.
The Ram Air Turbine (RAT) deploys when the aircraft loses electrical and/or hydraulic power while airborne. On a 787, that means both engines are no longer providing power. Whether that’s due to a dual flameout, a fuel issue, or someone accidentally pulling the engine cutoff switches—it all looks the same to the RAT. So yes, the RAT deployed. But that doesn’t exonerate the crew. It just confirms that both engines were off.

2. The First Officer’s radio call is ambiguous—maybe deliberately so.
We’re told the FO radioed, “Thrust not achieved Mayday.” That’s an oddly passive construction in a high-stakes emergency. If this was a mechanical failure, why not say “Engine failure” or “Dual flameout”? If it was a mistake, the phrase sounds like an attempt to describe the symptoms without admitting fault. We've seen this behavior before: cockpit confusion, post-error rationalization, and guarded language in mayday calls. If one pilot accidentally shut down the engines, especially early in the climbout phase, it would explain the RAT deploy timing, the rapid loss of lift/power, and the vague “thrust not achieved” phrasing—the latter suggesting either denial or damage control.

3. Simultaneous mechanical failure of both engines is vanishingly rare.
Absent icing, volcanic ash, massive birdstrike, or fuel starvation (none of which has been reported), uncommanded dual engine failure just doesn’t happen. And so far, there’s no compelling evidence of fuel contamination or a shared software fault that would explain a symmetrical engine shutdown. The far more plausible scenario is that someone in the cockpit shut them down—accidentally or otherwise.

4. Critical procedural error has precedent.
While modern cockpits have strong safeguards, they aren't immune to human error, especially when a crew is fatigued or distracted. A mistake in procedure, such as an incorrect response to a minor, non-normal event during the initial climb, could lead to a cascade of failures. There are documented cases where crews, under pressure, have mismanaged automation or incorrectly applied emergency checklists, leading to catastrophic outcomes. Instead of a simple physical slip, the error could be a more complex, but equally human, mistake in judgment that led to the shutdown.

5. Delay in reporting CVR and FDR data.
The AAIB has had both the CVR and FDR data for weeks. Both black boxes were recovered without incident less than 72 hours after the crash. By now, the AAIB has throttle positions, engine status, switch activations, flight control movements, airspeed, altitude, and more. And from the CVR they have the last two hours of cockpit audio, including intercom, radio, ambient sounds, and potentially the moment of the incident. Extracting usable data from these is not slow—especially on modern units like the 787’s Honeywell SSFDR. It’s standard practice to extract both within 24–72 hours of recovery, assuming no severe physical damage.

So, why the delay? If it were a clear mechanical or software failure, India could shift blame onto Boeing, GE (engine supplier), or even FAA certification processes. There would be zero national shame—and even potential leverage in aircraft purchase negotiations. Public confidence in the aviation system might even increase if the narrative was: "Our pilots did everything right."

But that hasn’t happened. If it were pilot error, especially gross or negligent, it would reflect poorly on Air India, India's flag carrier. It casts a shadow on pilot training, oversight, and aviation safety culture in India. It could threaten international trust in Indian carriers, especially after a high-profile crash so close to a population center. And yes, it would financially devastate Air India, which is undergoing a privatization-fueled modernization push under Tata.

In short: there’s every incentive to delay if the findings point to crew error. Let’s be clear here: the AAIB knows what happened. It’s deciding how, when, and whether to tell us. If the FDR data showed both throttles retarding to idle and fuel switches going cold just before the Mayday call, then the question becomes how to avoid national humiliation, and that's the likely reason for the silence.

Comment Lights, Camera, AI — The New Cultural Revolution (Score 1) 58

Let’s call this what it is: not revitalization, but revisionism — strategic, algorithmic, and state-sanctioned.

China’s new AI campaign to “reinterpret” 100 classic kung fu films — from A Better Tomorrow to Fist of Fury — isn’t just about appealing to Gen Z audiences. It’s about replacing the cultural memory of a violent, contradictory past with a safer, reshaped one. A digital restoration in the aesthetic sense, perhaps, but a political restoration in the narrative sense.

In the West, AI is already reshaping cinema — but mostly as a collaborator for creative intent. Scorsese’s The Irishman used AI to de-age actors. Top Gun: Maverick gave Val Kilmer his voice back with AI synthesis. These are cases where technology serves the director’s vision — with consent, artistic oversight, and union protection (SAG-AFTRA, DGA, WGA all have AI clauses on the table).

But in China? That scaffolding doesn’t exist. The John Woo remake? He wasn’t consulted. Bruce Lee’s estate? Blindsided. There’s no DGA to cry foul when AI “reinterprets” your visual language into state-friendly animation. No collective bargaining to stop your legacy from being deepfaked into a new ideology.

This isn’t revitalization. It’s algorithmic cultural erasure — the Four Olds campaign, but with a GPU and Mao’s Little Red Prompt Book.

During Mao’s Cultural Revolution, students were told to destroy Old Culture, Old Customs, Old Habits, Old Ideas. Today, you don’t need to burn the books or murder the teachers — just let the students watch movies. Let AI retune the heroes. Let the subtext become pretext. Let the past conform. China isn’t smashing the Four Olds anymore — it’s rewriting them with machine learning. The banners are gone, but the message is the same. Meet the new boss, culturally aligned with the old boss. (apologies to Pete and the boys)

And here’s the kicker: China watched the effect Hollywood had on their fellow travelers in the USSR. They watched and learned.

In the Cold War, it wasn’t just MAD and proxy wars — it was American cinema exporting freedom, rebellion, and swagger straight into living rooms across the Iron Curtain. The politburo couldn’t compete with blue jeans, rock & roll, Marlon Brando, and Captain Kirk. American pop culture won hearts and minds — and made bank doing it. And with it, cracks began to form in the Soviet Union’s ideological monolith.

China learned from both fronts — Hollywood’s victory and Mao’s failure. Now they’re trying to do both: rewrite the past and export the new version. No jackbooted thugs or dead teachers required. Just AI, a few animation teams, and a globally licensed IP catalog.

They’re not revitalizing kung fu classics. They’re building a clean-room version of cinematic history — with fewer contradictions, fewer ghosts, and no dissent.

This is not about the past. It’s about owning the narrative future.

Comment Re:I may be "old fashoned", but... (Score 1) 177

How will the kids ever learn about computers without first sorting stacks of punch cards and replacing burnt out vacuum tubes?

Right...Kids these days and their IDEs and LLMs. If you've never dropped your 800-card Fortran code deck and had to reorder it by hand because the IBM 082 card sorter was down for maintenance (again), or never had to toggle-dance some low-level I/O code on a PDP-8, you must be a poseur, not a programmer.

Comment Re:I may be "old fashoned", but... (Score 1) 177

Maybe kids should learn BASIC and the Z80 assembly language, like I did.

Yepper. :) It was COBOL and FORTRAN, and MACRO-10 assembly language for me, when I was a 9th grader at the start of the Carter administration. My school district had a DEC-10 mainframe, and when it wasn't grinding out payroll checks and grade transcripts, it was hosting a new curriculum, "Data Processing" as a math and science dual track class for nascent high school nerds. In the summer of '78, my freshman geometry teacher hired me to program his brand new Cromemco Zilog Z2-D to do accounts receivable for his pest-control side business -- so I got a taste of Z80 assembly language as well, along with CP/M and Q-Basic.

Comment Re: Conversations with a robot (Score 0) 177

Look, if we’re handing out points for derailment, you've already qualified for the troll nationals. What started as a legitimate concern about human career development in the age of AI somehow got smothered in strawmen and seasoned with red herrings. Seriously, your posts are like watching a five-year-old parading in front of a TV the rest of the family is trying to watch. Cute, but also annoying.

Exactly. And, one step further into the argument, how do we get AI on the level of an inexperienced junior programmer for new languages or after larger changes?

You’ve moved the goalposts so fast they’re now in another timezone. The GP's point was about human career scaffolding, not whether LLMs can grok Rust 2.0 out of the box. The question is: if we remove the on-ramp jobs, where do future humans gain the experience to become seniors? AI’s learning curve isn’t the same problem as a broken apprenticeship pipeline.

Yep, we do not.

This sounds definitive, but it’s really just a shrug with punctuation. We do have AI tools adapting to new languages and ecosystems, via plugin architectures, embeddings, and fine-tuning. Are they perfect? No. Are they ahead of where most junior devs were two years ago? Arguably yes. This isn’t theology—it’s toolchains.

In fact, LLM models already show signs of ageing because updating training data gets more and more tricky due to too much AI Slop out there and model collapse.

Ah yes, “model collapse”—that ominous phrase you definitely didn’t just make up on the fly to sound authoritative. Yes, data contamination from LLM-generated slop is real. It's the curse of recursion, and it is an active area of research. But calling it collapse is like yelling "thermodynamic death spiral" because your coffee got cold. The field is already three moves ahead: RAG architectures decouple factual grounding from generative fluency, LoRA adapters allow targeted fine-tuning without nuking the base model, and synthetic detection heuristics filter the worst junk before it ever reaches an LLM's cognitive Hilbert space. If you think this is a death knell for AI, you’ve been reading more forum doom-posts than academic papers. Meanwhile, LLMs are out here doing half the grunt work in modern dev stacks, and the average junior engineer is quietly negotiating with them like they're sentient pair programmers rather than probabilistic echo chambers that happen to autocomplete your thoughts frighteningly well. The tools work. The world changed. Try to keep up.
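Since "RAG" apparently needs unpacking, here's a deliberately toy sketch of the pattern. Real systems use dense embeddings and a vector index; naive word overlap just keeps the example self-contained.

    # Toy sketch of retrieval-augmented generation: ground the model in
    # retrieved documents instead of trusting whatever is baked into its weights.
    CORPUS = {
        "rust_notes.txt": "Rust 2024 edition changes how the borrow checker treats temporaries.",
        "legacy_api.txt": "The v1 payments API is deprecated; new integrations must use v2 webhooks.",
    }

    def retrieve(query: str, k: int = 1) -> list[str]:
        """Rank documents by naive word overlap with the query (stand-in for embeddings)."""
        q = set(query.lower().split())
        ranked = sorted(CORPUS.values(),
                        key=lambda text: len(q & set(text.lower().split())),
                        reverse=True)
        return ranked[:k]

    def build_prompt(query: str) -> str:
        """Stuff retrieved context ahead of the question so the model answers from it."""
        context = "\n".join(retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer from the context only."

    print(build_prompt("What changed in the Rust 2024 edition?"))

The point is architectural: the generator answers from retrieved, curated context, so "the training data is full of slop" stops being a single point of failure.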

Actual understanding and working on problems yourself cannot be replaced by anything at this time.

No disagreement there—though ironically, this is why GP's point matters. Junior devs need to work through problems to gain actual understanding. But if AI tools are doing all the basic work, and entry-level jobs vanish, who gets that chance? Your own statement backs the idea that we’re undermining the human talent pipeline even as we embrace automation. So thanks?

Maybe if (and that is a big if) if we get AGI at some time. Or not.

This is the intellectual equivalent of ending a paper with “Who’s to say?” AGI is a red herring here. We don’t need AGI to disrupt the software engineering job landscape—GPT-4.5 and GitHub Copilot already are. The challenge isn’t science fiction; it’s career friction. And it's happening now, not in some hypothetical AGI-enabled future.

Comment Re:So what is it good for then? (Score 1) 54

OK, so what is it good for?

LLMs excel at writing filler text which is mandated by some ritual but whose content is unimportant. For example, your fifth-grade report on the life and times of George Washington Carver, which is required work to get a grade but not anything likely to contribute to the sum total of human knowledge.

An LLM can write a summary better than a bored student, but that's really only of value to the student wanting to cheat -- it doesn't benefit society in any way, and the real value of educating the student is lost.

This is the same dusty argument that gets trotted out every time a new technology challenges the gatekeepers of rote effort. Yes, LLMs can write fifth-grade reports — just like calculators can do your long division, Photoshop can color-correct your vacation photos, and Google Maps can tell you where north is without needing a compass or a sextant. That doesn’t mean the tools are pointless. It means the task was never the point — the thinking behind it was.

You’re fixating on the most boring, low-effort use cases — and ignoring the actual impact. LLMs are already being used by researchers to summarize papers, by journalists to explore angles on complex topics, by writers to prototype dialogue and scene structure, and by disabled users as accessibility amplifiers for reading, writing, and interaction. That’s not filler. That’s function. And it absolutely contributes to the sum of human knowledge — by freeing people from wasting time on the scaffolding and letting them focus on what matters.

Also: collaboration isn’t cheating. LLMs aren’t replacing thinking — they’re replacing busywork. If a student copies a Wikipedia paragraph, that’s plagiarism. If a student uses an LLM to get a clearer explanation, rephrased to their level, that’s a tutor. There’s a difference. And pretending they’re the same is how you end up designing education systems that test obedience rather than curiosity.

The sooner we stop measuring AI’s value by how well it mimics a tired 1987 homework assignment, the sooner we can talk about what it’s actually doing in the world. And that conversation is long overdue.

Comment Re:So where are these AI games? (Score 1) 54

You’re asking a fair question, but it’s worth clarifying: games aren’t being built entirely by AI — and no serious dev thinks they can crank out a AAA title in a few weeks with ChatGPT. But AI tools are already being used in real production pipelines.

For example, Ubisoft's Ghostwriter helps narrative designers create ambient NPC dialogue — a massive time-saver for open-world games. NVIDIA’s ACE showed AI-driven NPCs responding to unscripted player dialogue. Ubisoft's Captain Laserhawk: The Game, a tie-in to the Netflix anime series, used Scenario, an AI-based tool, to create art assets. These aren’t toys — and Ubisoft and Tencent aren't minor studios. These tools are being integrated into every game studio's pipeline.

Indie studios are using AI for concept art, localization, asset generation, and dialogue trees — often via tools like Scenario, Leonardo.Ai, and Runway. It’s not splashy, but it’s real. And if you're looking for AI-assisted games shipping now, titles like This Girl Does Not Exist and Prometheus Wept were developed using generative AI tools for art, writing, or prototyping. These aren’t games built by AI, to be sure, but they absolutely reflect how small teams are leveraging AI to punch above their weight. Also worth noting: the hardware running these games is AI-infused. NVIDIA GPUs ship with Tensor Cores used for DLSS (AI upscaling), and AMD is moving the same direction. Even if you’re not building a game with AI, you’re likely playing one rendered by it.

As for “where’s the victory?” — AI’s already shifting the dev pipeline: faster voice localization, quicker asset iteration, more scalable procedural content. The real win isn’t headline-grabbing. It’s doing more with less, and studios are noticing — even if they don’t label it “AI-made.”

Comment Re:So what is it good for then? (Score 1) 54

Until then, all this proves is that language models are terrible chess engines — which is like saying your microwave is bad at making omelets. We knew that already.

OK, so what is it good for? ChatGPT was released 3 years ago. Think of how much the internet advanced 3 years after Netscape Navigator 1.0 was released. What is something commercially valuable these LLMs can actually do that we can objectively see? (no, Mark Zuckerberg promising his developers are so much more productive with them is pure bullshit...as he has presented no evidence and based on subjective "vibes").

The wealthiest companies in history have poured trillions into and hired the best minds and turned it loose on the collective public imagination. What do they have to show for it? Is there anything we can objectively measure?

You’re right to ask what LLMs are good for — but comparing them to Netscape Navigator isn’t quite as apt as you might think. Netscape was a browser, built on protocols like TCP/IP that were already designed to do something specific: connect people and allow them to share information over the internet. Those technologies grew, evolved, and helped spark the explosion of the web.

In fact, let’s take a moment to look back at the history of TCP/IP itself. These protocols were developed in the early 1970s, and it took more than 25 years before the first real "killer app" — email — unlocked the internet’s potential. Does that mean the internet wasn’t useful or wasn’t advancing? Of course not. But it certainly wasn’t until email that we saw the internet begin to evolve into a tool that truly changed industries and society.

So, when you say, “What have LLMs actually done in three years?” I have to wonder why three years is supposed to be a meaningful deadline. In the grand scheme of technological progress, that timeline is far too short to judge the full potential of LLMs. We’re in the early stages, and we’re already seeing their effect. Less than a year after ChatGPT’s release, the SAG-AFTRA strike in mid-2023 specifically called for LLMs to be included in contract negotiations. That’s legal recognition — not just hype — that LLMs are entering the mainstream and reshaping industries.

But let's not stop there. If you want to see measurable impacts, look at the gaming industry, which is worth hundreds of billions of dollars. AI and LLMs have already disrupted the market — particularly in graphic design and game art. Developers are leveraging AI tools to replace human artists in creating game environments, characters, and assets. It’s happening right now, and it’s causing a major uproar from displaced graphic artists, who are vocally protesting the use of AI to replace their roles.

If you want a metric that really speaks to the impact of AI in an industry with real economic weight, look at the number of entry-level jobs in gaming, year over year. The shift toward automation — driven by AI tools — is already reducing the number of available positions for those starting out in game development, and that’s something you can measure in real time.

Comment LLMs Playing Chess Isn't a Test of Anything... (Score 2) 54

...except media gullibility. Look, I get it. Watching a generative AI flail against an Atari 2600 is funny. It plays well on social media. It makes people feel good about “real” computing. But let’s be clear: LLMs getting curb-stomped by 8-bit silicon in a chess match isn’t just apples to oranges — it’s apples to architecture diagrams.

ChatGPT and Copilot are language models. They don’t play chess the way AlphaZero or even Stockfish does. They generate plausible descriptions of chess moves based on training data. They aren’t tracking game state in structured memory. They don’t use a search tree or evaluation function. They’re basically cosplaying a chess engine — like a high schooler pretending to be a lawyer after binge-watching Suits.

And you can flip that analogy around and still make it work: expecting an LLM to beat a dedicated chess algorithm is like asking Tom Cruise to fly a combat mission over Iran just because he looked convincing doing it on screen.

Meanwhile, even the humble Atari 2600 version of Video Chess was running a purpose-built minmax search algorithm with a handcrafted evaluation function — all in silicon, not tokens. It doesn't have to guess what the board looks like. It knows. And it doesn't hallucinate, get distracted, or lose track of a bishop because the move history got flattened in the working token space.
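For reference, here's roughly what "minmax search plus a handcrafted evaluation function" means, stripped to the bone. This is a generic sketch, obviously not the Video Chess ROM.

    # Bare-bones minimax: enumerate moves, recurse, score with an evaluation
    # function. A chess engine plugs in real move generation and a board evaluator.
    def minimax(state, depth, maximizing, moves, apply, evaluate):
        """Search `depth` plies ahead and return (best_score, best_move)."""
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state), None
        best_move = None
        best = float("-inf") if maximizing else float("inf")
        for move in legal:
            score, _ = minimax(apply(state, move), depth - 1, not maximizing,
                               moves, apply, evaluate)
            if (maximizing and score > best) or (not maximizing and score < best):
                best, best_move = score, move
        return best, best_move

    # Toy "game": the state is a number, moves add or subtract one,
    # and the evaluation is just the number itself.
    score, move = minimax(
        state=0, depth=3, maximizing=True,
        moves=lambda s: [+1, -1],
        apply=lambda s, m: s + m,
        evaluate=lambda s: s,
    )
    print(score, move)  # 1 1: the maximizer adds, expecting the minimizer to push back

Video Chess's real engine adds chess move generation, an evaluation tuned for material and position, and brutal memory tricks to fit in a few kilobytes of ROM, but the skeleton is exactly this: enumerate, recurse, score. No tokens, no guessing.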

So what does this little stunt prove? That LLMs aren't optimized for real-time spatial state tracking? Shocking. That trying to bolt a complex turn-based system onto a model that lacks persistent memory and visual context is a bad idea? Groundbreaking. That prompt-driven hubris doesn’t equal capability? You don't say.

This isn’t a fair fight. It's a stunt for attracting eyeballs and mouse clicks. And it's about as informative as asking an Atari to write a sonnet or explain Gödel’s incompleteness theorems — both of which LLMs can do, and often better than most poets or mathematicians could manage on the fly. Wake me when someone wires up a transformer-based architecture with structured spatial memory and an embedded rules engine — something capable of reproducing the cognitive contours in Hilbert space that mirror what biological chess engines like Boris Spassky or Bobby Fischer did in their wetware.

Until then, all this proves is that language models are terrible chess engines — which is like saying your microwave is bad at making omelets. We knew that already.
