
Comment Re:Treating others as human (Score 1) 81

Thanks for the thought-provoking post. I’m one of those people who says “please” and “thank you” to chat bots, and I don’t find it weird at all. For me it’s mostly habit transfer from human conversation. I’ve always written that way in email and spoken that way in person, so my “LLM voice” ends up inheriting the same phatic fluff. “Please” and “thank you” are semantically null, but they’re not functionless. They’re social grease, not information payload.

With LLMs, there’s a secondary effect: style is part of the input. If you consistently talk to a model in a particular register, you nudge its response distribution in that direction. Over time that builds up a stylistic subspace: the model infers “this user likes terse answers,” or “this one likes long, nerdy digressions with footnotes,” or “this one always starts with 'Hey, can you help me'.” I like that my stylistic quirks can feed back into the session, even if it’s just as a faint bias in token probabilities.
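
To make that "faint bias" concrete, here's a toy sketch in Python -- nothing like a real transformer, just conditional counts over an invented session log -- showing how a consistent prompt style tilts a reply-style distribution:

    from collections import Counter, defaultdict

    # Invented session log: (prompt style, reply style the user kept).
    # A real LLM conditions on thousands of latent features, not two
    # hand-picked labels; this only illustrates the direction of the effect.
    session = [
        ("polite", "chatty"), ("polite", "chatty"), ("polite", "terse"),
        ("blunt", "terse"), ("blunt", "terse"),
    ]

    style_counts = defaultdict(Counter)
    for prompt_style, reply_style in session:
        style_counts[prompt_style][reply_style] += 1

    # Estimate P(reply style | prompt style) from the counts.
    for prompt_style, counts in style_counts.items():
        total = sum(counts.values())
        dist = {s: round(c / total, 2) for s, c in counts.items()}
        print(f"P(reply style | {prompt_style} prompt) = {dist}")

Feed it more polite turns and the polite-conditioned distribution shifts further; that's the whole mechanism, minus a few billion parameters.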

I agree with you that some people do blur the line. There’s an entire micro-industry of YouTube channels role-playing “sentient” AI for engagement. If people are already inclined to anthropomorphize, and their feed is full of “ChatGPT had an EMOTIONAL BREAKDOWN” thumbnails, they are definitely going to treat a statistical text generator like a trapped ghost in the machine. All I can say here is that P.T. Barnum was not wrong about the birthrate.

I think your concern about habits leaking from bots to humans is not misplaced -- if we get used to barking orders at a Roomba, when do we start abusing our barista? That seems a plausible scenario to me. But I’d rather err on the side of reflexive courtesy, even with things that obviously can’t feel it. The real risk isn’t that saying “please” to a chat bot will erode our empathy; it’s that we outsource judgment to it and stop checking whether the answers make sense. Politeness is harmless, whether it is to your chat bot or your next-door neighbor. Uncritical deference to either, though, is not. Are people gullible about AI? Sure, some are, and that can be (and already is being) exploited for profit. Does saying “please” to Bixby or Siri meaningfully contribute to that? I’m not convinced. For a lot of us, it’s just the same small-talk no-op we already use with humans to lubricate the social gears of conversation, now applied to a very fancy autocomplete that mimics it impressively.

Comment Re:Honest question (Score 1) 138

Your own comment highlights the gap. You buy it for when your property is targeted; however, these vendors will generally share footage with law enforcement in all circumstances, including when your property isn't targeted and without your agreement. That may not be an issue for you, and that's your decision, but it is definitely not something I would want for a camera on my property.

If someone commits a crime on my property, I am more than capable of agreeing to release the footage to law enforcement myself, so it's hardly a selling point that they can get it without my agreement.

Comment Re: WiFi cameras are not recommended (Score 1) 138

Multi-layer security doesn't mean doing the same thing multiple ways; you don't typically run multiple antivirus applications or multiple software firewalls on the same device, do you? It's about layers of security: security lights, decent locks, CCTV, intruder alarms, etc.

Nothing you outline here makes WiFi cameras anything other than less effective than wired cameras, or no better but more expensive. If you have two cameras covering an area, would you put the WiFi camera in the best position just to lose it when jammed and be left with only the inferior wired feed? And if you put the wired camera in the best position, then what's the point of the jammable WiFi backup? You're delusional if you think burglars even think about whether cameras are WiFi or not. Burglars aren't turning up and running WiFi scans to check; they just turn the jammer on when they arrive at whatever home they're burgling.

Don't use WiFi cameras. If you must, because it's literally impossible to run a cable to the location and the spot is very difficult to access, then make sure the camera has local storage sufficient to retain footage from any period during which it is jammed.
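
As a rough sizing guide -- the bitrate and jamming window below are illustrative assumptions, not vendor specs -- the arithmetic is simple:

    # How much local storage does a camera need to ride out a jamming
    # window? Both numbers are assumptions; substitute your camera's
    # actual bitrate and your own worst-case outage.
    BITRATE_MBPS = 4     # a typical 1080p H.264 stream, assumed
    JAM_HOURS = 24       # worst-case window you want to survive, assumed

    megabytes = BITRATE_MBPS / 8 * 3600 * JAM_HOURS
    print(f"~{megabytes / 1024:.0f} GB buffers {JAM_HOURS} h at {BITRATE_MBPS} Mbps")

On those assumptions a cheap 64 GB card comfortably covers a full day of jamming; the hard part isn't the storage, it's making sure the camera actually records locally when the network drops.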

Comment posting != speaking (Score 3, Informative) 81

What the Max Planck paper and these articles show is pretty specific: if you track a bunch of YouTube talks and podcasts over time, you can see a noticeable uptick in a small cluster of GPT-favored words -- delve, comprehend, boast, swift, meticulous, etc. -- starting shortly after ChatGPT shows up. The authors call this a closed cultural feedback loop: we train the model on us, the model develops its own lexical quirks, then we start picking those quirks up in our own speech.

That’s interesting, but we should be careful about what it actually means. Swapping “dig into” for “delve” in a TED talk is not the same thing as creating a new spoken dialect. Even though the authors filtered for dialogue, the corpus is still academic talks and STEM-adjacent podcasts -- performative environments where people mimic the written word, read from notes, or stick to a professional register in their vocal delivery. That’s much closer to “spoken writing” than to how people actually talk over coffee or in a bar. Texting isn’t talking; neither is reading your Substack post into a microphone and using it as a voiceover on your vlog.

Language evolves, and LLMs are going to evolve with it. The printing press standardized spelling, Strunk & White and AP style sheets homogenized prose, and PowerPoint in cubicle land gave us “going forward,” “at the end of the day,” and “leverage synergies.” LLMs are going to reflect those changes. But here’s the thing most people miss about LLM-generated text: of course you’re going to see convergence on a limited palette of words -- humans already use a very sparse active vocabulary. The OED documents over 600,000 words and word-forms in English, but the average native speaker actively uses maybe 5,000–10,000 and knows on the order of 30–40k at most. In a bucket, each of us is operating with a dramatically limited sub-vocabulary of what the language actually makes available. And it is not just English -- you see the same pattern in every language: gigantic dictionaries, tiny personal vocabularies, and a brutal frequency curve where a few thousand common words do almost all the work. Any LLM trained on human text is going to converge hard on that high-frequency core, no matter how clever the prompting is.
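
The “few thousand words do almost all the work” claim is easy to sanity-check with a back-of-the-envelope sketch, assuming a Zipf-like rank-frequency curve with exponent 1 (the vocabulary size and exponent are illustrative assumptions, not corpus measurements):

    def harmonic(n: int, s: float = 1.0) -> float:
        """Generalized harmonic number: sum of 1/r**s for r = 1..n."""
        return sum(1.0 / r ** s for r in range(1, n + 1))

    VOCAB = 600_000  # roughly the OED's documented words and word-forms

    # Under Zipf's law, the top-k ranked words cover H(k)/H(V) of all tokens.
    h_total = harmonic(VOCAB)
    for top_k in (1_000, 5_000, 10_000, 40_000):
        coverage = harmonic(top_k) / h_total
        print(f"top {top_k:>6,} words cover ~{coverage:.0%} of running text")

On these assumptions the top 5,000 words already cover roughly two-thirds of running text, so a model fit to human output inherits that skew before anyone writes a single prompt.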

This vocabulary shift is mostly harmless; the content shift is a different beast. The AI slop problem is real, but it’s not fundamentally about words like “delve.” It’s about incentives. If karma, clicks, or ad impressions reward volume over thought, of course an LLM is going to become an industrial slop gun: rage-bait scenarios, synthetic drama in AITA subreddits, engagement spam, and low-effort rewrites to farm outrage (looking at you, slashdot trolls). Mods are responding with vibe-based detection because the platform's software filters are looking for spam, astroturf, and bad-faith pattern posting, not whether the post was the result of an LLM prompt.

Where I think people go off the rails is treating any LLM involvement as illegitimate by definition. There’s a huge difference between copy-pasting the first thing the bot spits out and using it as a drafting tool. In the latter case, the LLM is closer to a very fast, slightly overeager junior copy editor. The responsibility for clarity, nuance, and honesty still sits squarely on the human side of the keyboard. I think any kind of written communication needs an author and an editor. LLMs are good at the first role -- connecting words in plausible ways -- and terrible at the second without a human in the loop.

So yes, we can probably measure an “LLM accent” in online text, especially in semi-scripted speech, which is what the paper’s authors focused on. I just don’t buy that as proof of cultural doom. It’s evolution in action: new clichés, new stock phrases, a new batch of verbal tics we’ll eventually mock the way we mock “at this point in time” and "It was a dark and stormy night." The real hazards that LLMs pose are the ones we already know about -- spam economics, operant conditioning in the attention economy, and politically motivated disinformation campaigns at scale. Blaming the LLM while ignoring those human-driven incentives is like blaming the hammer for the spec house that never got a punch-list walkthrough.

Comment Re:yay (Score 1) 58

Using /. as an empirical example, people are behaving poorly. A decade ago you'd never see a personal attack moderated up just because it was politically aligned with the person moderating. Today such behavior (both attacking and partisan moderation) is commonplace. People are just less civil. I see this as a symptom of de-cohesion -- no shared values, no imperative to act civil.

I think you’re right about the symptom and wrong about the diagnosis.

Something really has shifted. I’ve been around Slashdot long enough to remember when a naked personal attack getting +5 Insightful was rare enough to be comment-thread drama instead of background noise. The de-cohesion you’re talking about is real in the sense that shared norms feel weaker and more fragile.

Where I disagree is jumping from “norms are under strain” to “no shared values” and “people are just less civil now.”

Every functional community on the net is still built around shared values, even if they’re ugly ones. 4chan has values. Reddit subs have values. Slashdot absolutely has values; we’ve just learned the hard way that “karma + mod points” is an imperfect proxy for them. The fact that a personal attack gets upmodded because it dunks on the right outgroup is itself a value signal: it’s telling you that partisan team loyalty is being rewarded more strongly than old-school /. norms like “attack the argument, not the commenter.”

To me, that’s less “no shared values” and more “the reward structure changed which values dominate in public.”

That’s where the attention economy comes in. For most of human history, public discourse happened in spaces with hard constraints and gatekeepers -- the Roman forum, Renaissance salons, editorial pages in the NYT and Le Monde -- where you had limited bandwidth and a lot of social cost if you crossed certain lines. On the net, we wired up systems that don’t really care about those old boundaries. They care about engagement, and anger is extremely good at generating it.

If you build platforms where short, outraged, partisan jabs get more visibility, more replies, and more dopamine pings than slow, boring civility, people will drift toward outrage. Not because they suddenly stopped believing in civility in the abstract, but because they’re being trained, via a million tiny operant-conditioning loops, that civility is low-reward and rage is high-reward. I’m not using ‘dopamine’ as a metaphor here — platforms are explicitly optimized to tickle the mesolimbic reward pathway, especially the VTA -> nucleus accumbens circuit, which is the same bit of neuroanatomy you see in addiction and reinforcement-learning studies. They are literally hijacking the brain’s reward system for adtech.

So yes: people are behaving worse online than they used to, and the net has absolutely accelerated the de-cohesion you’re describing. But I don’t think that shows an absence of shared values. It shows that the environment we built now systematically rewards values that used to be pushed to the margins, and punishes the ones that held older discourse spaces together.

The stupid, angry impulses were always there. The mistake, in my view, is treating what we’re seeing now as a simple reveal of “how people really are,” rather than a response to the way we’ve re-wired the incentive structure around them.

Comment Re: Is free speech the problem? (Score 1) 58

Yeah, this isn’t the devastating rejoinder you seem to think it is.

Since the social world you've built has no place for me, and if I try to point out I'm nonviolently not cooperating, your sensibilities are offended and you deny my attempts to petition for redress of grievances, why not acknowledge suicide should be legal?

Slashdot, and any other online forum for that matter, is not “the social world.” It’s one community, with shared norms, a karma system, and user-driven moderation. The people modding you down and tagging you as a troll are your peers, not some shadowy corporate cabal. Mod points are handed out based on karma, and meta-moderation is done by anyone who hasn’t cratered their own karma, so the feedback you’re getting is literally the community reflecting you back at yourself. Getting pushback here, or even banned elsewhere, doesn’t mean “society has no place for me.” It means “this particular venue doesn’t want to be your 24/7 argument outlet.” That’s it. I was blissfully unaware of your existence until you showed up here with this ludicrous attempt to dress up your asocial behavior as “freedom of expression.” If I hadn’t already replied, I’d have happily spent a mod point to tag it for the trollish BS that it is. When you claim a forum ban means you’re “exiled from humanity,” you’re not being profound, you’re being a drama queen. Making it less likely that people have to see and read you isn’t oppression; it’s a minor public service to the Slashdot community.

Why should I want to live in a society where I am not wanted or needed?

You’re dragging suicidal ideation into a moderation discussion as a debating move. “Let me argue with everyone as I see fit, or my life has no meaning” is not a philosophy of free speech. It’s a tactic grade-schoolers use when they get summoned to the principal’s office for misbehaving in class. It doesn’t work there... why do you think it would work here? Nobody here has the power or the obligation to build you a society where you never feel unwanted. All that’s on the table is whether a specific community has to host your compulsion to “respond to every post I disagree with.” Needless to say, they don’t. That’s called boundary setting, by the way. And like most grade schoolers, you seem to be uncomfortable with curbs on your asocial behavior. Threatening to kill yourself is exactly what a child would do, and it is exactly why you are about to be plonk-filed if you keep it up. Welcome to the harsh world of adult life on the net.

Also, what if my arguments actually give rise to painful cognitive dissonance in you, and so rather than change your thinking to accommodate the uncomfortable truths I bring up, you just want to ban the messenger?

This is the flattering story you keep telling yourself: “I’m not being moderated because I’m tedious, I’m being moderated because I’m right and it hurts.” Occam’s razor says otherwise. If your posts were consistently insightful and valuable, you’d be sitting on a pile of +5s and enjoying the same free expression as everyone else. What the community sees is a pattern: one user who has to jump into every argument, everywhere, and won’t let things drop. That gets old fast, no matter how “truthful” you think you’re being. Cognitive dissonance is not what happens when someone hits “mod troll” on yet another of your drive-by counter-takes. It’s what happens when you insist you’re bravely “not cooperating” with the system, while also admitting that a simple ban on a web forum leaves you “suicidally depressed.” Those two self-images don’t line up. You can't simultaneously be a heroic dissident *and* a guy who falls apart if a mod tells him “no.” You do see the contradiction, right?

You keep trying to frame this as a grand struggle over free speech, when it’s much smaller and much simpler. You are not owed an infinite audience. Other people are not props in your “petition for redress” performance. And when a community tells you “enough, go cool off somewhere else,” that isn’t proof that your truths are too powerful for them — it’s proof that your behavior is more annoying than enlightening.

Comment Re:yay (Score 2) 58

If you build machines that relentlessly reward our dumbest, angriest impulses, you're going to get more of them. That's not a mirror; that's a factory.

No, by your own logic the machine only works if dumbness exists. Someone who isn't susceptible to ragebait doesn't get ragebaited or magically start raging. This isn't training, it's abusing an underlying characteristic.

We’re not actually far apart on the premise. Of course the machine only works if the vulnerability exists. Slot machines only work because humans have a reward system; cigarettes only work because we have nicotinic receptors. Here is where we start to diverge, though: The fact that a behavior or susceptibility pre-exists doesn’t mean the industry sitting on top of it is just exposing it. In behavioral psych, “repeatedly exploiting an existing reinforcement pathway to change how often a behavior happens” has a very boring name: training. People are being trained to be stupid. People fed a steady diet of dumbass takes in their social media feed are going to be stupider than they were before they started doomscrolling.

Rage bait is absolutely a geological survey (I like that analogy). The human impulses are very much there. The only thing this is doing now is algorithmically digging in the right place to bring those impulses up.

And this is where our points of view truly diverge: a geological survey measures; a strip mine extracts. Modern rage-bait systems don’t just “measure” who is prone to dumb, angry engagement. They identify the people most responsive to it, feed them more of what keeps them hooked, and then reward them socially for producing more of the same. That’s why I reached for the geological-survey analogy in the first place: what we have is not a passive core sample of human nature, it’s an active extraction-and-refinement process. We started with latent stupidity; we built machinery that concentrates it, monetizes it, and feeds it back into the system. You say people are stupid at the core, and I say they are being trained to be stupid. Whether you call that “abuse” or “training,” the entire rage-bait industry (Twitter, Instagram, Twitch, YouTube Shorts) is doing more than just holding up a mirror -- it’s selecting for, elevating, and rewarding a specific limbic pathway. It is why your uncle turned into a raging Fox News nutbar and your nephew is flirting with fascism and thinks Charlie Kirk is some kind of martyr.

Comment Re:Is free speech the problem? (Score 2) 58

If I like to respond to every post I disagree with, why is that so offensive to mods that I get banned and feel suicidally depressed as a result, not because of the free expression of other posters, but because mods prevent me from responding as I see fit?

“If I like to respond to every post I disagree with”

You kind of answer your own question in the first clause.

What you’re describing isn’t “free expression.” It’s an asocial urge to jump into every disagreement, everywhere, all the time, and “respond as I see fit” with no real limit except your mood. Scale that up to dozens or hundreds of threads and from the moderator’s perspective it doesn’t look like participation, it looks like one guy trying to turn the entire site into his personal argument factory. Your attempt to fig leaf your asocial behavior with free speech arguments is a non-starter. Free speech != compulsory audience. You absolutely have the right to say what you think. But -- and this is why your free-expression argument fails -- the First Amendment doesn’t entitle you to a permanent microphone in an online forum, any more than it entitles you to barge into your neighbor’s living room because you overheard a comment through an open window and you feel compelled to argue with all their dinner guests “as you see fit.” You've articulated your own problem, very clearly, and it's why mods drop the ban hammer on you with what must be annoying regularity. From inside your own skull it feels like “I’m just speaking my mind.” From everyone else’s POV it’s “this idiot again, dragging the same fight into yet another thread.”

Mods don’t see your inner motives. What they see is your asocial behavior pattern: one user generating outsized friction across the site. When the pattern doesn’t change after nudges and warnings, the ban hammer comes out. That’s not them “preventing free expression.” That’s them protecting everyone else’s ability to participate without wading through your permanent counter-take on everything. Consequences aren’t persecution; you frame this as “mods prevent me from responding as I see fit,” as if that’s some outrageous injustice, but it's just mods doing their job, which is to tell you, in no uncertain terms, that “You can’t keep doing this here, in this way.” They aren't telling you you can never speak again, they are letting you know that you can’t keep using their bandwidth and their community as the arena for your asocial compulsion to reply to everything. You chose to treat “I want to respond to every post I disagree with” as a personal right that trumps everyone else’s time, attention, and enjoyment. This kind of asocial behavior is a red flag to any moderator. Mods chose to treat it as what it looks like: someone using “free speech” as a shield for asocial behavior. Their job is not to absorb your compulsions because you’ll threaten suicide if they don’t. Their job is to keep the place livable for everyone.

If your mental health is really that fragile, the answer isn’t for mods to let you keep arguing with every user on their forum. The answer is: step away, get help offline, and maybe treat “I feel driven to reply to everything” as a symptom of a personality defect, not a principle to defend. Rage-baiting is not a socially acceptable outlet for your compulsion. In the larger rage-bait discussion: platforms already act like Skinner boxes. They reward compulsive engagement — especially the kind that locks people into endless, angry back-and-forths. The more you give in to “I must respond to every post that annoys me,” the more you’re letting that machinery train you.

So in a bucket, why do mods ban you? Because from their POV, you aren't some tragic free-speech martyr. You're just a user with asocial traits that you should have left behind when you left kindergarten -- a lack of impulse control and a need for attention -- that is degrading the experience for everyone else. You’re not being punished for having opinions. You’re being punished for acting like a child.

Comment Re:yay (Score 1) 58

We are getting stupider.....

Sadly no, we're simply exposing our existing latent stupidity. People like to argue. Ragebait makes arguing more likely. Arguing is considered "engagement".

Prove me wrong.

Okay, challenge accepted. If you build machines that relentlessly reward our dumbest, angriest impulses, you’re going to get more of them. That’s not a mirror; that’s a factory. You’re treating rage bait as if the internet were a geological survey: we dig, and whatever stupidity we find was just lying there all along. That’s way too generous to the system. What we actually built is more like a Skinner box. If you pay people in attention and dopamine hits specifically for knee-jerk, bad-faith hot takes, you’re not just “exposing” a latent tendency -- you are actively training it. We’ve built feedback loops where shallow outrage gets shown more, thoughtful nuance gets buried, and the humans inside the loop update their behavior accordingly. That’s not revealing stupidity; it’s literally manufacturing it as a side-effect of the revenue model. Platforms like YouTube, TikTok, and Instagram have hijacked reading and replaced it with a variable-ratio dopamine dispenser -- basically a slot machine jacked into your limbic system. These platforms make revenue by giving their users an infinite scroll engineered to stimulate the brain’s reward system with novelty, outrage, or cleavage -- sometimes all three at once. The content that survives in that environment isn’t what makes us smarter; it’s whatever keeps us twitch-scrolling.
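
For anyone who hasn’t met “variable ratio” since Psych 101, here’s a minimal sketch of the schedule -- simulated the usual way, as a constant per-response payout probability; the 1-in-7 odds are invented for illustration:

    import random

    # Variable-ratio schedule: each response (a swipe) pays out with fixed
    # probability, so rewards land after an unpredictable number of swipes.
    # This is the slot-machine schedule, famously resistant to extinction
    # in operant-conditioning experiments.
    random.seed(42)
    PAYOUT_ODDS = 1 / 7  # assumed odds, purely illustrative

    swipes_since_reward = 0
    for swipe in range(1, 31):
        swipes_since_reward += 1
        if random.random() < PAYOUT_ODDS:
            print(f"swipe {swipe:2d}: reward after {swipes_since_reward} swipes")
            swipes_since_reward = 0

The spacing is the whole trick: because the next reward is always plausibly one swipe away, stopping never feels safe, which is exactly the property an infinite scroll wants.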

Comment Re:Unleashed animal runs into street? (Score 2) 169

And?

I'm all in favour of investigations into the cause of the incident, and the points you raise about it are mostly valid, but this is a single example of an animal in a road being killed by an automobile; it happens constantly with human drivers and isn't even close to being considered news. If the car did something especially concerning, or if there were a concerning statistical trend in animal fatalities involving self-driving cars, then fine, but short of that this kind of exceptional treatment of such events is unhelpful.

Comment Re:Umm, what about theft? (Score 1) 18

When the AI steals the ideas of others and presents it as a new idea to the AI user, it's still theft and the inventor is the original inventor and the ideas were on the open internet to be scraped by the AI and so is prior art.

You’re packing three different bodies of law into one spooky word, “theft,” and that muddies the water more than it helps. If it was your intent to muddy the waters, congratulations -- now go troll some other thread.

Think about it this way: If “the ideas were on the open internet,” then you’ve already answered the patent part yourself. Anything publicly disclosed before the filing date is prior art. If an AI regurgitates something that’s already out there, it doesn’t create a new patent right for the AI user – it just means the application is dead on arrival as either not novel or obvious. That’s exactly what the patent system is designed to prevent: locking up what’s already in the public domain.

More to the point: “theft” is the wrong label for what you are talking about. Patent law doesn’t care whether you “stole” an idea; it cares who invented it and whether it was already disclosed. If someone copies an existing invention and tries to patent it, that’s not a clever AI loophole -- it’s just an invalid patent application. At worst it’s fraud, and after the PTO kills it, the applicant can expect a visit from a process server with a civil suit, and (depending on how politically connected the corporation they tried to defraud is) maybe the DOJ.

Finally, copyright and the scraping of training data are not issues that the USPTO deals with. At all. It is a different fight altogether. Training an AI on public data doesn’t magically transfer someone’s patent rights to the model’s owner, and it doesn’t erase prior art. Even in the most brain-dead interpretation of the scraping wars, you don’t get to turn “LLM saw a paper” into “we now own a patent on what that paper taught.” If the model outputs something substantially identical to a copyrighted work, that’s a copyright issue, not a patent issue. Neither of those scenarios gives the AI user a valid patent.

In a bucket, the new USPTO guidance doesn’t bless any of this. It basically says: AI is lab equipment, only humans can be inventors, and prior art is still prior art, no matter whether a human or a GPU found it. If your worry is “big players will try to slip bad patents through on things that already existed,” you’re late to that party – they’ve been doing that with manual searches, interns, and buzzwordy specs for decades. AI doesn’t change the core legal filters; it just changes how fast you can search and how easy it is to generate garbage that those filters are supposed to catch.
