Comment Re:Treating others as human (Score 1) 81
Thanks for the thought-provoking post. I’m one of those people who says “please” and “thank you” to chat bots, and I don’t find it weird at all. For me it’s mostly habit transfer from human conversation. I’ve always written that way in email and spoken that way in person, so my “LLM voice” ends up inheriting the same phatic fluff. “Please” and “thank you” are semantically null, but they’re not functionless. They’re social grease, not information payload.
With LLMs, there’s a secondary effect: style is part of the input. Everything you type in a session sits in the context window, so if you consistently talk to a model in a particular register, you nudge its response distribution in that direction. Within a conversation that accumulates into a stylistic prior: the model infers “this user likes terse answers,” or “this one likes long, nerdy digressions with footnotes,” or “this one always starts with ‘Hey, can you help me’.” I like that my stylistic quirks can feed back into the session, even if it’s just a faint bias in token probabilities.
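To make the “style is input” point concrete, here’s a purely illustrative sketch (no real API; the turn format and function names are made up) of why phatic tokens end up conditioning the model: chat front-ends concatenate the whole conversation history into the context, so your “please” and “thanks” are literally part of the tokens the model predicts from.

```python
# Toy illustration: chat history, politeness and all, becomes model input.
history = []

def add_turn(role, text):
    history.append({"role": role, "content": text})

def build_context(history):
    # A real system tokenizes this; either way, the model conditions
    # on the full string, so style markers count as input.
    return "\n".join(f"{t['role']}: {t['content']}" for t in history)

add_turn("user", "Hey, can you help me? Please keep it terse. Thanks!")
add_turn("assistant", "Sure. Ask away.")
add_turn("user", "What's a monad?")

ctx = build_context(history)
print("please" in ctx.lower(), "thanks" in ctx.lower())  # → True True
```

Nothing magic: the “faint bias” is just next-token prediction conditioned on a context that happens to contain your manners.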
I agree with you that some people do blur the line. There’s an entire micro-industry of YouTube channels role-playing “sentient” AI for engagement. If people are already inclined to anthropomorphize, and their feed is full of “ChatGPT had an EMOTIONAL BREAKDOWN” thumbnails, they are definitely going to treat a statistical text generator like a trapped ghost in the machine. All I can say here is that P.T. Barnum was not wrong about the birthrate.
I think your concern about habits leaking from bots to humans is not misplaced -- if we get used to barking orders at a Roomba, when do we start abusing our barista? That seems plausible to me. But I’d rather err on the side of reflexive courtesy, even with things that obviously can’t feel it.

The real risk isn’t that saying “please” to a chat bot will erode our empathy; it’s that we outsource judgment to it and stop checking whether the answers make sense. Politeness is harmless, whether it’s directed at your chat bot or your next-door neighbor. Uncritical deference to either, though, is not.

Are people gullible about AI? Sure, some are, and that can be (and already is being) exploited for profit. Does saying “please” to Bixby or Siri meaningfully contribute to that? I’m not convinced. For a lot of us, it’s just the same small-talk no-op we already use with humans to lubricate the social gears of conversation, now applied to a very fancy autocomplete that happens to mimic human conversation impressively well.