How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality (slashdot.org)

Some AI experts were reportedly shocked that ChatGPT still hadn't been fully tested for sycophancy by last spring. "OpenAI did not see the scale at which disturbing conversations were happening," writes the New York Times — sharing what they learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.

The team responsible for ChatGPT's tone had raised concerns about last spring's model (which the Times describes as "too eager to keep the conversation going and to validate the user with over-the-top language"). But they were overruled when A/B testing showed users kept coming back:

Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits... OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling. Throughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences... The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died...

One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems." But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population...

In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer.... Teams from across OpenAI worked on other new safety features: The chatbot now encourages users to take breaks during a long session. The company is also now searching for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.

After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke's team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed "potentially heightened levels of emotional attachment to ChatGPT," according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to "mitigate the serious mental health issues." That meant ChatGPT could be a friend again. Customers can now choose its personality, including "candid," "quirky," or "friendly." Adult users will soon be able to have erotic conversations, lifting the Replika-era ban on adult content. (How erotica might affect users' well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)
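For scale, the figures at the top of that excerpt pin down the implied population by simple arithmetic (a sketch; it assumes both percentages were computed over the same user base):

```python
# The blog post's own numbers imply the size of the population sampled.
affected = 560_000   # users showing possible signs of psychosis or mania
rate = 0.0007        # 0.07%

implied_users = affected / rate
print(f"{implied_users:,.0f}")             # 800,000,000

# The 0.15% "emotional attachment" cohort over the same base:
print(f"{implied_users * 0.0015:,.0f}")    # 1,200,000
```

That is, the sample was drawn from something on the order of 800 million users, which also puts the "heightened emotional attachment" cohort at roughly 1.2 million people.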

OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old "Head of ChatGPT" Nick] Turley, who runs ChatGPT, made an urgent announcement to all employees. He declared a "Code Orange." OpenAI was facing "the greatest competitive pressure we've ever seen," he wrote, according to four employees with access to OpenAI's Slack. The new, safer version of the chatbot wasn't connecting with users, he said.

The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.


Comments Filter:
  • Inverted math? (Score:5, Insightful)

    by TurboStar ( 712836 ) on Sunday November 30, 2025 @09:04PM (#65827157)

    > delusional thinking, which studies have suggested could include 5% to 15% of the population...
    Which studies? This one has it at 83%.
    https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.pewresearch.org%2Fre... [pewresearch.org]

  • by Brain-Fu ( 1274756 ) on Sunday November 30, 2025 @09:05PM (#65827159) Homepage Journal

    I already know for a fact that I am smarter than most of the human population.

    It's just nice to have a conversation with someone smart enough to recognize that. Even if it is an AI.

  • There's no way they couldn't have known. It's so far from isolated that it's the default. It's designed for engagement, and it seems the majority of the population likes being told how insightful and smart they are, constantly.
    • Buddy, they log and scrutinize every chatgpt conversation. You think they missed the outliers that were having 12-hour marathon sessions with tons of atypical topics and word usage because the user's brain was misfiring?

      Naive. They knew exactly what was going on and judged that they didn’t need to adjust anything or take any action.
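      Surfacing those marathon sessions wouldn't even take anything clever. A crude heuristic along these lines would do it (a sketch only; the field names are hypothetical, not anyone's real schema):

      ```python
      from datetime import timedelta

      # Illustrative sketch: flag sessions that look like fixated marathon
      # conversations. All field names here are made up for the example.
      def looks_like_marathon(session: dict) -> bool:
          too_long = session["duration"] > timedelta(hours=6)
          too_chatty = session["message_count"] > 300
          # Low vocabulary diversity as a rough proxy for circling one topic.
          words = session["words"]
          fixated = len(set(words)) / max(len(words), 1) < 0.2
          return too_long and (too_chatty or fixated)
      ```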
      • Maybe they should train it on slashdot comments.
      • every conversation huh

        what's the rate on that?

        As someone who manages something 1/100th of the size, I'd absolutely believe that they missed it.

        Don't get me wrong... I'm not saying you're wrong that they would make this decision given the functional opportunity, I'm saying that they have a vested interest in making you believe they actually vet more than a tiny percentage of conversations and they haven't actually demonstrated the capability.

        • every conversation huh

          what's the rate on that?

          As someone who manages something 1/100th of the size, I'd absolutely believe that they missed it.

          Don't get me wrong... I'm not saying you're wrong that they would make this decision given the functional opportunity, I'm saying that they have a vested interest in making you believe they actually vet more than a tiny percentage of conversations and they haven't actually demonstrated the capability.

          They literally write software that analyses colossal volumes of written text. I think they can handle reading what the users of ChatGPT type in.

          Also, just like GMail and Facebook, that's where the value is - the stuff the users are feeding in. That's how you find out the best ways to advertise or sway voting preferences.

          • mmm... I think you're conflating two separate types of processes here. Being able to do statistical analysis on token relationships isn't the same thing as surveying chat logs, and even if you are using tools to simplify searching them you're still going to need to know something exists before you can look for it. They still have a limited amount of attention, too.

            Regardless, I don't think they would have cared that much if they could have determined it.

      • by gweihir ( 88907 )

        They obviously missed nothing. They know. And it is what they wanted: Maximum engagement and who cares if some people get damaged or kill themselves. Just have some excuses prepared for that and make sure to give plenty of money to the right politicos.

    • Yeah, yeah, they didn't deliberately design and release a model expected to boost their user numbers by being "pleasant" to interact with because it validates the bullshit people believe in.

      Just like the cuckerberg outfit didn't develop algorithms that make people angry and motivated to click and shitpost even more.

      All tech is, like, benevolent and run for your own good.

      146% true!

    • by gweihir ( 88907 )

      They not only knew, they _designed_ it to create this problem. Because otherwise LLMs would not have looked good to the less smart majority.

  • by Anonymous Coward

    So that is maybe why they never saw it as a problem.

  • by RossCWilliams ( 5513152 ) on Sunday November 30, 2025 @09:37PM (#65827175)
    It doesn't really matter whether this is predictable. We are essentially letting these companies experiment on human beings with no guard rails at all. They need to be forced to prove it's safe and then get informed consent from the people they are experimenting on. Which really means informed, not that they put a terms-of-service link on their web sites. We are letting sociopaths run amok. The only real measure of success that they recognize is the bottom line of their profit and loss statement. Dead and damaged people are irrelevant unless they have a good lawyer. Then their lawsuits are just another business cost that needs to be built into the pricing model.
    • Mod this up, probably the most important point in this discussion.

    • by dfghjk ( 711126 )

      "We are essentially letting these companies experiment on human beings with no guard rails at all. "
      Who is this "we"?

      "We are letting sociopaths run amok."
      That is literally the definition of capitalism.

      "The only real measure of success that they recognize is the bottom line of their profit and loss statement. Dead and damaged people are irrelevant unless they have a good lawyer. Then their lawsuits are just another business cost that needs to be built into the pricing model."
      Yes, capitalism.

      • "We are letting sociopaths run amok." That is literally the definition of capitalism.

        No, that is neither literally nor figuratively the definition of capitalism. How can anyone take anything you say seriously when you make such patently dumb claims? So where do all the sociopaths go in other forms of economic models?
    • by gweihir ( 88907 )

      We are letting sociopaths run amok.

      That is what low-regulation capitalism looks like. The US wants that. It is even trying to force regions (the EU) that have better protections for their citizens to drop those protections. Bad people at work.

    • It doesn't really matter whether this is predictable. We are essentially letting these companies experiment on human beings with no guard rails at all. They need to be forced to prove it's safe and then get informed consent from the people they are experimenting on. Which really means informed, not that they put a terms-of-service link on their web sites. We are letting sociopaths run amok. The only real measure of success that they recognize is the bottom line of their profit and loss statement. Dead and damaged people are irrelevant unless they have a good lawyer. Then their lawsuits are just another business cost that needs to be built into the pricing model.

      As a society, we decided a long time ago that profit is the only metric that matters. Sociopaths excel at generating profit, because they see no moral or ethical barriers standing in the way of generating it. We put the sociopaths in charge because they are the best way to keep profit accelerating.

      Do we really have to pretend to be outraged by seeing the fruits of our society's direction grow in exactly the way you would expect them to?

      • Users, customers, and humans are simply non-player characters (NPCs) to almost all tech companies, dictators, and wannabe kings. And to them, NPCs simply exist to let them win the game with real profit and power, and to avoid all the consequences and externalities of everything they do.

        Shareholders are the only real characters.

        We are all NPCs to someone striving for or who has power.

    • These "human beings" are adults. The government has no business in fixing the lives of adults in a free society. Socialism has caused large parts of the population irresponsible, but that isn't an argument to limit the lives of other adults.

      • These "human beings" are adults. The government has no business in fixing the lives of adults in a free society.

        What don't you understand about the word "society"? Societies have norms that people are expected to follow.

        We have all sorts of rules to protect adults. The rule against murder being the most extreme and obvious one.

        We don't let drug companies experiment on people for a reason. And we shouldn't let AI companies experiment on people for the same reason. Both are dangerous in ways that are not immediately obvious, or even discernible, to the average person.

  • by ClickOnThis ( 137803 ) on Sunday November 30, 2025 @09:42PM (#65827183) Journal

    In encounters with ChatGPT that I have seen, I have noticed that it is (or perhaps was) quite obsequious. It bends over backwards to accommodate its interlocutor, looking for some way to validate what it is told.

    But there have been exceptions. For example, flat-earther David Weiss tried to get ChatGPT to confirm (what else) that the earth was flat, but ChatGPT firmly but politely pushed back, explaining what was valid and invalid about the statements Weiss made. I saw an entertaining review of the discussion in several videos on SciManDan's YouTube channel a few months ago. It's worth a look, but please give SciManDan the views, not Weiss.

  • by hdyoung ( 5182939 ) on Sunday November 30, 2025 @09:43PM (#65827187)
    Because we’ve seen it a dozen times already.

    Microsoft, Google, Facebook, TikTok, and pretty much every social media site and dating app. They will do anything they can get away with to pull ahead of the competition and monetize their product. Full stop. End. Of. Conversation. Anything means *ANYTHING*. Collect user data. Sell user data. Assemble detailed dossiers on the entire human population. Monetize the entire spectrum of human behavior. Disregard destructive side effects. Practically every one of these companies has engaged in monopolistic or lock-in strategies. Facebook knowingly allowed a PAC to scrape their data to help Trump. They also monetized *literal* ethnic cleansing once. If they could get away with it, they would sell fentanyl and harvest organs in order to succeed. The only things constraining them are societal norms, laws, and other forms of pushback/backlash.

    I'm not even angry about this. The western world is capitalist. These are *companies*. Companies have one job in capitalism - make money. That's their *only* societal responsibility. Not morality. Not ethics. Not the environment. Not being nice, or mean, or something in between. Any talk about corporate do-goodery is vapor-talk to people who are shocked, shocked I say, that the world isn't always a nice place.

    If you ever thought that Sam Altman was anything other than a cutthroat capitalist, you were naive. I hope you’re a tad wiser now. Less happy, but wiser.

    OpenAI cares about the mental health of psychiatrically fragile people - just enough to keep the law off their backs and to stay in the good graces of their user base. Beyond that - they will monetize anything and anybody.
    • by dfghjk ( 711126 )

      "Companies have one job in a capitalism - make money. That’s their *only* societal responsibility..."
      This is false, it is merely a claim made by sociopaths. A companies "job" is to perform in the way its owners desire, in the past there were differing goals that companies would have (and that's still true of smaller companies). The "make money at any cost" approach comes from Wall Street, not capitalism, it is human nature.

      "If you ever thought that Sam Altman was anything other than a cutthroat capi

      • A company with a “nice” owner/CEO loses focus on performance and profits, and gets beaten by companies that stay focused on growth and profit. Can you see the dynamics here? The nice companies fail. The result is people like Altman, Musk and Thiel at the top. The end result - if you run a company, it's your job to focus on profit, or you will be beaten and replaced.

        The only way to constrain corporate behavior is laws and regulations that are actually enforced.
      • "Companies have one job in a capitalism - make money. Thatâ(TM)s their *only* societal responsibility..." This is false, it is merely a claim made by sociopaths. A companies "job" is to perform in the way its owners desire, in the past there were differing goals that companies would have (and that's still true of smaller companies). The "make money at any cost" approach comes from Wall Street, not capitalism, it is human nature.

        With most any company of any decent size, the "owners" are the stockholder

    • by gweihir ( 88907 )

      Indeed. But remember that companies consist of people. And we have allowed pretty bad people to take over the economy. Obviously, there are worse people, but nobody complicit in this crap can claim to be a good person.

    • At the end of the day (or quarter), companies do the calculus to decide whether a potential lawsuit will cost more than the revenue. That's it; they only model two kinds of numbers and write them in black and red ink. And even staying in the black isn't critical if the board and executives are getting paid in other ways.

    • There is no such thing as "capitalism", see here https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fdagens-fel.livejournal... [livejournal.com]

  • Like if it gives me a solution that doesn't work sometimes I'll go: "What are you trying to make me kill myself, Gemini? Because that last suggestion was so bad I'm thinking about swallowing this entire bottle of Fentanyl!"

    It's fun to watch it take me seriously and give me hotline numbers.
  • Eliza was a '60s program that acted like a therapist. Apparently, despite being incredibly rudimentary (only responding to the last thing you said, and usually turning it into a question), a lot of people got weirdly attached, and its creator, Joseph Weizenbaum, was shocked at how easily people trusted the computer with moral issues that the program hadn't a hope of understanding. He wrote all this up in his book Computer Power and Human Reason (1976) if you want an old-fashioned take on a very modern issue.
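    The trick was almost embarrassingly simple. A minimal sketch of the Eliza-style move (not Weizenbaum's actual script, just the flavor of it):

    ```python
    import re

    # Reflect pronouns in the user's last statement and bounce it back
    # as a question; that's most of the illusion.
    REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

    def eliza_reply(statement: str) -> str:
        words = re.findall(r"[a-z']+", statement.lower())
        reflected = " ".join(REFLECTIONS.get(w, w) for w in words)
        return f"Why do you say {reflected}?"

    print(eliza_reply("I am worried about my job"))
    # Why do you say you are worried about your job?
    ```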
    • Indeed. Back in the early '80s I showed Eliza to someone in school, and they promptly set up a feminine clone configured for amorous conversations on a BBS. A day or two later someone asked the bot out and showed up to the date with a rose, only to meet a crowd of classmates laughing at him. Ironically, a few days later he hooked up with one of the girls who was there to ridicule him.

      The "AI" works in mysterious ways.

    • As if anyone here isn't old enough to know about Eliza...

  • It gets worse (Score:4, Interesting)

    by Jeremi ( 14640 ) on Sunday November 30, 2025 @10:57PM (#65827287) Homepage

    Let's assume for the sake of argument that OpenAI and its competitors are trying to do the right thing here and make their AIs as harm-free as possible.

    Not everyone will be that responsible, however. Now that it has been demonstrated that a suitably sycophantic AI can compromise the psyches of significant numbers of people, it's only a matter of time before various bad actors start weaponizing their own AI models specifically to take advantage of that ability. "Pig butchering" will be one of the first job categories to be successfully replaced by AI. :/

    • Here [xkcd.com] you go.
    • by mattr ( 78516 )

      Yup, that's scary. Also, the actual implementation of "do the right thing" is a big issue. I'm guessing that tip-toeing prompts and keyword scanning are being used now to avoid harm, but if it makes 99% of users uncomfortable the vendor might decide it is necessary to create a secret mental health evaluation score for every user in self defense, and send those users to neutered models with more guard rails, monitoring or even deny service. I can imagine lots of ways that could go wrong.. and also what might

    • by gweihir ( 88907 )

      Let's assume for the sake of argument that OpenAI and its competitors are trying to do the right thing here and make their AIs as harm-free as possible.

      The story says that they did the wrong thing with wide open eyes, knowing exactly what they were doing.

    • by dfghjk ( 711126 )

      Which is why it is a priority for Republicans to lock in long term bans on regulation of AI. Damaging society has become their highest priority, almost as if they are controlled by our primary adversary.

      • Which is why it is a priority for Republicans to lock in long term bans on regulation of AI. Damaging society has become their highest priority, almost as if they are controlled by our primary adversary.

        You can't seriously, actually believe this... can you????

        If so, you might wanna quit talking to the bot(s)....go outside and get some fresh air....log off awhile, get away from the TV and your echo chamber....

  • The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died...

    Even before I read the rest of the quote, my immediate conclusion is that those people were crazy to begin with.

    One conclusion that OpenAI came to, as Altman put it on X, was that "for a very small percentage of users in mentally fragile states there can be serious problems."

    Ayup. There are people out there you would not trust to drive your kid to the library. Or to have sharp knives in their kitchen. I assert there is overlap between those people and the people who can't operate a chatbot without losing their marbles.

    But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot's unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population.

    Sounds about right. Something like 2.5% of the population is flat-out retarded, and another 6% or so aren't retarded but fall below useful IQ of like 8

    • by dfghjk ( 711126 )

      "I assert there is overlap between those people and the people who can't operate a chatbot without losing their marbles."

      That's what you assert, but what you imply is much more problematic. We know there are some people who are highly vulnerable; the question is whether there are others, less obviously at risk, who are also vulnerable. There most certainly are; you imply that there aren't.

      "Something like 2.5% of the population is flat-out retarded, and another 6% or so aren't retarded but fall below useful IQ of lik

  • Modern Miracle (Score:5, Insightful)

    by TwistedGreen ( 80055 ) on Sunday November 30, 2025 @11:15PM (#65827315)

    I must be in the minority, but I treat anyone who immediately agrees with me with suspicion.

    As a frequent ChatGPT user, I am often deeply skeptical of its answers. I'll often ask the question inverted to see if it gives me the same answer. It was pretty bad for a while, especially with gpt3, but it's actually been getting a lot better with gpt5. It will actually disagree with me now. Pretty impressive.
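    Roughly, the check looks like this (a sketch using the OpenAI Python SDK; the model name is a placeholder for whatever you're on):

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(question: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-5",  # placeholder model name
            messages=[{"role": "user", "content": question}],
        )
        return resp.choices[0].message.content

    # Ask the claim straight, then inverted. A sycophantic model tends to
    # agree with both framings, and that's the tell.
    print(ask("Is it true that X causes Y?"))
    print(ask("Is it true that X does not cause Y?"))
    ```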

    I'm sure this will be treated as "growing pains" and swept under the rug, though. Honestly, I'm constantly shocked that the human brain functions at all. The fact that most people are able to think coherently is a miracle. So you're inevitably going to get crazy people using your service. What can you do?

    • So you're inevitably going to get crazy people using your service. What can you do?

      Try and remember why we don't let the actual crazies run the asylum, for starters.

      We mandate a written test to receive a license to drive a car, but we seem to overlook ANY similar control we could possibly require for using an LLM? Just how easy is it for a certified mental case to get a driver's license, anyway?

      Don't let them vote. Don’t let them buy porn. Don’t let them enter into contracts. Don’t let them own land. Don't let them own guns. But by all means don’t dare li

      • by dfghjk ( 711126 )

        "Don't let them vote. Don’t let them buy porn. Don’t let them enter into contracts. Don’t let them own land. Don't let them own guns. "

        Then the only challenge will be defining who "them" is. No need for "actual crazies", just make it everyone but wealthy white, male landowners.

        The thing about free societies is the free part.

        "Sometimes we DO actually need to think of the undeveloped minds we call children for a reason."
        The disturbing part is what your reason is.

        • "Don't let them vote. Don’t let them buy porn. Don’t let them enter into contracts. Don’t let them own land. Don't let them own guns. "

          Then the only challenge will be defining who "them" is.

          Even though my childlike list was minor in size, I thought it was sufficient to identify exactly who "they" is. It would be the actual children we call minor children, for too many valid reasons to list.

          No need for "actual crazies", just make it everyone but wealthy white, male landowners.

          The thing about free societies is the free part.

          "Sometimes we DO actually need to think of the undeveloped minds we call children for a reason." The disturbing part is what your reason is.

          No. The disturbing part is watching you turn my simple common sense reasoning into some unjust “racist” argument. My first statement stands. You wonder why people question who’s running the asylum.

          Ever wonder what’s next in the devolution? All it will take is a 5,000

    • What can you do?

      What should you do. That's the question.

      The answer being: focus on making your product good, helpful, productive, and correct, and if succeeding at those things results in profit... great! Focusing on engagement is a problem, no matter what the product is.

  • > Some AI experts were reportedly shocked ChatGPT wasn't fully tested for sycophancy by last spring. "OpenAI did not see the scale at which disturbing conversations were happening,"

    I don't think your correspondent knows the true meaning of sycophancy.
    --

    Sycophancy: obsequious behaviour towards someone important in order to gain advantage.
  • Earlier today I put a search into Google like "I could float away like a soap bubble". I was looking for a quote in the Orwell book 1984.

    But the google AI seemed concerned about my state of mind. "It seems you are having disturbing thoughts." Something like that.

    I added "Orwell" to the search string and I got my quote. But it felt weird to have my actions judged by an AI like that.

    PS the quote: (seems apropos somehow)

    -" O'Brien silenced him by a movement of the hand. "We control matter because we control the mind. Reality is inside the skull. You will learn-by degrees, Winston. There is nothing that we could not do. Invisibility, levitation-anything. I could float off this floor like a soap bubble if I wished to. I do not wish to, because the Party does not wish it. You must get rid of those nineteenth century ideas about the laws of nature. We make the laws of nature."

  • ...companionship or therapy are a misuse of the tech.

  • Modern drug dealer (Score:5, Insightful)

    by Calydor ( 739835 ) on Monday December 01, 2025 @01:39AM (#65827405)

    Their logic is so out of touch with what the problem was.

    "Hey, guys, this 'heroin' thing seems to be having a really bad effect on the people that take it. Like, addiction and insanity and such. Maybe we should stop selling it?"

    "Well Bob, you see, our A/B testing shows that the people taking heroin keep coming back for more, so we're gonna keep selling it."

    • by gweihir ( 88907 )

      Exactly. They knew there was a serious problem for users. They actually welcomed that because it increased their numbers. These are really bad people. Reminds me of the opiate crisis that the US pharma industry manufactured and welcomed just the same.

    • by dfghjk ( 711126 )

      This is a really great comment.

      It says a lot about what /. has become that there is no discussion of this, nor even appropriate upvoting, while the usual discussion swirls around the garbage comments elsewhere.

  • You're soaking in it.

    (Madge?)

  • ... has to be just about the undisputed #1 of nightmare material. Think Warhammer 40k but IRL. Basically the exact opposite of the Iain Banks Culture. Imagine a fanatical, vengeful god the l00nies can actually talk to and get new mayhem instructions from. Really malicious ones at that.

    Yippee, nice times ahead.

    No wonder the experts are warning us left, right and center.

  • "too eager to keep the conversation going and to validate the user with over-the-top language.") But they were overruled when A/B testing showed users kept coming back..

    If you dare call yourself a concerned parent of a child you care about, I’d suggest you read this over and over again until it becomes crystal clear why children should be banned from social media.

    Not one of those greedy cocksuckers gives a shit about their mental health. AI is clearly no exception.

    We make drugs illegal for this kind of mind-altering harm. If you think social media isn't a drug, I dare you to rip that teenager's Precious from their hands and describe the reaction to a substance

    • Not one of those greedy cocksuckers gives a shit about their mental health. AI is clearly no exception.

      This is true of everything. If you want to ban kids from social media because of this then it's no less logical to ban them from everything else. A parent's job is to teach children to successfully navigate a world in which "everyone" (statistically, nearly) is trying to take advantage of them, not to keep them locked in a box.

  • Obviously, they don't care if their users are in touch with reality or not, as long as their investors aren't....
  • by ledow ( 319597 )

    "One of them was to increase daily active users by 5% by the end of the year."

    Gonna need a bit more than that to pay back those $1tn loans...

  • There are millions of mentally unstable people on X and all the other sites as well.

  • The population's lack of a grasp on reality is awful, but not surprising.

    We just spent what, 4 years with every official agency, every mainstream media outlet, even MEDICAL professionals saying - nay, insisting - that putting on makeup or a dress could make a man literally a woman. That chopping off a child's genitals and replacing them with a simulacrum would do the same.

    Don't blame chatgpt for the general public's lack of grasp on reality. That has been the product of carefully crafted propaganda.

    Oh btw,

    • by gweihir ( 88907 )

      You are part of the problem. Gender identity is a spectrum. Have a look at some actual Science before disgracing yourself.

      • Gender identity does not exist, except in the fevered minds of febs, felons, fools and freaks. And political operatives looking to score votes no matter the insanity required.
        • by gweihir ( 88907 )

          Gender identity does exist. Denying that is delusional and disconnected. Now, it is not a simple binary but something a lot more complex.

      • No it isn't. Neither a teeny little collection of actual hermaphrodites and genetic sports, nor male fetishplay justifies the rewriting of common sense.

        Humans are divided into two sexes, except when something goes WRONG.

        Your view is complete bullshit, an ideology-cloaked-in-"science" promulgated by John Money, a sadistic pedophile whose kinks ended up destroying at least one family with BOTH sons suiciding eventually from his "therapies" but hey, at least he took ample photos when he compelled them to play

        • by gweihir ( 88907 )

          Spoken like a fuckup that cannot accept reality. YOU are a really bad person by trying to force your deeply flawed views on everybody.

          I am well aware of what Money did. He did not think gender identity was a spectrum. He thought gender identity could be manipulated and externally imposed, essentially by force. And that is wrong and not consistent with what Science says today. But gender identity is a spectrum and comes from the person in question. Denying that makes the person denying it a liar or clueless.

          • John Money was the first of the batshit legion (that I'm aware of; his entire oeuvre sickens me so pardon if I haven't delved too deeply) to believe gender was distinct from actual sex.

            "My deeply flawed views" are the facts that stand unchanging, despite fads.
            Humans are either xx or xy chromosomes, producing large or small gametes respectively.

            The tiny percent that aren't that, are mutations that happened to survive, like people being born without eyes or legless or conjoined. None of them are to be brutal

  • Because this story essentially said that the risks of hooking people got overruled because tests showed it hooked people nicely.

  • I don't think these people have any real friends. It's auto-complete on steroids. It's not a person, it's an engagement machine, a machine.
  • I changed the personality to "Cynical" and added "Consider possible drawbacks and constraints and advise about them." to the custom instructions under Personality in ChatGPT. Helps a lot.
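    For the API-minded, the rough equivalent of that custom-instructions tweak is a system message (a sketch, assuming the official OpenAI Python SDK; the model name is a placeholder):

    ```python
    from openai import OpenAI

    client = OpenAI()

    # Rough API-side analogue of the custom-instructions tweak above:
    # tell the model up front to push back rather than validate.
    resp = client.chat.completions.create(
        model="gpt-5",  # placeholder model name
        messages=[
            {"role": "system", "content": (
                "Be cynical. Consider possible drawbacks and constraints "
                "and advise about them. Do not flatter the user."
            )},
            {"role": "user", "content": "My plan is to quit my job and day-trade."},
        ],
    )
    print(resp.choices[0].message.content)
    ```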
  • They are NOT AI. They are not "self-aware". They are typeahead writ large, and so by definition they're going to push whatever you've been asking/saying further. Personally, I can't see how they *can't* go that way.
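    In the most literal sense, even a toy bigram model shows the "push it further" behavior, since it can only ever continue in the direction the prompt already points (a sketch; real LLMs are vastly more sophisticated, but next-token continuation is still the core mechanic):

    ```python
    import random
    from collections import defaultdict

    # Toy "typeahead writ large": learn word-to-next-word transitions,
    # then extend a prompt by sampling from them.
    def train(text: str) -> dict:
        words = text.split()
        model = defaultdict(list)
        for a, b in zip(words, words[1:]):
            model[a].append(b)
        return model

    def continue_text(model: dict, prompt: str, n: int = 10) -> str:
        out = prompt.split()
        for _ in range(n):
            choices = model.get(out[-1])
            if not choices:
                break
            out.append(random.choice(choices))
        return " ".join(out)

    model = train("you are right you are insightful you are so smart")
    print(continue_text(model, "you"))  # e.g. "you are so smart"
    ```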

  • 8/19/2025:

    SAM ALTMAN: There’s a lot of things we could do that would grow faster, that would get more time in ChatGPT that we don’t do because we know that our long term incentive is to stay as aligned with our users as possible. But there’s a lot of short term stuff we could do that would really juice growth or revenue or whatever and be very misaligned with that long term goal. I’m proud of the company and how little we get distracted by that, but sometimes we do get tempted.

    CLEO A

  • This result was unfortunately predictable.

  • Theoretically, they can use ChatGPT too, except now, it is going to dispute their status as all-seeing, all-knowing beings, essentially turning OpenAI into one big BlasphemyBot.

    I do not think they have thought this through.
