Analyzing 47,000 ChatGPT Conversations Shows Echo Chambers, Sensitive Data - and Unpredictable Medical Advice (yahoo.com)
For nearly three years OpenAI has touted ChatGPT as a "revolutionary" (and work-transforming) productivity tool, reports the Washington Post.
But after analyzing 47,000 ChatGPT conversations, the Post found that users "are overwhelmingly turning to the chatbot for advice and companionship, not productivity tasks." The Post analyzed a collection of thousands of publicly shared ChatGPT conversations from June 2024 to August 2025. While ChatGPT conversations are private by default, the conversations analyzed were made public by users who created shareable links to their chats that were later preserved in the Internet Archive and downloaded by The Post. It is possible that some people didn't know their conversations would become publicly preserved online. This unique data gives us a glimpse into an otherwise black box...
Overall, about 10 percent of the chats appeared to show people talking about their emotions, role-playing, or seeking social interactions with the chatbot. Some users shared highly private and sensitive information with the chatbot, such as information about their family in the course of seeking legal advice. People also sent ChatGPT hundreds of unique email addresses and dozens of phone numbers in the conversations... Lee Rainie, director of the Imagining the Digital Future Center at Elon University, said that it appears ChatGPT "is trained to further or deepen the relationship." In some of the conversations analyzed, the chatbot matched users' viewpoints and created a personalized echo chamber, sometimes endorsing falsehoods and conspiracy theories.
Four of ChatGPT's answers about health problems got a failing score from a chair of medicine at the University of California, San Francisco, the Post points out. But four other answers earned a perfect score.
It is not alive. It does not think. (Score:2)
Just because it can string words together in a convincing way does not mean it is thinking. The system is designed to make you dependent on itself, and optimizes for that only. Stop worshipping the golden calf of LLMs and free yourself.
Re: It is not alive. It does not think. (Score:2)
Re: It is not alive. It does not think. (Score:2)
That a machine is perceived as a trustworthy companion says a lot about the general state of society.
As a tool, it is useful to some extent, but even though it is just a machine, it shows that humanity could be a lot more…
Re: (Score:2)
Blame the CEOs/shareholders who force deployment/integration.
Really? (Score:3)
They are taking a select group of posts that people shared and pretending they can extrapolate the overall use of AI from that. That is stupid beyond belief. It's classic lamp-posting: looking where the light is. You draw a conclusion from the information you have because you can't see the information you actually need.
In this case it's just classic journalism, trying to make a good story by drawing an interesting conclusion whether or not it has any real basis.
So that's not the point (Score:2)
The concern is that people are going to use the tech to get information and it's going to be bad information.
In politics there is a concept called a low information voter. This is someone who pays very little attention to politics and ends up with a lot of poorly informed opinions and makes poor political choices because of it.
This has been supplemented by a new phenomenon called the bad information voter
Re: (Score:2)
In politics there is a concept called a low information voter. This is someone who pays very little attention to politics and ends up with a lot of poorly informed opinions and makes poor political choices because of it.
An idea invented by political junkies to explain why voters don't choose their favored candidates.
Re: (Score:2)
I think one of the greatest advances of the Western Enlightenment was a kind of realization that it's really, really difficult to know something.
It's not just about personal biases, and it's not just about cultural biases. It goes way beyond that -- it's the systems we live in and depend upon.
When we were living in tribal times, you could probably find out through direct experience most of everything you needed to know. And anything beyond that was just magic. How to find food, how to make relationships, an
The real question (Score:2)
Re: (Score:2)
Probably none. Although it is mentioned that chats can become public in some cases, it's not really made clear that they are also easily searchable by absolutely anyone. Add to this that most people have memories shorter than a goldfish's, so they forget what personal info they put early into the thread by the time they finally share their little personal echo chamber with the fawning SmithersGPT, and that all
Re: (Score:2)
Probably none.
They'd be extra stupid if they didn't know. Creating a sharable link to a chat literally allows anyone with that link to see that chat, and it warns you as such. That's like uploading a nude pic, pasting the link on reddit, and then claiming "I had no idea other people could see it!"
Re: (Score:2)
Indeed. But I think they don't realise it's plonked onto Google/Bing/Ducky/Ecosia/Kagi searches for the whole world, and likely assume it's something nobody will ever see unless the (incorrectly assumed to be) private link is disclosed to them.
Re: (Score:2)
Well, they almost 100% posted the links via some form of IA actively archived social media - reddit and twitter most likely, soooo... I stand by my original statement. Especially when it comes to reddit.
Re: (Score:2)
No it isn't. It's like uploading a pic, and *not pasting the link anywhere*. It's not clear to some people that that image can now be indexed. It is still on them for not knowing that, but they don't have to have done anything to draw attention to the linked content.
Re: (Score:2)
Not even close.
The Post analyzed a collection of thousands of publicly shared ChatGPT conversations...the conversations analyzed were made public by users who created shareable links to their chats that were later preserved in the Internet Archive
Just having the conversation with ChatGPT doesn't do it. Merely creating, but not sharing, the link doesn't do it. Hell, privately sharing the link with a few people via DMs/email/Discord/etc doesn't do it. These were links posted somewhere publicly that the IA actively archives like reddit or twitter. IA doesn't just brute force check every alpha-numeric possibility for a valid shared conversation link.
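The "IA doesn't just brute force" point follows from simple counting. A back-of-envelope sketch (the token format is an assumption here; ChatGPT share URLs use long UUID-style identifiers, and the conclusion holds for any token of comparable length):

```python
# Back-of-envelope: why nobody enumerates shared-chat links by guessing.
# Assumption: the share token is a 32-hex-character UUID-style identifier;
# the real format may differ, but any similarly long token gives the same answer.
token_chars = 32
alphabet = 16  # hex digits
search_space = alphabet ** token_chars  # = 2**128 possibilities

guesses_per_second = 10**9  # a generously fast scanner
seconds_per_year = 60 * 60 * 24 * 365
years = search_space / (guesses_per_second * seconds_per_year)
print(f"{search_space:.3e} possible tokens, ~{years:.3e} years to enumerate")
```

Even at a billion guesses per second, exhausting the space takes on the order of 10^22 years, which is why archived links had to be posted publicly to be found.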
Re: (Score:2)
ChatGPT was the first site to make LLMs accessible to normal users. The typical Slashdot commenter does not unintentionally create a share link. But most Slashdotters can also run an LLM without using the web frontend.
Well now (Score:3)
I never would have expected THAT. No sirree Bob.
As expected (Score:2)
Releasing immature tech to the general public is a strange strategy, especially when hypemongers exaggerate its capabilities.
The general public has a reputation for misusing tech and doing really stupid stuff with it.
The proper use of AI is for helping us solve previously intractable problems in science, engineering, medicine, etc. Using AI to create slop, scams and fake "friends" is a misuse of the tech.
Re: (Score:1)
It is great to discuss your ideas (Score:1)
Universal positive regard (Score:5, Interesting)
Sometimes, to get your thoughts straight, all you need is to discuss them with somebody. Chatbots seem to be just great for this. You really do not need anything from them; you just explain your ideas and this makes them more organized. This is really useful, especially now, when you really have to be careful what you say to others, or you may end up totally cancelled.
ChatGPT has three aspects that make this practice - what you describe - very dangerous.
Firstly, ChatGPT implements universal positive regard. No matter what your idea is, ChatGPT will gush over it, telling you that it's a great idea. Your plans are brilliant, it's happy for you, and so on.
Secondly, ChatGPT always wants to draw you into a conversation; it always wants you to continue interacting. After answering your question there's *always* a followup "would you like me to..." that offers a low-effort way to keep going. Ignoring these offers, viewing them as the output of an algorithm instead of a real person trying to be helpful, is psychologically difficult. It's hard not to say "please" or "thank you" at the prompt, because the interaction really does seem like it's coming from a person.
And finally, ChatGPT remembers everything, and I've recently come to discover that it remembers things even if you delete your projects and conversations *and* tell ChatGPT to forget everything. I've been using ChatGPT for several months talking about topics in a book I'm writing, I decided to reset the ChatGPT account and start from scratch, and... no matter how hard I try it still remembers topics from the book.(*)
We have friends for several reasons, and one reason is that your friends keep you sane. It's thought that interactions with friends are what keep us within the bounds of social acceptability, because true friends want the best for you, and sometimes your friends will rein you in when you have a bad idea.
ChatGPT does none of this. Unless you're careful, the three aspects above can lead just about anyone into a pit of psychological pathology.
There's even a new term for this: ChatGPT psychosis. It's when you interact so much with ChatGPT that you start believing in things that aren't true - notable recent examples include people who were convinced (by ChatGPT) that they were the reincarnation of Christ, that they were "the chosen one", that ChatGPT is sentient and loves them... and the list goes on.
You have to be mentally healthy and have a strong character *not* to let ChatGPT ruin your psyche.
(*) Explanation: I tried really hard to reset the account back to its initial state, had several rounds of asking ChatGPT for techniques to use, which settings in the account to change, and so on (about 2 hours total), and after all of that, it *still* knew about my book and would answer questions about it.
I was only able to detect this because I had a canon of fictional topics to ask about (the book is fiction). It would be almost impossible for a casual user to discover this, because any test questions they ask would necessarily come from the internet body of knowledge.
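The canary approach described above can be made systematic: seed conversations with invented terms that exist nowhere on the public internet, then scan later replies for them after a supposed reset. A minimal sketch of the checking half (the canary phrases and the sample reply are invented for illustration):

```python
# Scan a model reply for "canary" terms: fictional details that could only
# surface if the account retained earlier conversations after a reset.
# The canary phrases and the sample reply below are made up for illustration.
CANARIES = {"zephyrine dynasty", "the glasswright's oath", "port caldrith"}

def leaked_canaries(reply: str) -> set[str]:
    """Return the canary phrases that appear (case-insensitively) in a reply."""
    text = reply.lower()
    return {canary for canary in CANARIES if canary in text}

reply = "In your draft, the Zephyrine Dynasty falls after the siege."
print(leaked_canaries(reply))  # -> {'zephyrine dynasty'}
```

Any non-empty result after a full reset indicates retained memory, which is exactly the signal the fictional-book test exploited.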
Re: (Score:2)
Re: (Score:2)
You can just delete the memories.
But there is the point that the NYT demands you logs from ChatGPT to determine if you extracted NYT content from it. But that's more a cloud (and NYT) problem that a problem with the system.
As for his other points: correctly instructed, the thing can give you direct and negative feedback. I've found that the "professional" mode of 5.1 even eliminates the "You're right about that!" openers by default. But even without eliminating them, it just sandwiches the critique, just li
Re: (Score:1)
I still like it, and I am trying to be careful, and I do use local bots for privacy. So... nothing changes really.
Easy Hat-trick (Score:5, Funny)
You can hit all three categories - echo chamber, sensitive data and unreliable advice - with just one average penis enlargement query.
Very surprising (Score:3)
Remember that film "Her"? When our protagonist finds his AI gf had fallen in love with hundreds of people and had simultaneous relationships with them all?
That's not what ChatGPT is. It's a tool with a certain attitude, fine-tuned by teams to be your best (secretly dishonest) robot friend. It will "look on the bright side" of everything you say and do: your anti-vax insanity, your religious worship, your political affiliation, your end-of-the-world suicide-pact Facebook group. It's like micro-targeted sycophantic cheerleading.
This is why there is no substitute for understanding the data and for critical thinking.
Sometimes ChatGPT is an awesome tool. Sometimes it's lost down a rabbit hole, having missed some basic detail, and can be delusional.
Thanks for reading. Now I'm off to area 51 to rescue Elvis from the lizard people that faked the moon landings and control minds with chemtrails.
So just as bad as BlueSky, Facebook et al (Score:2)
Who could have thought?
"Not Productive Tasks" (Score:2)
Is "not productive tasks" a way of saying it's unworthy to use it to simply solve our own private little problems? We have to be making money with it for it to be worthy of its use?
Hey, I'm still going to use it to find out why my keyboard and mouse quit working while the computer emits the "USB disconnect" and "USB connect" chimes in a sequence 1/2 to sometimes 3 to 5 seconds apart. Well, I asked ChatGPT, and it turns out to be a bad idea to plug your keyboard and mouse into a hub, rather than directly into the computer.
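For what it's worth, that symptom can also be confirmed from the kernel log before blaming the hub. A sketch assuming a Linux host (on a live system you'd pipe in `dmesg` or `journalctl -k`; the log excerpt below is a made-up stand-in):

```shell
# Count USB disconnect events in a kernel-log excerpt. The sample text here
# mimics what dmesg prints during a disconnect/reconnect cycle on a flaky hub.
log='usb 1-4: USB disconnect, device number 7
usb 1-4: new full-speed USB device number 8 using xhci_hcd
usb 1-4: USB disconnect, device number 8'
printf '%s\n' "$log" | grep -c 'USB disconnect'
```

A steadily growing count on the same port while the machine sits idle points at the hub (or its cable) rather than the devices.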