ChatGPT Diminishes Idea Diversity in Brainstorming, Study Finds 52

A new study published in Nature Human Behaviour reveals that ChatGPT diminishes the diversity of ideas generated during brainstorming sessions. Researchers from the University of Pennsylvania's Wharton School found [PDF] that while generative AI tools may enhance individual creativity, they simultaneously reduce the collective diversity of novel content.

The study builds on previous research examining ChatGPT's impact on creativity. Its findings align with separate research published in Science Advances suggesting that AI-generated content tends toward homogeneity. This phenomenon mirrors what researchers call the "fourth grade slump" in creativity, referencing earlier studies on how structured approaches can limit innovative thinking.
Comments Filter:
  • by Big Hairy Gorilla ( 9839972 ) on Thursday May 15, 2025 @09:17AM (#65378433)
    I have a reference to a U of T study showing approximately the same thing. In controlled experiments, they showed a clear trend toward agreeable, consensus-friendly outcomes when teams used AI to do work. They showed that the use of AI creates a dependency: people then tend to try less and get the models to do more of the creativity and overall thinking. It's not hard to see if you've used any LLM for more than a few questions. Also, my personal hobby horse: people are lazy. So this is a logical outcome of LLM use.

    Please don't make me dig out the reference, it's probably findable on the web, but for any pedantic cunts out there, I'll do it if required.
    • by burtosis ( 1124179 ) on Thursday May 15, 2025 @09:27AM (#65378479)
      When I fed your comment to ChatGPT it was largely in agreement and praised my intellectual acumen.
    • In my experience, people stop thinking once a reasonable-sounding option is presented. If people get to ask chatGPT first, it's no wonder they converge on similar generic options, whether actually appropriate or not.
      • In my experience, people stop thinking once a reasonable-sounding option is presented. If people get to ask chatGPT first, it's no wonder they converge on similar generic options, whether actually appropriate or not.

        This idea of short-circuiting further reasoning upon detection of a minimally acceptable answer might be true. However, it doesn't have to be true. In fact, many truly creative people think outside the box and naturally question acceptable answers, leading to the creation of non-obvious and innovative answers. I imagine that these truly creative people would remain creative even with ChatGPT. I don't think this is a disadvantage that is inherent or specific to ChatGPT. For example, many non-creative or

        • Didn't mean to imply it's inherently the fault of the LLM. Anywhere the suggestion comes from is valid; the problem is the availability of easy answers. There might be some additional problems with how authoritative people believe an LLM to be on any given subject. Those rare creative people might have to argue with the combined sum of all human knowledge, rather than Craig from down the hall. There's a saying among physicists that science moves forward one coffin at a time. It's hard to give new ideas any traction.
  • by Anonymous Coward

    Is this a fancier way to say "people play dumber when chatgpt is around because asking a computer is so much easier than thinking for themselves"?

    • by Mr. Dollar Ton ( 5495648 ) on Thursday May 15, 2025 @09:26AM (#65378473)

      Approximately true.

      There was this experiment run years ago. Someone decided to test whether "dogs are smarter than wolves", so they put a wolf in a room with a piece of meat on a string. The wolf jumped, climbed, whatever, and got the meat. The dog would just sit there and look at the meat, and at the researcher, with wet eyes. So the researcher wrote a paper: "dogs dumb, wolves smart".

      Then another boffin did the same experiment, but they had a larger lab, so they removed themselves from the room with the dog. Result: without the researcher present, the dog jumps and climbs just like the wolf, but it looks cutely at the researcher when one is present, hoping to get the boffin to cut down the meat, as humans tend to do when a dog looks at them cutely.

      • but looks cutely at the researcher when such is present, hoping to get the boffin cut down the meat, as humans tend to do when a dog looks at them cutely.

        That's almost as intelligent as cats, who clearly understand that you should never do anything when humans are present or else they will learn about your abilities and expect you to do things. Obviously the dog knows that the human might get upset if the dog eats^W gets caught eating the human's food.

      • I would interpret that as the dog being smarter, and actually very human-like.
        Perhaps not "smarter", but well adapted to the environment he lives in.
        I think manipulating other people into doing things for you is one of the most basic human instincts.
        So IMHO, the dog is trying to get the big dumb human to cut the string and feed him.
        Very human-like.
        Interesting post.
  • by devslash0 ( 4203435 ) on Thursday May 15, 2025 @09:34AM (#65378499)

    In other words, ChatGPT makes idiots appear smarter and discards the more creative, intelligent ideas of the actually smarter people, so we end up with a more balanced curve but an overall worse outcome.

    • by Big Hairy Gorilla ( 9839972 ) on Thursday May 15, 2025 @11:00AM (#65378683)
      Seems apt. AFAIK, the LLMs provide a kind of weighted-average answer.
      So exactly what you say: it helps dummies of below-average intelligence reach up to the average, and it weighs down people of above-average intelligence, because they spend more effort vetting the outputs and then likely reject (some of) them.
      It's a race to the middle.
    • That's one aspect, but it's not the only one. There's also an element reminiscent of Sauron putting his power into the One Ring: when you outsource your thinking, you're going to get dumber, and if that tool goes away (or gets even shittier than it already is), you're screwed.
  • The faster you come up with a solution, the less of your own intelligence you use; you instead rely on some remembered, proven solution bent to fit your current problem. So quickly settled solutions need to be thought over and evolved if they are to work.
  • "... generative AI tools may enhance individual creativity ..."

    I'd be surprised if they actually do that. I think it more likely that the result of a human-and-AI "collaboration" is more creative than the single-person output. But that same enhancement probably occurs when you evaluate human-only collaborations - that's why we have brainstorming sessions.

    Also, there are longer-term effects to consider. Does habitual reliance on AI effectively weaken creative muscles? Given my experience with various kinds of labour-saving aids - both physical and intellectual - I'm fa

  • by MrDiablerie ( 533142 ) on Thursday May 15, 2025 @10:55AM (#65378669) Homepage
    It makes sense; LLMs are great at transforming information, but not at coming up with the information itself.
    • by dvice ( 6309704 )

      That is hard to know, because the popular LLMs are quite heavily censored. They leave out ideas which they could come up with but aren't allowed to. There should be no limitations in brainstorming, as the main point is to gather as many ideas as possible, even impossible ones, since one idea can spark a better idea in someone else.

      And like the summary says, despite these limitations, AI is still better than an individual. Perhaps if the artificial limitations were removed, it would be better than a g

      • by allo ( 1728082 )

        There are enough uncensored ones.

        But have a look at how an LLM generates text. The most straightforward method is completely deterministic, always choosing the most likely token. The variation comes from different strategies for sampling from the full (or truncated) probability distribution over the next token. If there is a strong bias toward a certain direction, that direction is sampled more often than others. And as an LLM does more than just text completion, it doesn't break over choosing synonyms and similar,
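        The decoding strategies mentioned above can be sketched in a few lines. This is a toy illustration with made-up logits for a four-token vocabulary, not any particular model's decoder: greedy decoding always picks the argmax (deterministic), while temperature sampling draws from the scaled distribution, and a strong bias (a dominant logit) still gets picked most of the time.

        ```python
        import math
        import random

        def softmax(logits, temperature=1.0):
            """Turn raw logits into a probability distribution; lower temperature sharpens it."""
            scaled = [x / temperature for x in logits]
            m = max(scaled)  # subtract max for numerical stability
            exps = [math.exp(x - m) for x in scaled]
            total = sum(exps)
            return [e / total for e in exps]

        def greedy(logits):
            """Deterministic decoding: always return the index of the most likely token."""
            return max(range(len(logits)), key=lambda i: logits[i])

        def sample(logits, temperature=1.0, rng=random):
            """Stochastic decoding: draw a token index from the temperature-scaled distribution."""
            probs = softmax(logits, temperature)
            r = rng.random()
            acc = 0.0
            for i, p in enumerate(probs):
                acc += p
                if r < acc:
                    return i
            return len(probs) - 1  # guard against floating-point rounding

        logits = [2.0, 1.0, 0.5, 0.1]  # toy scores for a 4-token vocabulary
        print(greedy(logits))  # always 0: greedy output never varies
        ```

        With temperature near zero the sampled distribution collapses onto the argmax, which is why low-temperature generations from the same prompt look so similar.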

  • Now I don't have to smile and nod while I watch you articulate your nonsense.
  • Brainstorming with an LLM can be helpful if you're feeling stuck or have writer's block, but you only get *one* set of ideas. Yes, you can regenerate and get a few more ideas, but ultimately, there is a basically fixed set of ideas and you only get variations of them. LLMs pick up patterns. This means that even when the patterns are nothing like the training data, they are stuck near a certain set of patterns.

    When you use them with your own creativity, that doesn't hurt. You provide the idea and the LLM fle

  • This isn't a fair study of AI because it only looks at LLMs that have been neutered in order to give safe answers. LLMs will only be as creative as their training data and/or the limitations put in place by the controlling company. When guardrails are put in place, the LLM will filter out any ideas considered radical or fringe. Some of the best ideas don't even get a spot at the starting line with current LLMs because they may be considered too "outside of the box". This isn't a problem with A
