I see this less as people willingly referring to LLMs for that kind of information, and more as simply an obvious side effect of shoving it in people's faces everywhere.
Most of us here are techies; we know what the "AI overview" is and what that implies about its factuality and reliability. Even so, I sometimes fall into the trap of reading it myself, absent-mindedly, because it's so front and center on any browser not specifically configured to get rid of it. Most people just Google something and look at whatever turns up. That used to be an okay approach, too, before companies decided that LLM slop should push out references to actually worthy sources.
I put 99% of the blame for this on Google & Co. and laugh in the face of their promises to "improve accuracy". No, they're not going to turn a glorified Markov chain generator into an adequate replacement for authoritative sources. They didn't understand the problem, and apparently they don't understand the basics of their own technology.