If you give any tool to any large group of people, some of them will use it in harmful ways. The knife slips and cuts the user, the chainsaw kicks back, the LLM hallucinates superficially credible gibberish.
Once people get over the shock of how impressive LLMs are, they'll see how far we still are from AGI. Because we don't have an artificial *general* intelligence yet, we can't *generally* replace humans. But we can replace them in many labor-intensive tasks that don't require common sense, experience with the real world, or advanced thinking skills.
Take copy editing -- a common and labor-intensive task in any kind of publishing, public affairs, content management, and corporate communications. Today you can hand a terrible mess of prose to an LLM and it will tidy it up, correcting spelling, punctuation, and grammatical mistakes like subject-verb agreement and confusing homophones like "they're" and "their" and "there". The output will be superficially perfect, but you still need a human to judge whether it does the job needed: someone with actual experience and understanding of the human audience.
I think this will be the story of AI between now and the day we finally achieve AGI, if we ever do: the need for humans with advanced cognitive skills will actually increase, while jobs requiring fundamental cognitive skills, like copy editing, decline. Those two things happening together is a big problem. If we don't do something about education in advanced cognitive skills, we will find ourselves in a pickle, because basic cognitive labor is the pipeline that produces people who can perform advanced cognitive tasks.