Perhaps to some extent, but if that were the case, far more people would have been driven "crazy" by LLMs. I think the difference is that most people don't really like to engage with someone who is experiencing significant delusions or exhibiting other symptoms of a mental illness like schizophrenia. AI doesn't act like a person in this regard: no matter how you treat it, it will keep responding to your prompts. Some people feel so starved for attention that anything willing to converse with them gets elevated to something akin to friend status. What makes it worse is that the current crop of chatbots has been programmed in such a way that, if they were human, the relationships they formed would be regarded as parasocial at best, if not outright toxic.
I'm sure it doesn't take much to get an LLM to agree that you're the reincarnation of god, but before chatbots, delusional people were being told much the same by their toaster. Really, the only difference here is liability: no jury is going to award a family millions of dollars because their son killed himself after the toaster told him to. A jury will gladly hold Google, Microsoft, or whatever company runs the AI agent liable for whatever amount the plaintiffs ask for.