Absolutely correct. And the Newsom metaphor is exactly on target.
I work on AGIs, and it has become abundantly clear to me that LLMs are deeply flawed as a path toward emulating human-level minds. However, I have AGI designs that follow a very different path and do things far better. I will have papers and books coming out on this.
I've viewed the LLM hype as dishonest, perhaps for a rea$on. All that money going into megacomputing to support LLM training and inference is a red herring. Nvidia and the cloud providers all smelled money, so that path gets hyped, but I think future technology will show it is a false path. The human brain has massive parallelism, yet it can do one-shot learning, which is how humans learn, grow, and evolve their knowledge handling; that is quite different from grinding gradient updates over billions of examples. LLM training is a kind of boondoggle.
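To make the contrast concrete, here's a toy sketch of my own (not anyone's actual AGI design, and the `embed` function and `OneShotMemory` class are hypothetical stand-ins): a memory-based learner that acquires a concept from a single exposure, versus the many-pass gradient loop an LLM needs.

```python
import numpy as np

def embed(x: str) -> np.ndarray:
    """Hypothetical fixed encoder: hashes characters into a small vector."""
    v = np.zeros(16)
    for i, ch in enumerate(x):
        v[i % 16] += ord(ch)
    n = np.linalg.norm(v)
    return v / n if n else v

class OneShotMemory:
    """Stores one example per concept; usable immediately, no training loop."""
    def __init__(self):
        self.keys, self.labels = [], []

    def learn(self, example: str, label: str):
        # One exposure is enough: just remember it.
        self.keys.append(embed(example))
        self.labels.append(label)

    def recall(self, query: str) -> str:
        # Return the label of the most similar stored example.
        sims = [float(k @ embed(query)) for k in self.keys]
        return self.labels[int(np.argmax(sims))]

mem = OneShotMemory()
mem.learn("zebra has stripes", "zebra")   # learned in a single shot
print(mem.recall("stripes on a zebra"))   # -> 'zebra'

# By contrast, gradient training needs many examples and many passes:
# for epoch in range(num_epochs):
#     for batch in dataset:               # billions of tokens for an LLM
#         loss.backward(); optimizer.step()
```

Obviously a toy, but it shows the architectural difference I mean: store-and-recall is instant and incremental, while gradient training has to amortize knowledge across an enormous, compute-hungry optimization.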