Statistical matching, which is all AI does, is 100% incapable of thinking outside the math that drives it. More accurately, AI is 100% incapable of thinking. End of story.
Indeed. But, you know, I am beginning to think that most people (outside of the roughly 10-15% independent thinkers, plus the additional 5% or so that can be convinced by rational arguments) are actually incapable of rational thinking, or actively choose not to do it. People that do not understand the difference between an implication and a correlation. People that think somebody doing one thing makes them responsible for something entirely different, with no causal chain present, just a fuzzy association. MAGAs that are keyword-operated and cannot do anything beyond reacting to simple keyword triggers. And all the people that can only do yes/no and do not understand that most things come in degrees and shades of grey.
For these people, who can essentially only do unreliable statistical correlation instead of actual reasoning, an LLM may indeed look like it has insight, because they do not understand what insight actually is. And the LLM has a far larger "knowledge" base.