It's an LLM. It doesn't "think", or "formulate strategy".
Correct.
It optimizes a probability tree based on the goals it is given
Nonsense. They do not and can not do this.
They do not operate on facts, concepts, goals, or any other higher-level construct. These things operate strictly on statistical relationships between tokens. That is all they do, because that is all they can do. We know they do not plan because there is no mechanism by which they could plan. Even if they could somehow form plans, their basic design prevents them from retaining them beyond the current token.

Remember that the model only generates probabilities for the next token; the actual selection is stochastic. The model itself, in contrast, is completely deterministic. It does not change as it is being used, and no internal state is retained between tokens (save some optimizations, but those don't affect the output). It will always produce the same set of next-token probabilities for a given input.
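To make the deterministic-model / stochastic-sampling split concrete, here is a toy sketch (nothing like a real LLM; the probability table and names are made up for illustration). The "model" is a pure function from a context to a next-token distribution; all the randomness lives in the sampler outside it:

```python
import random

# Hypothetical fixed probability table standing in for a trained model.
PROBS = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "dog"): {"sat": 0.5, "ran": 0.5},
}

def model(context):
    # Deterministic: the same context always yields the same distribution,
    # and nothing is stored between calls (no memory, no "plan").
    return PROBS[tuple(context)]

def sample(dist, rng):
    # The stochastic part: pick one token weighted by the probabilities.
    tokens, weights = zip(*dist.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
context = ["the"]
for _ in range(2):
    context.append(sample(model(context), rng))
print(context)
```

Run it twice with different seeds and you get different text, yet `model` itself never changed and never remembered anything; that is the whole trick.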
For the same reason, it cannot "understand" anything, or care about, or contemplate anything about the end (or continuation) of its own existence.
Correct. These are, after all, no more than simple, if very large, functions. No one in their right mind worries about, for example, an Excel spreadsheet contemplating its own existence. This is no different.
All the "guardrails" can honestly do is try to make unethical, dishonest, and harmful output statistically unappealing in all cases. That would be incredibly difficult even with a well-curated training set, and I honestly do not believe any major model can claim to have one of those.
It's a fool's errand. All these do is generate text, one token at a time. Even though it appears that they can follow instructions, they can't actually do that in any meaningful way; they're still just producing output consistent with the training data. You can argue that this doesn't matter as long as the output is consistent enough, but I think it's important to understand the real limitations. People already think these are magic brains, and that can lead to them being used in dangerous ways.