Comment Re: psychiatrist for AI (Score 1) 64
This is "absolutely without question" incorrect. One of the most useful properties of LLMs is their demonstrated in-context learning capability, where a well instruction-tuned model can learn from conversations and information provided to it without its weights being modified.
Your ignorance is showing. The model does not change as it's used. Full stop. Like many other terms related to LLMs, "in-context learning" is deeply misleading. Strip away the wishful thinking and it boils down to "changes to the input cause changes to the output", which is obvious and not at all interesting.
Who cares?
People who care about facts and reality, not their preferred science-fiction delusion. I highlight the deterministic nature of the model proper, and where the random element is introduced in the larger process, to dispel some of the typical magical thinking you see from ignorant fools like you. The model does not and cannot behave in the ways that morons like you imagine.
This is pure BS: key-value matrices are maintained throughout.
Do you get off on humiliation? While some caching is done as an optimization, it has absolutely no effect on the output. Give the same input at any point to a completely different instance of the model and you'll get exactly the same results.
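The "caching is only an optimization" point can be shown with a toy analogy (this is not a real transformer KV cache; the `encode_*` functions and the `31 * state + ord(t)` update are made up for illustration): reusing per-prefix state produces bit-identical outputs to recomputing every prefix from scratch.

```python
def encode_no_cache(tokens):
    # Recompute the running state from scratch for every prefix.
    outputs = []
    for i in range(1, len(tokens) + 1):
        state = 0
        for t in tokens[:i]:
            state = 31 * state + ord(t)  # deterministic toy "layer"
        outputs.append(state)
    return outputs

def encode_with_cache(tokens):
    # Reuse the running state (the "cache") instead of recomputing.
    outputs, state = [], 0
    for t in tokens:
        state = 31 * state + ord(t)  # same deterministic update
        outputs.append(state)
    return outputs

# Same input, with or without the cache, same output:
assert encode_no_cache("hello") == encode_with_cache("hello")
```

The cached version does O(n) work instead of O(n^2), but the outputs are identical, which is the whole point: caching changes cost, not results.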
Again with the determinism nonsense.
LOL! You think that the model isn't deterministic? Again, the only thing the model does is produce a list of next-token probabilities. It does this deterministically. The only non-deterministic part here is the final token selection, which is done probabilistically.
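In code form (a minimal sketch; the logits are made-up numbers standing in for a model's forward-pass output): mapping logits to a probability distribution is a pure, deterministic function, and randomness enters only in the final sampling step via the RNG.

```python
import math
import random

def softmax(logits):
    # Deterministic: the same logits always yield the same distribution.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_token(probs, rng):
    # The only stochastic step: draw a token index according to probs.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.5]            # stand-in for a model's output
probs = softmax(logits)             # identical on every call
token = sample_token(probs, random.Random(0))
```

Fix the RNG seed and even the "random" part becomes reproducible, which is exactly why the model proper can't be the source of any non-determinism.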
That you believe otherwise suggests that you're either even more ignorant than I thought possible, or you think that LLMs or NNs are magical. What a fucking joke you are.
These word games are pointless.
The only one playing 'word games' here is you, ignorant troll.