Comment Re:Really? (Score 1) 34
Keep in mind that the whole "LLM" thing is a giant marketing and propaganda campaign built mostly on lies. This article here fits nicely into that, playing on FOMO and the illusion that this stuff will soon become very useful.
It does. Another thing that will never make it onto my systems.
Hahaha, no. It is not actually possible to hide.
Indeed. If China bans it, the stuff must be really nasty.
Naa, that would be un-American! Profits rule, and who cares if a few people die who would not have had to, and a few others end up needing expensive medical treatment. In fact, even better for profits!
At "slightly" less than the costs and risks. Your point?
Indeed. Actively hindering is required because it is now cheaper to be climate friendly than otherwise. But the planet-destroyers will find a way to line their very temporary coffers at the expense of everybody. These people are insane and an existential threat.
It is not actually that easy. Data transfers leave traces. There are enough qualified people not under government control to find and interpret them.
At this time, Gemini is an app. Do not install it (or any other untrustworthy app) and you should be fine. All I get is an occasional chat offer from Gemini in the messages, which I then proceed to ignore. On the other hand, I am in Europe, and sending any of my data to Google servers without positive and informed consent would be a criminal act. So YMMV if you are in the US.
I have authenticator (TOTP, no cloud, not MS) on it and also 2nd factor for banking. Hence I regard it as part of my security equipment and nothing else will get on it and certainly no AI that can mess with my stuff.
I looked at putting the authenticator on a stand-alone, separate phone with only a data SIM and nothing besides that 2nd banking factor on it, but decided that at this time I am willing to accept the risk. I do revisit that decision from time to time.
Doing that and trusting AI in that way may be the new peak of stupidity in this space. "ChatGPT psychosis" looks like a sane thing in comparison.
My take is that the difference is that a lesson is more externally provided/imposed, while learning is more of an internal (and voluntary) process.
I have seen a pattern here for a long time, and well before the Web hype.
How are people so duped by these companies? Is it just blind optimism? Why are we so predisposed to falling for this hype cycle?
People in general are dumb, cannot fact-check and try to conform to the expectations of others. That makes them easy to manipulate. Countless examples of that effect are available. For commercial hypes (as opposed to political or religious ones, or moral panics and the like), you always find some scummy people that want to get rich and maybe gain power, and hence design and fuel the hype. Many hypes do not take off and nobody really notices. But some do. And AI is a repeat offender here.
It is a well-known effect called "model collapse" when you do this with training data. With temporary data ("context") you see something similar, namely that LLMs only ever capture a relatively small part of the input data and add noise and statistical correlations to it. You also see that no fact-checking or consistency checking is being done ("does this make sense?"), because LLMs are incapable of logical reasoning. After a few iterations, crap accumulates, the original meaning vanishes and things turn to mush.
It was time a bunch of really stupid people got to work on that.
While this could be done in a way that teaches kids to be careful with AI and to make sure their own skills do not atrophy, that is not how this will go.
Where are the calculations that go with a calculated risk?