Yeah, I'm hung up on that too. You can come up with some outrageously huge numbers for mass and angular velocity, but once you multiply them by zero distance... I'm missing something.
> the outer edge of the mass exceeding the speed of light
That intuitively makes sense, but I thought part of the black hole cheat is that it doesn't have an edge. I thought they were literally singularities, with a circumference of zero. Apparently not the case?
How a thing with a circumference of zero could meaningfully "rotate" is beyond me, but I thought this (and many other suspected properties of rotating black holes) was supposed to be beyond my ignorant layman understanding!
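For what it's worth, the arithmetic I keep doing in my head is the classical point-particle formula, which is presumably exactly where it goes wrong:

$$ L = I\,\omega = m r^{2} \omega \;\to\; 0 \quad \text{as } r \to 0 $$

whereas, as far as I can tell, the spin of a Kerr black hole isn't the $m r^{2}\omega$ of some rotating lump at all: $J$ is a parameter of the spacetime itself, read off from the frame dragging far outside the hole, and it's bounded by the mass ($J \le GM^{2}/c$) rather than by any radius going to zero.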
Well, yes -- the lies and the exaggerations are a problem. But even if you *discount* the lies and exaggerations, they're not *all of the problem*.
I have no reason to believe this particular individual is a liar, so I'm inclined to entertain his argument as being offered in good faith. That doesn't mean I necessarily have to buy into it. I'm also allowed to have *degrees* of belief; while the gentleman has *a* point, that doesn't mean there aren't other points to make.
That's where I am on his point. I think he's absolutely right that LLMs don't have to be a stepping stone to AGI to be useful. Nor do I doubt they *are* useful. But I don't think we fully understand the consequences of embracing them and replacing so many people with them. The dangers of thoughtless AI adoption arise in that very gap between what LLMs do and what a sound step toward AGI ought to do.
LLMs, as I understand them, generate plausible-sounding responses to prompts; in fact, with the enormous datasets they have been trained on, they sound plausible to a *superhuman* degree. The gap between "accurately reasoned" and "looks really plausible" is a big, serious gap. To be fair, *humans* do this too -- satisfy their bosses with plausible-sounding but not reasoned responses -- but the fact that these systems are better at bullshitting than humans isn't a good thing.
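To make that gap concrete, here's a toy sketch -- nothing like a real LLM's architecture, and every word frequency below is made up -- of a generator that only ever asks "what usually comes next?". Notice that nothing in the loop ever asks "is this true?":

```python
import random

# Toy "model": made-up next-word frequencies, standing in for statistics
# learned from a huge corpus. Purely illustrative; not any real system.
next_word_probs = {
    "the":     {"capital": 0.6, "moon": 0.4},
    "capital": {"of": 1.0},
    "of":      {"france": 0.7, "mars": 0.3},
    "france":  {"is": 1.0},
    "is":      {"paris": 0.5, "lyon": 0.5},  # both sound plausible; only one is correct
}

def generate(start, max_words=6):
    """Emit whatever 'sounds likely' next; never checks whether it's correct."""
    words = [start]
    for _ in range(max_words):
        choices = next_word_probs.get(words[-1])
        if not choices:
            break
        # Pick a continuation in proportion to how plausible it looks.
        words.append(random.choices(list(choices), weights=list(choices.values()))[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the capital of france is lyon" -- fluent, confident, wrong
```

Scale that same idea up by billions of parameters and a crawl of the whole web, and the output gets fluent enough to pass for reasoning -- which is exactly the gap I'm worried about.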
On top of this, the organizations developing these things aren't in the business of making the world a better place -- or if they are in that business, they'd rather not be. They're making a product, and to make that product attractive their models *clearly* strive to give the user an answer that he will find acceptable, which is also dangerous in a system that generates plausible but not-properly-reasoned responses. Most of them rather transparently flatter their users, which sets my teeth on edge, precisely because it is designed to manipulate my faith in responses which aren't necessarily defensible.
In the hands of people increasingly working in isolation from other humans with differing points of view, systems which don't actually reason but are superhumanly believable are, in my opinion, extremely dangerous. LLMs may be the most potent agent of confirmation bias ever devised. Now, I do think these dangers can be addressed and mitigated to some degree, but the question is whether they will be, in a race to capture a new and incalculably valuable market where decision-makers, both vendors and consumers, aren't necessarily focused on the welfare of humanity.
An adversary can coerce a proprietary software producer to compromise the code. That's what we're going to see here.
An adversary cannot time-travel to when a protocol was invented, and compromise the protocol. (Though I guess the NSA can come kind of close to that, by "helping" as it's being developed, w/out the time-travel part.) That's what we're not going to see here.
Ergo, proprietary apps will remain unable to provide secure messaging, but secure messaging will remain available to people who want it.
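To make that last point concrete: the primitives behind secure messaging are public and live in open libraries, so no vendor's cooperation is required. A minimal sketch using the PyNaCl bindings to libsodium -- illustrative only; a real messenger adds identity verification, key rotation, forward secrecy, and so on:

```python
from nacl.public import PrivateKey, Box  # open-source libsodium bindings (pip install pynacl)

# Each party generates a keypair and shares only the public half.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# Bob decrypts with his private key and Alice's public key; no third party involved.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```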
Is your job running? You'd better go catch it!