Re:It's entirely their own fault
planned to launch in
With their performance on Ariane 6, I have serious doubts that they can do that in five years.
I hope that you either find peace, or otherwise that your inevitable suicide is painless.
It's like they've reached the part of game theory where they don't expect anyone to return, therefore they play the "final move" to screw the opponent. They've gone so far from trying to attract customers that they've forgotten it's even possible for a customer to ever return again, much less that they might want to. So they're basically just down to strip-mining their customer base for a few more pennies of profit.
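That "final move" bit is textbook, by the way: in a one-shot prisoner's dilemma, with no future rounds to protect, screwing the other side strictly dominates. A toy sketch in Python, with standard textbook payoffs of my own choosing:

# One-shot prisoner's dilemma: once nobody expects repeat business,
# defection pays more no matter what the other side does.
# (Payoff numbers are the usual textbook ones, picked for illustration.)
PAYOFF = {  # (my_move, their_move) -> my payoff
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move):
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, their_move)])

for their_move in ("cooperate", "defect"):
    print(their_move, "->", best_response(their_move))  # "defect" both times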
Yeah, it's time for CEOs and middle management to be replaced with AI. Nobody would notice the difference, they already lie and hallucinate like bad managers.
They rejected the concept of reusable rockets years ago when Falcon 9 was starting to eat everyone's lunch. Actually they didn't just reject it, they ridiculed it, saying that then they would have to fire all their rocket builders, think of all the poor unemployed rocket builders! You know, the ones who haven't been building too many rockets the past few years because Ariane 6 was fucking years late. And it's still expendable.
Remember back in the early days when they were pretending that they weren't an illegal taxi service, but rather were just normal people "sharing a ride" to where they wanted to go anyway, and so called it "rideshare" to avoid prosecution?
Pepperidge Farm remembers.
My level of pessimism about things like regrowing limbs has declined a lot in recent years. I mean, there's literally a treatment to regrow whole teeth in human clinical trials right now in Japan, after having passed earlier trials with mice and ferrets.
In the past, "medicine" was primarily small molecules, or at best preexisting proteins. But we've entered an era where we can create arbitrary proteins to target other proteins, to control gene expression, or to do all sorts of other things; the level of complexity open to us today is vastly higher than it used to be. And at the same time, our understanding of the machinery of bodily development has also been taking off. So it will no longer come as such a huge shock to me if we get to the point where we can regrow body parts lost to accidents, cancer, and so on.
Whether someone is "curable" or not doesn't affect the GP's point. A friend of mine has ALS. He faced nonstop pressure from doctors to choose to kill himself. Believe it or not, being diagnosed with an incurable disease doesn't suddenly make you wish to not be alive. He kept pushing back (often biting back what he wanted to say, which is "If I was YOU, I'd want to die too."), fighting doctors over his treatment (for example, their resistance to cough-assist machines, which have basically stopped him from drowning in his own mucus), implementing extreme backup systems for his life support equipment (he's a nuclear safety engineer), and struggling nonstop to get his nurses to do their jobs right and pay attention to the warning sirens (he has a life-threatening experience once every couple of months thanks to them, sometimes to the point of passing out from lack of air).
But he's gotten to see his daughter grow up, and she's grown up with a father. He's been alive for something like 12 years since his diagnosis, a decade fully paralyzed, and is hoping to outlive the doctor who told him he was going to die within a year and kept pushing him to die. He's basically online 24/7 thanks to an eye tracker, recently resumed work as an advisor to a nuclear startup, and is constantly designing (in CAD**) and "building" things (his father and paid labour function as his hands; he views the world outside his room through security cameras).
He misses food and getting to build things himself, and has drifted apart from old friends due to not being able to "meet up", but compared to not being alive, there was just no choice. Yet so many people pressured him over the years to kill himself. And he finds it maddening how many ALS patients give in to this pressure from their doctors, believing that it's impossible to live a decent life with ALS, and choose to die even though they don't really want to.
And - this must be stressed - medical institutions have an incentive to encourage ALS patients to die, because long-term care for ALS patients is very expensive; someone must be on call 24/7. So while they present it as "just looking after your best interests", it's really in their interest for patients to choose to die.
(1 in every 400 people will develop ALS during their lifetime, so this is not some sort of rare occurrence. As a side note, for a disease this common, it's surprising how little funding goes into finding a cure.)
** Precision mouse control is difficult for him, so he often designs shapes in text, sometimes with Python scripts if I remember correctly.
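(For anyone wondering what "designing shapes in text" looks like: here's a minimal sketch using the open-source CadQuery library, purely as an example; I don't know which tool he actually uses.)

# Minimal text-only CAD: a 60x40x5 mm mounting plate with four 4 mm
# bolt holes and rounded corners, defined entirely in Python.
# CadQuery is my example here; his actual software isn't stated.
import cadquery as cq

plate = (
    cq.Workplane("XY")
    .box(60, 40, 5)                      # base plate
    .faces(">Z").workplane()             # sketch on the top face
    .rect(50, 30, forConstruction=True)  # construction rectangle for hole centers
    .vertices()                          # its four corners...
    .hole(4)                             # ...each get a 4 mm through-hole
    .edges("|Z").fillet(2)               # round the vertical corner edges
)

cq.exporters.export(plate, "plate.step")  # hand the file to whoever does the building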
I don't think there's anything wrong with those sorts of general observations (I mean, who remembers dozens of phone numbers anymore now that we all have smartphones?), but that said, this non-peer-reviewed study has an awful lot of problems. We can focus on the silly, embarrassing mistakes (like how their methodology to suppress AI answers on Google was to append "-ai" to the search string, or how the author insisted to the press that AI summaries mentioning the model used were a hallucination, when the paper itself says which model was used). Or the style issues: how deeply unprofessional the paper is (such as the "how to read this paper" section), how hyped-up the language is, or the (nonfunctional) ploy to try to trick LLMs summarizing the paper.

Or we can focus on the more serious stuff: the sample size of the critical Section 4 was a mere 9 people, all self-selected, so basically zero statistical power; there's so much EEG data that false positives are basically guaranteed, yet they say almost nothing about the FDR correction needed to control for them; essay writers were given far too little time and put under time pressure, ensuring that LLM users would basically copy-paste rather than engage with the material; they misunderstand the implications of dDTF; and there was a significant blinding failure, with the teachers rating the essays able to tell which were AI-generated (combined with the known bias where content believed to be AI-created gets rated lower), and no normalization for which essays they believed to be AI. And so on.
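To make the multiple-comparisons point concrete, here's a toy simulation (my own numbers, not theirs): run a thousand t-tests on pure noise with n=9 per group and you get dozens of "significant" hits; a Benjamini-Hochberg FDR correction makes them vanish.

# Toy demo of why mass EEG comparisons need FDR control. Everything here
# is illustrative; the test counts and data are made up, not the paper's.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_tests = 1000  # e.g., channel x frequency-band x condition comparisons

# Both "groups" are drawn from the same distribution (n=9 each, like the
# paper's Section 4), so every significant result is a false positive.
p_values = np.array([
    stats.ttest_ind(rng.normal(size=9), rng.normal(size=9)).pvalue
    for _ in range(n_tests)
])

print("Uncorrected hits at p<0.05:", (p_values < 0.05).sum())  # ~50 expected

reject, _, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print("Hits after Benjamini-Hochberg FDR:", reject.sum())  # usually 0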
But honestly, I'd say my biggest issue is with the general concept. They frame everything as "cognitive debt"; that is, any decline in brain activity is treated as adverse. The alternative viewpoint - that this represents an increase in *cognitive efficiency*, removing extraneous load and freeing the brain to focus on core analysis - is never considered.
To be fair, I've briefly talked with the lead author, and she took the critiques very well and was already familiar with some of them (for example, she knew her sample size was far too small), and was frustrated with some of the press coverage hyping it up like "LLMs cause brain damage!!!", which wasn't at all what she was trying to convey. Let's remember that preprints like this haven't yet gone through peer review, and - in this case - I'm sure she'll improve the work with time.
Not even a Google search. Literally just talking to it before giving the toy to their children, to see whether, when asked to talk about something harmful, it does so or refuses. Are the parents in your mind too tired to literally speak?
Spoon-fed by the algorithm. I looked over my dad's shoulder at some of the posts he was looking at and just shook my head.
Fark stopped being fun when some loon tried to get me fired.
It didn't work, by the way.
Same reasons that required Jimmy Carter to sell his peanut farm.
I'm not sure you understand what jailbreaking means in the context of AIs. It means prompts: asking the AI things and trying to get it to produce inappropriate responses. Trying doesn't require any special skills, just the ability to communicate. Yes, I very much DO think most parents will try to see if they can get the doll to say inappropriate things before giving it to their children, to make sure it's not going to be harmful.
(Now, if Mattel has done their job right, *succeeding* will be difficult)
"We want to create puppets that pull their own strings." -- Ann Marion "Would this make them Marionettes?" -- Jeff Daiell