Comment Re:Why?! (Score 1) 100

Wireless controllers have a reliability rate SUBSTANTIALLY lower than wired controllers.

They picked carbon fibre that was past its best-by date from storage.
The "source" wasn't fine if it's cast-offs. Details matter. If it were a prototype used for evaluative testing without live occupants, all good. But that's not what we're talking about here.

Have you ever actually worked with wireless electronics or protocols? Here's a hint: Bluetooth and WiFi are not protocols that should be used in any life-or-death situation. There are ways to use them, and to do so safely; this wasn't even CLOSE to that.
RF interference can come from many places; in an emergency, a partially faulty motor can saturate RF bands and block transmissions. I've worked in environments where a failed electric motor wiped out an entire network. You can't eliminate all risk, but you can certainly increase safety margins. Now imagine a minor failure eliminating their control system.

Comment Why?! (Score 5, Insightful) 100

Why?!

I truly find this story uninteresting after learning enough about the decision making on the project.

The decision making was so poor that they used a wireless controller as the only real means of control.

They not only picked questionable materials to build it from, they picked questionable sources for that material and skipped doing any real testing of it, all while ignoring legitimate concerns.

Pushing the limits is one thing, but so many of these decisions were just simply daft.

Hardwired controls with wireless for convenience; override the wireless in an emergency.
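A minimal sketch of what that priority scheme could look like (hypothetical names and structure, not anything the sub actually ran): the wired channel always wins, and the wireless channel is locked out entirely in an emergency.

    # Illustrative sketch only: a control loop where the hardwired input always
    # takes priority over the wireless controller. All names are hypothetical.

    from dataclasses import dataclass
    from typing import Optional


    @dataclass
    class ControlInput:
        thrust: float   # -1.0 .. 1.0
        rudder: float   # -1.0 .. 1.0


    def select_control(wired: Optional[ControlInput],
                       wireless: Optional[ControlInput],
                       emergency: bool) -> Optional[ControlInput]:
        """Prefer the wired channel; allow wireless only when there is
        no emergency override in effect."""
        if wired is not None:
            return wired                 # hardwired input always wins
        if not emergency and wireless is not None:
            return wireless              # wireless is a convenience channel only
        return None                      # no trusted input: hold last safe state


    if __name__ == "__main__":
        wired = ControlInput(thrust=0.0, rudder=0.1)
        wireless = ControlInput(thrust=0.5, rudder=-0.2)
        print(select_control(wired, wireless, emergency=False))   # wired wins
        print(select_control(None, wireless, emergency=True))     # wireless locked out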

Check the sub before and after each launch, looking for material issues and documenting any changes. Not a cursory glance, but using equipment to actually scan the surface for defects, as is done in related industries (ultrasonic, radioisotope/X-ray, etc.).

These two things alone would have increased the safety factor of this project immensely.

But ignoring them all? Boring. You made a coffin with a randomization factor.

Security

A Researcher Figured Out How To Reveal Any Phone Number Linked To a Google Account (wired.com) 17

A cybersecurity researcher was able to figure out the phone number linked to any Google account, information that is usually not public and is often sensitive, according to the researcher, Google, and 404 Media's own tests. From a report: The issue has since been fixed but at the time presented a privacy issue in which even hackers with relatively few resources could have brute forced their way to people's personal information. "I think this exploit is pretty bad since it's basically a gold mine for SIM swappers," the independent security researcher who found the issue, who goes by the handle brutecat, wrote in an email.

[...] In mid-April, we provided brutecat with one of our personal Gmail addresses in order to test the vulnerability. About six hours later, brutecat replied with the correct and full phone number linked to that account. "Essentially, it's bruting the number," brutecat said of their process. Brute forcing is when a hacker rapidly tries different combinations of digits or characters until finding the ones they're after. Typically that's in the context of finding someone's password, but here brutecat is doing something similar to determine a Google user's phone number.

Brutecat said in an email the brute forcing takes around one hour for a U.S. number, or 8 minutes for a UK one. For other countries, it can take less than a minute, they said. In an accompanying video demonstrating the exploit, brutecat explains an attacker needs the target's Google display name. They find this by first transferring ownership of a document from Google's Looker Studio product to the target, the video says. They say they modified the document's name to be millions of characters, which ends up with the target not being notified of the ownership switch. Using some custom code, which they detailed in their write up, brutecat then barrages Google with guesses of the phone number until getting a hit.
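As a rough illustration of the enumeration step, here is a minimal sketch; check_candidate() is a hypothetical stand-in for whatever oracle leaks a match, not brutecat's code or any real Google endpoint, and the number formats are simplified.

    # Hypothetical sketch of the enumeration step in a phone-number brute force.
    # check_candidate() is a stand-in for whatever oracle leaks a yes/no answer;
    # it is NOT a real Google API, and the numbering plans are simplified.

    from itertools import product
    from typing import Callable, Iterator, Optional


    def candidates(country: str, known_prefix: str = "") -> Iterator[str]:
        """Yield every possible subscriber number for a (simplified) country plan."""
        lengths = {"US": 10, "UK": 10}          # digits after the country code
        remaining = lengths[country] - len(known_prefix)
        for digits in product("0123456789", repeat=remaining):
            yield known_prefix + "".join(digits)


    def brute_force(country: str,
                    known_prefix: str,
                    check_candidate: Callable[[str], bool]) -> Optional[str]:
        """Try candidates until the oracle says one matches the target account."""
        for number in candidates(country, known_prefix):
            if check_candidate(number):
                return number
        return None


    if __name__ == "__main__":
        # Toy oracle: pretend the target's number is this one.
        target = "4155550123"
        hit = brute_force("US", known_prefix="415555",
                          check_candidate=lambda n: n == target)
        print(hit)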

AI

'AI Is Not Intelligent': The Atlantic Criticizes 'Scam' Underlying the AI Industry (msn.com) 205

The Atlantic makes the case that "the foundation of the AI industry is a scam" and that AI "is not what its developers are selling it as: a new class of thinking — and, soon, feeling — machines." [OpenAI CEO Sam] Altman brags about ChatGPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us." These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
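That "statistically informed guesses" description boils down to something like the toy sketch below, where the "model" is just a lookup table of next-word probabilities; the vocabulary and numbers are invented for illustration.

    # Toy illustration of next-token sampling: the "model" here is just a table of
    # probabilities over a tiny vocabulary. Real LLMs compute these distributions
    # with billions of parameters, but the final step is the same kind of draw.

    import random

    # Hypothetical conditional distribution P(next word | "the cat sat on the")
    next_token_probs = {
        "mat": 0.62,
        "floor": 0.21,
        "roof": 0.09,
        "moon": 0.05,
        "theorem": 0.03,
    }


    def sample_next_token(probs: dict) -> str:
        """Draw one token according to its probability mass."""
        tokens = list(probs)
        weights = [probs[t] for t in tokens]
        return random.choices(tokens, weights=weights, k=1)[0]


    if __name__ == "__main__":
        prompt = "the cat sat on the"
        print(prompt, sample_next_token(next_token_probs))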
A sociologist and linguist even teamed up for a new book called The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, the article points out: The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."

Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist — it's more qualified than any human could be." Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age....

The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on.... If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should — and should not — replace, they may be spared its worst consequences.

Biotech

'We Finally May Be Able to Rid the World of Mosquitoes. But Should We?' (yahoo.com) 153

It's no longer a hypothetical question, writes the Washington Post. "In recent years, scientists have devised powerful genetic tools that may be able to eradicate mosquitoes and other pests once and for all."

But along with the ability to fight malaria, dengue, West Nile virus and other serious diseases, "the development of this technology also raises a profound ethical question: When, if ever, is it okay to intentionally drive a species out of existence...?" When so many wildlife conservationists are trying to save plants and animals from disappearing, the mosquito is one of the few creatures that people argue is actually worthy of extinction. Forget about tigers or bears; it's the tiny mosquito that is the deadliest animal on Earth. The human misery caused by malaria is undeniable. Nearly 600,000 people died of the disease in 2023, according to the World Health Organization, with the majority of cases in Africa... But recently, the Hastings Center for Bioethics, a research institute in New York, and Arizona State University brought together a group of bioethicists to discuss the potential pitfalls of intentionally trying to drive a species to extinction. In a policy paper published in the journal Science last month, the group concluded that "deliberate full extinction might occasionally be acceptable, but only extremely rarely..."

It's unclear how important malaria-carrying mosquitoes are to broader ecosystems. Little research has been done to figure out whether frogs or other animals that eat the insects would be able to find their meals elsewhere. Scientists are hotly debating whether a broader "insect apocalypse" is underway in many parts of the world, which may imperil other creatures that depend on them for food and pollination... Instead, the authors said, geneticists should be able to use gene editing, vaccines and other tools to target not the mosquito itself, but the single-celled Plasmodium parasite that is responsible for malaria. That invisible microorganism — which a mosquito transfers from its saliva to a person's blood when it bites — is the real culprit.

A nonprofit research consortium called Target Malaria, which gets core funding from the Gates Foundation and from Open Philanthropy (backed by Facebook co-founder Dustin Moskovitz and his wife), has genetically modified mosquitoes in its labs and hopes to deploy them in the wild within five years...

Submission + - Caffeine Has a Weird Effect on Your Brain While You're Asleep (sciencealert.com) 1

alternative_right writes: Caffeine was shown to increase brain signal complexity, and shift the brain closer to a state of 'criticality', in tests run by researchers from the University of Montreal in Canada. This criticality refers to the brain being balanced between structure and flexibility, thought to be the most efficient state for processing information, learning, and making decisions.

Submission + - What Will Universities Look Like Post-ChatGPT? (cameronharwick.com) 5

An anonymous reader writes: Lots of people are sounding the alarm on AI cheating in college.

Who could resist a tool that makes every assignment easier with seemingly no consequences? After spending the better part of the past two years grading AI-generated papers, Troy Jollimore, a poet, philosopher, and Cal State Chico ethics professor, has concerns. “Massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” he said. “Both in the literal sense and in the sense of being historically illiterate and having no knowledge of their own culture, much less anyone else’s.”

Economist Cameron Harwick says it's on professors to respond, and it's going to look like relying more on tests and not on homework—which means a diploma will have to be less about intelligence and more about agency and discipline.

This approach significantly raises the stakes of tests. It violates a longstanding maxim in education, that successful teaching involves quick feedback: frequent, small assignments that help students gauge how they’re doing, graded, to give them a push to actually do it.... Unfortunately, this conventional wisdom is probably going to have to go. If AI makes some aspect of the classroom easier, something else has to get harder, or the university has no reason to exist.

The signal that a diploma sends can’t continue to be “I know things”. ChatGPT knows things. A diploma in the AI era will have to signal discipline and agency – things that AI, as yet, still lacks and can’t substitute for. Any student who makes it through such a class will have a credible signal that they can successfully avoid the temptation to slack, and that they have the self-control to execute on long-term plans.


Comment Re:This isn't necessarily bad (Score 1) 141

That's what I assumed as well. Buy Now Pay Later loans like this have a long history of being predatory. So I took a look at what it would cost to accept Klarna (as an example) as a merchant. The reality is that they have transaction fees that are very similar to credit cards. In other words, these companies do not need to rely on missed payments to make a profit.

These companies are apparently setting themselves up to replace traditional credit card payment systems, which suits me right down to the ground.

The difference is that it is much easier to get a Klarna account, and it isn't (yet) as widely available.

Comment Re:Credit Cards? (Score 2) 141

I felt the same way at first. Traditional BNPL schemes were very predatory. However, Klarna (and others) appear to be playing approximately the same game as the traditional credit card processors. They charge transaction fees that are roughly the same as credit card processors, and, like credit cards, their customers don't pay extra if they pay their bill on time. Klarna, in particular, actually appears to give customers interest-free time.

The difference, for consumers, is primarily that a Klarna account is much easier to get, and it isn't universally accepted. From a merchant perspective, depending on your payment provider, you might already be able to accept Klarna, and it appears to mostly work like a credit card. It's even possible that chargebacks are less of an issue, although it does appear that transaction fees are not returned in the case of a refund.
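To make the fee comparison concrete, here is a back-of-the-envelope sketch on a $100 purchase; the rates are hypothetical placeholders, not Klarna's or any card network's published pricing.

    # Back-of-the-envelope merchant fee comparison on a single purchase.
    # The rates below are hypothetical placeholders, not published pricing.

    def merchant_fee(amount: float, percent: float, fixed: float) -> float:
        """Fee the merchant pays the payment provider on one transaction."""
        return amount * percent + fixed


    if __name__ == "__main__":
        purchase = 100.00
        card_fee = merchant_fee(purchase, percent=0.029, fixed=0.30)   # ~2.9% + $0.30
        bnpl_fee = merchant_fee(purchase, percent=0.033, fixed=0.30)   # ~3.3% + $0.30
        print(f"Card processor fee: ${card_fee:.2f}")
        print(f"BNPL provider fee:  ${bnpl_fee:.2f}")
        # Either way the provider earns a few dollars per sale from the merchant,
        # so the business model need not depend on consumers missing payments.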

Personally, I am all for competition when it comes to payment networks. Visa and Mastercard are both devils. More competition for them is good for all of us.

Submission + - Jared Isaacman pre-fired because of Musk connection (theregister.com)

Mirnotoriety writes: “Jared Isaacman, former NASA Administrator nominee, has shared how the US space agency might have looked under his leadership and blamed his connections with Elon Musk for the abrupt withdrawal of his nomination.”

"I don't like to play dumb ... I don't think that the timing was much of a coincidence ... There were other things going on on the same day."

There were indeed. Elon Musk's departure from the Department of Government Efficiency was also announced. "Some people had some axes to grind," said Isaacman, "and I was a good visible target."
