Comment Re:Good but insufficient (Score 1) 64

The spec it came up with includes: which specific material is used for which specific component, additional components to handle cases where chemically or thermally incompatible materials are in proximity, what temperature regulation is needed where (and how), placement of sensors, pressure restrictions, details of computer network security, the design of the computers, network protocols, network topology, and design modifications needed to pre-existing designs - it's impressively detailed.

I've actually uploaded what it produced to GitHub, so if this most glorious piece of what is likely engineering fiction intrigues you, I'd be happy to provide a link.

Comment They do need to worry about turning off customers. (Score 1) 156

If that were true, they would wait for sentences to end before slapping you round the face with a wet fish - I mean, showing an advert for something you would never buy in your life.

Creators could mark breakpoints where an interruption would be less intrusive - or go "whole 1950s American" and actually say "Now here is a word from our sponsor ...". And it only needs to be done once, on upload.

It is not hard for AI to detect where to split the content so the interruption is slightly less intrusive.
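
As an illustration only, a first pass could be as simple as the following Swift sketch. It assumes you already have sentence-end timestamps and the length of the silence after each sentence (hypothetical inputs here, not any real platform's API), and it simply prefers the boundaries followed by the longest pauses.

// Illustrative sketch: pick less intrusive ad breakpoints by choosing
// the sentence boundaries followed by the longest silences.
struct SentenceBoundary {
    let endTime: Double        // seconds into the video
    let silenceAfter: Double   // seconds of silence before the next sentence
}

func adBreakpoints(from boundaries: [SentenceBoundary], count: Int) -> [Double] {
    return boundaries
        .sorted { $0.silenceAfter > $1.silenceAfter }  // longest pauses first
        .prefix(count)
        .map { $0.endTime }
        .sorted()                                      // back to chronological order
}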

Here in the UK, commercial TV is allowed to show 6 minutes of advertising per hour - normally split into four 1.5-minute breaks, but sometimes two 3-minute breaks or one 6-minute break.
On one occasion I saw the 3-minute break at the end of one hour concatenated onto the start of the next hour's 6 minutes! I presume this was subsequently made illegal, since it has not been done again.

I can only assume bribery and corruption are the reason these rules do not apply to the internet here.

Submission + - Chinese PhD student arrested smuggling biological materials, deleting evidence (foxnews.com)

schwit1 writes: Federal authorities expose Chinese national's attempt to bring concealed worm specimens to American laboratory

"The alleged smuggling of biological materials by this alien from a science and technology university in Wuhan, China—to be used at a University of Michigan laboratory—is part of an alarming pattern that threatens our security," U.S. Attorney Jerome F. Gorgon, Jr. said. "The American taxpayer should not be underwriting a PRC-based smuggling operation at one of our crucial public institutions."

This comes less than a week after two Chinese nationals were arrested on federal charges for bringing a 'head blight' fungus into the US.

Submission + - Vaccine skeptic RFK Jr. ousts entire CDC vaccine advisory board (apnews.com)

skam240 writes: Well-known vaccine skeptic and US Secretary of Health and Human Services Robert F. Kennedy Jr. on Monday removed every member of a scientific committee that advises the Centers for Disease Control and Prevention on how to use vaccines, and pledged to replace them with his own picks ( https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fapnews.com%2Farticle%2Fken... ).

Kennedy has an extensive history of vaccine skepticism ( https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2F... ), frequently presenting the public with false or misleading data dressed up to look scientific.

Comment Re:They've never heard of llama.cpp? (Score 1) 27

You realize, of course, that exams are also highly tuned for one specific thing, like "20th Century English Literature", "Linear Algebra", "Criminology: Overview of the Prison System", and "Axiomatic Meta-logic".

Eigenvalues are all but guaranteed to show up in the second exam, and all but guaranteed not to be mentioned in the other three.

PS: That last one is highly recommended, but it's a little dense.

Submission + - Apple Lets Developers Tap Into Its Offline AI Models (techcrunch.com)

An anonymous reader writes: Apple is launching what it calls the Foundation Models framework, which the company says will let developers tap into its AI models in an offline, on-device fashion. Onstage at WWDC 2025 on Monday, Apple VP of software engineering Craig Federighi said that the Foundation Models framework will let apps use on-device AI models created by Apple to drive experiences. These models ship as a part of Apple Intelligence, Apple’s family of models that power a number of iOS features and capabilities.

“For example, if you’re getting ready for an exam, an app like Kahoot can create a personalized quiz from your notes to make studying more engaging,” Federighi said. “And because it happens using on-device models, this happens without cloud API costs [...] We couldn’t be more excited about how developers can build on Apple Intelligence to bring you new experiences that are smart, available when you’re offline, and that protect your privacy.”

In a blog post, Apple says that the Foundation Models framework has native support for Swift, Apple’s programming language for building apps for its various platforms. The company claims developers can access Apple Intelligence models with as few as three lines of code. Guided generation, tool calling, and more are all built into the Foundation Models framework, according to Apple. Automattic is already using the framework in its Day One journaling app, Apple says, while mapping app AllTrails is tapping the framework to recommend different hiking routes.
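
To give a sense of what "a few lines of code" means here, this is a minimal Swift sketch assuming the session-based API Apple showed at WWDC 2025 (a LanguageModelSession that you prompt with respond(to:)); treat the exact names and signatures as assumptions rather than verified SDK surface.

import FoundationModels

// Minimal sketch: ask the on-device model to generate quiz questions from
// the user's notes. LanguageModelSession and respond(to:) are taken from
// Apple's WWDC 2025 material and are assumptions here, not verified API.
func quizQuestions(from notes: String) async throws -> String {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Write three quiz questions based on these notes:\n\(notes)"
    )
    return response.content
}

Because the model runs on device, a call like this incurs no per-request cloud cost, which is the point Federighi makes above.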

Comment Good but insufficient (Score 1) 64

I've mentioned this before, but I had Gemini, ChatGPT, and Claude jointly design me an aircraft, along with its engines. The sheer intricacy and complexity of the problem is such that it would take engineers years to reach what all three AIs agree is a good design. Grok looked at as much of it as it could before running out of context space, and agreed the design was sound.

Basically, I gave an initial starting point (a historic aircraft) and had each in turn fix issues with the previous version, until all three agreed on correctness.
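
For illustration only, that process amounts to a loop like the Swift sketch below. The Reviewer closure stands in for a call out to one of the models (Gemini, ChatGPT, or Claude); it is a hypothetical placeholder, not a real API binding.

// Hypothetical sketch of the round-robin refinement described above: each
// "reviewer" (one per model) either approves the current design or returns
// a revised version, and the loop stops once every reviewer approves.
struct Review {
    let revisedDesign: String
    let approved: Bool
}

typealias Reviewer = (String) -> Review

func refine(design: String, reviewers: [Reviewer], maxRounds: Int = 20) -> String {
    var current = design
    for _ in 0..<maxRounds {
        var allApproved = true
        for reviewer in reviewers {
            let review = reviewer(current)
            if !review.approved {
                current = review.revisedDesign  // take this model's fix
                allApproved = false
            }
        }
        if allApproved { return current }       // all three agree: done
    }
    return current                              // give up after maxRounds
}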

This makes it a perfectly reasonable sanity check. If an engineer who knows what they're doing looks at the design and spots a problem, then AI has an intrinsic problem with complex designs, even when the complexity was produced iteratively by the AI itself.

Comment Re:Bollocks (Score 4, Interesting) 168

Natural NNs appear to use recursive methods.

What you "see" is not what your eyes observe, but rather a reconstruction assembled entirely from memories that are triggered by what your eyes observe, which is why the reconstructions often have blind spots.

Time seeming to slow down (even though experiments show that response times don't actually change), daydreaming, remembering, predicting, and so on, the brain's search for continuity, the episodic rather than snapshot nature of these processes, and the lack of any perceived gap during sleep are all suggestive of some sort of recursion, where the output is used as a component of the next input and where continuity is key.

We know something of the manner of reconstruction - there are some excellent, if rather old, documentary series, one by James Burke and another by David Eagleman, that give elementary introductions to how these reconstructions operate and the physics that make such reconstructions necessary.

It's very safe to assume that neuroscientists would not regard these as anything better than introductions, but they are useful for looking for traits we know the brain exhibits (and why) that are wholly absent from AI.

Comment Re:Books (Score 2) 168

You will find that books written via the infinite-monkeys approach are less useful than books written by conscious thought, and that even those books are less useful than books written and then repeatedly fact-checked and edited by independent conscious thought.

It is not, in fact, the book that taught you things, but the level of error correction.

Comment Re:Frenetic churn (Score 1) 168

You are correct.

When it comes to basic facts, if multiple AIs with independent internal structures and independent training sets state the same claim as a fact, that's good evidence it's probably not a hallucination but something actively learned - but it's nowhere near evidence that the claim is actually true.

Because AIs have no understanding of semantics, only association, that's about as good as AI gets.
