Comment Re:study confirms expectations (Score 1) 184

That's actually a good question. Inks have changed somewhat over the past 5,000 years, and there's no particular reason to think that tattoo inks have been equally mobile across this timeframe.

But now we come to a deeper point. Basically, tattoos (as I've always understood it) are surgically-engineered scars, with the scar tissue supposedly locking the ink in place. It's quite possible that my understanding is wrong - this isn't an area I've looked into in any depth, so the odds of me being right are rather slim. Still, if my understanding is correct, then you might well expect the stuff to stay put: skin is highly permeable, but scar tissue less so, and as long as the ink molecules exceed the size that can migrate, you'd think it would be fine.

That it isn't fine shows that one or more of these ideas must be wrong.

Comment Re:What's old is new again (Score 1) 43

That wasn't *all* I said, but it is apparently as far as you read. But let's stay there for now. You apparently disagree with this, which means that you think LLMs are the only kind of AI there is, and that language models can be trained to do things like design rocket engines.

Comment Re:What's old is new again (Score 5, Informative) 43

Here's where the summary goes wrong:

Artificial intelligence is one type of technology that has begun to provide some of these necessary breakthroughs.

Artificial Intelligence is in fact many kinds of technology. People conflate LLMs with the whole field because it's the first kind of AI that an average person with no technical knowledge could use, after a fashion.

But nobody is going to design a new rocket engine in ChatGPT. They're going to use some other kind of AI that works on problems and processes the average person can't even conceive of -- like design optimization, where there are potentially hundreds of parameters to tweak. Some of the underlying technology may have similarities -- like "neural nets", which underneath are just collections of mathematical matrices that encode likelihoods, not realistic models of biological neural systems. It shouldn't be surprising that a collection of matrices whose parameters describe weighted relations between features has a wide variety of applications. That's just math; it's just sexier to call it "AI".
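To make that concrete, here's a minimal sketch in Python (all sizes and weights are made up purely for illustration, with NumPy standing in for whatever linear algebra library a real system would use). A two-layer "neural net" is nothing but matrix products with a nonlinearity in between:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: 4 input features, 3 hidden units, 1 output.
    W1 = rng.normal(size=(3, 4))   # layer-1 weight matrix
    b1 = np.zeros(3)               # layer-1 bias vector
    W2 = rng.normal(size=(1, 3))   # layer-2 weight matrix
    b2 = np.zeros(1)

    def forward(x):
        # One forward pass: two matrix-vector products and a ReLU.
        h = np.maximum(0.0, W1 @ x + b1)
        return W2 @ h + b2

    x = np.array([0.5, -1.2, 3.0, 0.1])  # made-up feature vector
    print(forward(x))

Nothing in there knows about rockets or language; swap in different matrices and the same math does either.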

Comment Re:Wrong question. (Score 1) 195

Investment is a tricky one.

I'd say that learning how to learn is probably the single most valuable part of any degree, and anything that has any business calling itself a degree will make this a key aspect. That alone makes a degree a good investment, as most people simply don't know how to learn. They don't know where to look, how to look, how to tell what's useful, how to connect disparate research into something that could be used in a specific application, and so on.

The actual specifics tend to matter less, as degree courses run well behind the cutting edge and are necessarily grossly simplified -- at that stage it's really only crude foundational knowledge. Undergraduates simply don't know enough yet to know the truly interesting stuff.

And this is where it gets tricky, because an undergraduate 4-year degree is aimed at producing thinkers. Those who want to do just the truly depressingly stupid stuff can get away with the 2-year courses; you do 4 years if you are actually serious about understanding. And, in all honesty, very few companies want entry-level hires who are competent at the craft - they want people who are fast and mindless. Nobody puts in four years of network theory or (Valhalla forbid) statistics for the purpose of being mindless. Not unless the stats destroyed their brain - which, to be honest, does happen.

The humanities don't make things any easier. There would be a LOT of benefit in having technical documentation written by folk with some sort of command of the language they're using. Half the time, I'd settle for stuff written by people who are merely passing acquaintances of the language; a vague awareness that a language exists would sometimes be an improvement. But that requires taking a 2x4 to the usual cultural bias that says you cannot be good at STEM and the arts at the same time. (It's a particularly odd bias, too, given how highly Leonardo is held in esteem and how neoclassical universities sit at or near the top in every country.)

So, yes, I'll agree that a lot of degrees are useless for gaining employment and a lot are useless for actually doing the work, but the overlap between those two sets is vague at times.

Comment Re:Directly monitored switches? (Score 1) 54

There is a possibility of a short circuit causing an engine shutdown. Apparently there is a known fault whereby a short can result in the FADEC "fail-safing" to engine shutdown, and this is one of the competing theories, as the wiring apparently runs near a number of points in the aircraft where water is present (which is a really odd design choice).

Now, I'm not going to sit here and tell you that (a) the wiring actually runs there (the wiring block diagrams are easy to find, but block diagrams don't show actual wiring paths), (b) that there is anything to indicate that water could reach such wiring in a way that could cause a short, or (c) that it actually did so. I don't have that kind of information.

All I can tell you, at this point, is that aviation experts are saying that a short at such a location would cause an engine shutdown and that Boeing was aware of this risk.

I will leave it to the experts to debate why they're using electrical signalling (it's slower than fibre, heavier than fibre, can corrode, and can short) and whether the FADEC fail-safes are all that safe or just plain stupid. After all, they get paid to shout at each other, and they actually know which specifics to shout about.

But if the claims are remotely accurate, then there were a number of well-known flaws in the design, and I'm sure Boeing will just love answering questions on why these weren't addressed. The problem, of course, is that none of us knows which of those claims are indeed remotely accurate, and that makes it easy for air crash investigators to go easy on manufacturers.

User Journal

Journal: Audio processing and implications

Just as a thought experiment, I wondered just how sophisticated a sound engineering system someone like Delia Derbyshire could have had in 1964, and so set out to design one using nothing but the materials, components, and knowledge available at the time. In terms of sound quality, you could have matched anything produced in the early-to-mid 1980s. In terms of processing sophistication, you could have matched anything produced in the early 2000s. (What I came up with would take a large comple

Comment Re:Don't blame the pilot prematurely (Score 4, Insightful) 54

It's far from indisputable. Indeed, it's hotly disputed within the aviation industry. That does NOT mean that it was a short-circuit (although that is a theory that is under investigation), it merely means that "indisputable" is not the correct term to use here. You can argue probabilities or reasonableness, but you CANNOT argue "indisputable" when specialists in the field in question say that it is, in fact, disputed.

If you were to argue that the most probable cause was manual, then I think I could accept that. If you were to argue that Occam's Razor requires treating that as H0 - the null hypothesis, a theory that must be falsified before others are considered - I'd not be quite so comfortable, but I'd accept that you've got to have some sort of rigorous methodology, and that's probably the sensible one.

But "indisputable"? No, we are not at that stage yet. We might reach that stage, but we're not there yet.

Comment Re:It WILL Replace Them (Score 4, Insightful) 45

The illusion of intelligence evaporates if you use these systems for more than a few minutes.

Using AI effectively requires, ironically, advanced thinking skills and abilities. It's not going to make stupid people as smart as smart people, it's going to make smart people smarter and stupid people stupider. If you can't outthink the AI, there's no place for you.

Comment Re:Oh, Such Greatness (Score 1, Interesting) 297

Lincoln was a Free Soiler. He may have had a moral aversion to slavery, but it was secondary to his economic concerns. He believed that slavery could continue in the South but should not be extended into the western territories, primarily because it limited economic opportunities for white laborers, who would otherwise have to compete with enslaved workers.

From an economic perspective, he was right. The Southern slave system enriched a small aristocratic elite—roughly 5% of whites—while offering poor whites very limited upward mobility.

The politics of the era were far more complicated than the simplified narrative of a uniformly radical abolitionist North confronting a uniformly pro-secession South. This oversimplification is largely an artifact of neo-Confederate historical revisionism. In reality, the North was deeply racist by modern standards, support for Southern secession was far from universal, and many secession conventions were marked by severe democratic irregularities, including voter intimidation.

The current coalescence of anti-science attitudes and neo-Confederate interpretations of the Civil War is not accidental. Both reflect a willingness to supplant scholarship with narratives that are more “correct” ideologically. This tendency is universal—everyone does it to some degree—but in these cases, it is profoundly anti-intellectual: inconvenient evidence is simply ignored or dismissed. As in the antebellum South, this lack of critical thought is being exploited to entrench an economic elite. It keeps people focused on fears over vaccinations or immigrant labor while policies serving elite interests are quietly enacted.

Comment Re:Computers don't "feel" anything (Score 1) 56

It's different from humans in that human opinions, expertise, and intelligence are rooted in experience. Good or bad, and inconsistent as it is, that is far, far more stable than AI. If you've ever tried to work on a long-running task with generative AI, the crash in performance as the context rots is very, very noticeable, and it's intrinsic to the technology. Work with a human long enough and you will see the faults in their reasoning, sure, but it's just as good or bad as it was at the beginning.

Comment Re:Computers don't "feel" anything (Score 3, Informative) 56

Correct. This is why I don't like the term "hallucinate". AIs don't experience hallucinations, because they don't experience anything. The problem they have would more correctly be called, in psychological terms, "confabulation" -- they patch holes in their knowledge by making up plausible-sounding facts.

I have experimented with AI assistance for certain tasks, and I find that generative AI absolutely passes the Turing test for short sessions -- if anything it's too good, too fast, too well-informed. But the longer the session goes, the more the illusion of intelligence evaporates.

This is because, under the hood, what the AI is doing is a bunch of linear algebra. The "model" is a set of matrices, and the "context" is a set of vectors representing your session up to the current point, augmented during each prompt response with results from Internet searches. The problem is, the context takes up lots of expensive high-performance video RAM, and every user only gets so much of it. When you run out of space, the older material drops out of the context. This is why credibility drops the longer a session runs: you start with a nice empty context, bring in some search results, run them through the model, and it all makes sense. Once parts of the context start getting thrown out, what remains turns into inconsistent mush.
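Here's a toy sketch of that eviction behaviour in Python (real systems juggle tokens in video RAM; this just uses a list of strings with a made-up four-item budget, purely to illustrate the point):

    # Hypothetical capacity standing in for the VRAM budget.
    MAX_CONTEXT_ITEMS = 4

    context = []

    def add_to_context(item):
        # Append the newest turn, silently evicting the oldest when over budget.
        context.append(item)
        while len(context) > MAX_CONTEXT_ITEMS:
            context.pop(0)

    for turn in ["system prompt", "search results", "question 1",
                 "answer 1", "question 2", "answer 2"]:
        add_to_context(turn)

    print(context)  # the system prompt and earliest results are already gone

By the sixth turn, the instructions the session started with are no longer in the window at all, which is exactly the "inconsistent mush" effect.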
