
Comment Re:enough energy to knock something off a shelf (Score 4, Insightful) 29

Not with this one - the energy here equates to a couple hundredths of a joule. Now, the "Oh-My-God" particle had a much higher energy, about three orders of magnitude higher. That's knock-pictures-over sort of energy (and a lot more than that). The problem is that you can't deposit it all at once. A ton of energy does get transferred during the first collision, but it ejects whatever it hit out of whatever it was in as a shower of relativistic particles that - like the original particle - tend to travel a long distance between interactions. Whatever particle was hit isn't dragging the whole target along with it; it just buggers off as a ghostly energy spray. There will be some limited chains of secondary interactions transferring more kinetic energy, but not "knock pictures over" levels of energy.

Also, here on the surface you're very unlikely to get the original collision; collisions with the atmosphere can spread the resultant spray of particles out across multiple square kilometers before any of them reaches the surface.
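For scale, a back-of-envelope check (assuming the published ~3.2e20 eV estimate for the Oh-My-God particle, and the couple-hundredths-of-a-joule figure above):

```python
# Back-of-envelope energy comparison. Assumes ~3.2e20 eV for the
# Oh-My-God particle (published estimate) and ~0.02 J ("a couple
# hundredths of a joule") for the particle in the article.
EV_TO_J = 1.602e-19  # joules per electronvolt

omg_energy_j = 3.2e20 * EV_TO_J  # ~51 J
article_energy_j = 0.02

print(f"OMG particle: {omg_energy_j:.1f} J")
print(f"Ratio: {omg_energy_j / article_energy_j:.0f}x (~3 orders of magnitude)")
```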

Comment True (Score 1) 47

There are only 24 hours in a day, and a human brain can only process so much sensory input.

But there is a lot of "addressable market" left - just think of all that blank space on the walls, floors and ceilings around you, failing to be distracting.

Speaking of TAM and related buzzwords,

"I think it's good to preserve optionality"

You can always spot people being promoted too quickly - they haven't learned how to operate the role's vocabulary yet.

Comment Re:xAI, power gobbler (Score 3, Insightful) 11

The average ICE car burns its own mass's worth of fuel every year. Up in smoke into our breathing air, gone, no recycling.

The average car on the road lasts about two decades, and is then recycled, with the vast majority of its metals recovered.

The manufacturing phase is not the phase you have to worry about when it comes to transportation.

Comment Re:xAI, power gobbler (Score 4, Funny) 11

Any sources for this

Anonymous (2021). "How My Uncle’s Friend’s Mechanic Proved EVs Are Worse." International Journal of Hunches, 5(3), 1-11.

Backyard, B. (2018). "EVs Are Worse Because I Said So: A Robust Analysis." Garage Journal of Automotive Opinions, 3(2), 1-2.

Dunning, K. & Kruger, E. (2019). "Why Everything I Don’t Like Is Actually Bad for the Environment." Confirmation Bias Review, 99(1), 0-0.

Johnson, L. & McFakename, R. (2022). "Carbon Footprint Myths and Why They Sound Convincing After Three Beers." Annals of Bro Science, 7(2), 1337-42.

Lee, H. (2025). "Numbers I Felt Were True". Global Journal of Speculative Engineering, 22(1), 34-38.

Outdated, T. (2015, never revised). "EVs Are Bad Because of That One Study From 2010 I Misinterpreted." Obsolete Science Digest, 30(4), 1-5.

Tinfoil, H. (2020). "Electric Cars Are a Government Plot (And Other Things I Yell at Clouds)." Conspiracy Theories Auto, 5(5), 1-99.

Trustmebro, A. (2019). "The 8-Year Rule: Why It’s Definitely Not Made Up." Vibes-Based Research, 2(3), 69-420.

Wrong, W. (2018). "The Art of Being Loudly Incorrect About Technology." Dunning-Kruger Journal, 1(1), 1-?.

Comment Re:Better Results (Score 5, Informative) 238

I know this is an alien concept to most people here, but it would be nice if people would actually, you know, read the papers first? I know nobody does this, but, could people at least try?

First off, this isn't peer reviewed. So it's not "actual, careful research", it's "not yet analyzed to determine whether it's decent research".

Secondly, despite what they call it, they're not dealing with LLMs at all. They're dealing with Transformers, but in no way does it have anything to do with "language", unless you think language is repeated mathematical transforms on random letters.

It also has nothing to do with "large". The model that most of the paper is based on is minuscule, with 4 layers, 32 hidden dimensions, and 4 attention heads. A typical large frontier LLM has maybe 128 layers, >10k hidden dimensions, and upwards of 100 attention heads.
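To put rough numbers on that gap, a sketch using the standard ~12·d²·L parameter-count rule of thumb (ignoring embeddings and norms; the "frontier" numbers are the ballpark figures above, not any specific model):

```python
def approx_transformer_params(layers: int, d_model: int) -> int:
    """Very rough transformer parameter count: ~12 * d_model^2 per
    layer (attention + MLP blocks), ignoring embeddings and norms."""
    return 12 * d_model ** 2 * layers

toy = approx_transformer_params(layers=4, d_model=32)
frontier = approx_transformer_params(layers=128, d_model=10_000)

print(f"toy: ~{toy:,} params")            # ~49 thousand
print(f"frontier: ~{frontier:,} params")  # ~150 billion
print(f"ratio: ~{frontier // toy:,}x")
```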

So right off the bat, this has nothing to do with "large language models". It is a test on a toy version of the underlying tech.

Let us continue: "During the inference time, we set the temperature to 1e-5." This is a bizarrely low temperature for an LLM. Might as well set it to zero. I wonder if they have a justification for this? I don't see one in the paper. Temperatures this low tend to show no creativity and get stuck in loops, at least with "normal" LLMs.
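To illustrate what a 1e-5 temperature does, a minimal softmax-with-temperature sketch (my own illustration, not the paper's code): dividing the logits by a near-zero temperature collapses the distribution to a one-hot argmax, i.e. pure greedy decoding.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Numerically stable softmax; as temperature -> 0 this
    collapses to a one-hot distribution on the argmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Even closely spaced logits become a near-certain pick at T=1e-5:
probs = softmax_with_temperature([2.0, 1.9, 0.5], 1e-5)
print(probs)  # effectively [1.0, 0.0, 0.0]
```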

They train it with 456976 samples (that's 26^4), which is... not a lot. Memorization is learned quickly in LLMs, while generalization is learned very slowly (see e.g. the papers on "grokking").

Now here's what they're actually doing. They have two types of symbol transformations: rotation (for example, ROT("APPLE", 1) = "BQQMF") and cyclic shifts (for example, CYC("APPLE", 1) = "EAPPL").
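A minimal sketch of the two transformations as described (my own implementation reconstructed from the examples above, not the paper's code):

```python
def rot(s: str, k: int) -> str:
    """Caesar-style rotation: shift each letter k places in the alphabet."""
    return "".join(chr((ord(c) - ord("A") + k) % 26 + ord("A")) for c in s)

def cyc(s: str, k: int) -> str:
    """Cyclic shift: rotate the whole string right by k positions."""
    k %= len(s)
    return s[-k:] + s[:-k] if k else s

print(rot("APPLE", 1))  # BQQMF
print(cyc("APPLE", 1))  # EAPPL
```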

For the in-domain tests, they train on, say, ROT and test with ROT. It scores 100% on these. It scores near-zero on the others:

Composition (CMP): They train on a mix of two-step tasks: ROT followed by ROT; ROT followed by CYC; and CYC followed by ROT. They then test with CYC followed by CYC. The belief is that the model should have figured out on its own what CYC does, and be able to apply it twice.

Partial Out-of-Distribution (POOD): They train on simply ROT followed by ROT. They then task it to perform ROT followed by CYC. To repeat: it was never trained to do CYC.

Out-of-Distribution (OOD): They train simply on ROT followed by ROT. They then task it to do CYC followed by CYC. Once again, it was never trained to do CYC.

The latter two seem like grossly unfair tests. Basically, they want this tiny toy model with a "brain" smaller than a dust mite's to zero-shot an example it's had no training on just by seeing one example in its prompt. That's just not going to happen, and it's stupid to think it's going to happen.

Re: their CMP example: the easiest way for the (minuscule) model to learn it isn't to deduce what ROT and CYC mean individually; it's to learn what ROT-ROT does, what ROT-CYC does, and what CYC-ROT does. It doesn't have the "brainpower" to "mull over" these problems, nor was it trained to (nor does it have any preexisting knowledge of what a "rotation" or a "cycle" is); it's just learning: problem 1 takes 2 parameters and I need to do an offset based on the sum of those two parameters. Problem 2... etc.

The paper draws far too strong conclusions from its premise. They make zero attempt to insert probes into their model to see what it's actually doing (à la Anthropic). And it's a Karen's Rule violation (making strong assertions about model performance vs. humans without actually running any human controls).

The ability to zero-shot is not some innate behavior; it is a learned behavior. Actual LLMs can readily zero-shot these problems. And by contrast, a human baby who has never been exposed to concepts like cyclic or rotational transformation of symbols could not. One of the classic hard problems is how to communicate with an alien intellect - if we got a message from aliens, how could we understand it? If we wanted to send one to them, how could we get them to understand it? Zero-shotting communicative intent requires a common frame of reference to build off of.

Comment Re: Racket (Score 1) 61

I wonder if they realize that the money they get through deals like this is still subject to Congressional budgetary controls. The Reagan administration didn't either (or chose to ignore the constitutional limits on presidential power) when it tried to use money from clandestine sales of arms to the Iranians to set up a fund it could spend without Congressional control.

Comment Write your will (Score 3, Insightful) 71

Seems like a straightforward thing to write into your will.

In related news, most people should write a will way earlier than they usually start thinking about it. If you have enough that you worry about it being taken, you have enough for people to fight over.

You absolutely should think about the digital ephemera you leave behind. I don't just mean encrypting your porn directory and not leaving that key in your list. If you have data you think your family (or whoever) would be interested in, make sure it is curated, findable, and readable by the folks you expect to deal with it. Some people love digging through family cruft, but most don't. How interested do you think your folks are in your unedited directory of 40K photos?

Comment Price, value and rivalry (Score 1) 111

There was enormous value to the old internet because the marginal price for access was zero.

LLMs provide a mechanism to access the same information in a radically more energy-intensive way, which was the missing mechanism to put a price on that value.

A price tag means the data has to be made into a rivalrous good or you can't sell it. Then the old data has to be made unavailable.

Comment Re:Breaking news (Score 1) 222

100% this. I'm a vegetarian, the sort of person they think should be buying their products, but their products disgust me, because they remind me of meat. I don't want to be reminded of an animal corpse while I'm trying to enjoy a tasty meal. Why mimic the thing I don't want to eat?

(I'll only speak re: vegetarians below, but I expect vegans are similar)

I would ask non-vegetarians: imagine that you live in a world where people ate toddler meat. Real human toddlers, slaughtered for their meat. The vast majority of people in your situation would of course avoid eating them. Some may be radical anti-toddler-meat campaigners. Others may silently accept that they're not going to change the rest of the world. Either way, you let people know that you don't eat toddler meat so they don't serve it to you. But hey, some friendly cooks feel bad that you don't get to enjoy toddler meat! So they make a toddler meat substitute that looks and tastes exactly like the real thing! They sell it in packaging with pictures of dead toddlers on it, but with labels saying "No toddler included!" And then they expectantly wait for you to thank them and praise them for finally making toddler meat that you can eat - rather than being disgusted by the whole concept and wanting some non-toddler-related food that doesn't make you think about dead toddlers while you eat.

That's not the situation *all* vegetarians are in, but it is the situation that a *lot*, dare I say most, vegetarians are in.

I think a lot of non-vegetarians cooking for vegetarians are just frankly confused about what to offer us, as they have trouble picturing a meal without meat. It's really simple: you know how fatty, salty, carby, umami-rich stuff tastes really really good and leaves you feeling satiated? Yeah, just make something that's fatty, salty, carby, and umami-rich that doesn't involve meat, and your vegetarian friends will be happy ;) Like, pick anything carby, cook it with a tasty fat that pairs well with it, add salt and something umami-rich (mushrooms, nuts, tomatoes, yeast, nori, olives, cheese (vegetarians only), spices, etc etc), and voilà, that's a good vegetarian dish ;)

Of course, *to make it healthier* and give it a more "adult" taste, you'll want to include non-carby veggies (which, per unit *dry mass*, are actually highly protein-rich - e.g. freeze-dried broccoli is well more protein-rich than your average grade of ground beef without its water, and freeze-dried watercress is up there with fish; they're just heavily watered down). Veggies also add umami. You can also - optionally, it's not at all a requirement - include high-protein things like tofu, tempeh, seitan, TVP, etc. But protein deficiency is not common among vegetarians or vegans in western society (the main risk is iron deficiency, particularly for vegans, and - exclusively for vegans - B12 deficiency, but only if they don't eat anything fortified with B12, though B12 fortification is common).

Comment Re:80 to 100 years (Score 1) 58

Am I the only person getting whiplash that we're rediscussing the exact same thing? This concept was already proposed as Breakthrough Starshot, and was big in the press at the time, including on this site.

Anyway, you still have to have the energy to transmit back, which was proposed to come from an RTG. My hot take: since you already have to have a (comparably) big sail anyway, which means booms to maintain its structure, use 232U to get many times the energy per gram of 238Pu (38.9 MeV vs. 5.6 MeV per decay chain), at the cost of a hard gamma from 208Tl, and put it out on the ends of the booms, with appropriately radiation-hardened electronics. You could also do double duty with an alpha sail (an alpha emitter backed by a thin low-Z material to create net thrust from alpha emission).

Also, if the 232U is in the form of a compound that's sufficiently soft for fast diffusion at its equilibrium temperature, you can diffuse out the 220Rn and avoid 208Tl's hard gamma altogether. This costs you energy (you only capture 16.6 MeV of alphas), but besides avoiding the hard gamma, it also means you don't retain the mass of the stable decay products, so your craft gets significantly lighter over time. (At one point I was considering urania aerogels to lose the radon instead of "soft" high-diffusion materials, but the data suggested the aerogel would quite quickly self-pulverize and densify.)

232U is readily producible (indeed, it's a waste product of thorium reactors); the main issue is that it's a pain to handle. But for something like this, you're dealing with microscopic amounts.
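A rough specific-power comparison, assuming secular equilibrium (the whole chain's energy released at the parent's decay rate) and the per-chain energies above; the half-lives are the standard values (232U: 68.9 y, 238Pu: 87.7 y):

```python
import math

AVOGADRO = 6.022e23
MEV_TO_J = 1.602e-13
YEAR_S = 3.156e7  # seconds per year

def specific_power_w_per_g(half_life_y, mass_number, chain_energy_mev):
    """Decay heat per gram, assuming the full chain energy is released
    at the parent's decay rate (secular equilibrium)."""
    decay_rate = math.log(2) / (half_life_y * YEAR_S)  # decays/s per atom
    atoms_per_gram = AVOGADRO / mass_number
    return decay_rate * atoms_per_gram * chain_energy_mev * MEV_TO_J

pu238 = specific_power_w_per_g(87.7, 238, 5.6)   # ~0.57 W/g, matching published RTG figures
u232 = specific_power_w_per_g(68.9, 232, 38.9)   # ~5.2 W/g, roughly 9x more
print(f"238Pu: {pu238:.2f} W/g, 232U: {u232:.2f} W/g")
```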

Comment Liability law (Score 2) 87

Yes, it is a major concern, and the reason is how US liability law works.

Specifically, foreseeability.

In the US, instead of proscribing how products have to be made, in most categories you can sell whatever you want, but are exposed to liability if things go horribly wrong. We expect lawsuits to police the market.

So, you're suing Ford because you got your tongue stuck in the carburetor. One way to do that is to show that Ford should have reasonably foreseen that you'd stick your tongue in there and done something to prevent that.

Of course, Ford doesn't want to be sued, so they do what they can to avoid it. Like sticking NO USER-LICKABLE PARTS INSIDE stickers on the carb.

Trying to block explosives how-tos is OpenAI's 'no licking' sticker. It isn't going to stop a determined tongue, but it might stop a lawsuit.
