Comment Re:If all of AI went away today (Score 1) 125

Way beyond golems - tons of old religions have notions of "craftsman deities" making mechanical beings (like Hephaestus making Talos, the Keledones, and the Kourai Khryseai) or self-controlled artifacts (such as Vishvakarma making an automated flying chariot, or Hephaestus making self-moving tripods to serve the gods at banquets), or even things that (mythological) humans created, such as the robots that guarded the relics of the Buddha, or a whole city of wooden robots made by a carpenter in the story of Naravahanadatta. And then you have actual early human attempts at automatons, such as robotic orchestras and singers in ancient China, or a robotic orchestra and mechanical peacocks in the Islamic world. (China also had mythological automatons in addition to real ones, such as Yan Shi's automaton, which enraged King Mu by winking at his concubines.)

Humans have been thinking about robots and "thinking machines" since time immemorial.

Comment Re:If all of AI went away today (Score 1) 125

AI isn't going to disappear just because the stock prices of these companies crash, or even if they close down altogether. It's too late. The models already exist, inference is dirt cheap to run (and can even be run on your own computer), and vast numbers of people demonstrably find it useful (regardless of whether you, reader, do).

It's funny: when you see "The AI bubble will collapse", you get two entirely different groups of people agreeing - one thinking "AI is going to go away!", the other thinking "Inference costs are going to zero!". The second group's logic: all the investors who spent their money building datacentres will lose their shirts, but those datacentres will still exist - and with much less demand for YOLOing new leading-edge foundation models, it'll mainly be "inference for the lowest bidder, so long as they can bid more than the power cost at that point in time". Mean power costs for an AI datacentre are something like a third of its total amortized cost, but with datacentres spread across broad geographic regions, spot prices can dip well under that due to local price fluctuations. And any drop in prices triggers the Jevons paradox.
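The "bid above power cost" floor can be sketched with purely illustrative numbers (none of the dollar figures below come from real datacentre data; only the one-third power share is from the comment above):

```python
# Purely illustrative sketch of the post-crash inference price floor.
# All dollar figures are made-up assumptions, not real datacentre data.

amortized_cost_per_hr = 3.00   # hypothetical total amortized cost per GPU-hour
power_share = 1 / 3            # power as ~1/3 of amortized cost (per the comment)
mean_power_cost = amortized_cost_per_hr * power_share   # $1.00/hr

spot_multiplier = 0.6          # local spot prices can dip below the mean
spot_floor = mean_power_cost * spot_multiplier          # $0.60/hr

# Post-crash, a rational operator accepts any inference bid above the
# marginal power cost of the moment, even if it never repays the capex.
bid = 0.75
print(bid > spot_floor)   # True: at this spot price, the GPUs keep running
```

The point of the sketch: once the capex is a sunk cost, the clearing price for inference can fall all the way to the (fluctuating) marginal power cost.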

Comment Re:ah, get off them already (Score 1) 125

The question for investors is really the timing of the correction, not whether it will happen. IMHO, as weird as it sounds, it likely comes down to highly visible inflation (groceries, fuel, etc). Inflation leads to voter rage, which leads to politicians pursuing anti-inflation strategies, which dries up capital in the market and starves capital-hungry growth fields (like AI). Once investors catch wind that their previous growth field is no longer in growth mode, they bail, causing a collapse in stock prices.

It was rate hikes that caused the internet bubble to pop.

Right now, Trump seems obsessed with rate cuts to juice the stock market, but at some point, the administration's chaotic, pro-inflation policies (tariffs, hits to the ag and construction labour supply, the war on wind and solar, etc) will catch up with them.

Comment Re: What's the problem? (Score 1) 262

The problem is when an answer is long and involves nuance, and that doesn't work in debates.

The problem with not engaging, however, is that if you don't engage with an issue, you'll just get endlessly sniped on it. And the alternative approach - embrace the opposite side's positions to shut them up - also doesn't work, because you get the worst of both worlds (you tick off your side, while not winning over votes from the opposite side). It's a strategic error to run from difficult conversations.

Comment Re:Pfff, my 2009 iMac can run at 212F/100C (Score 4, Interesting) 15

A lot of people misunderstand the market for the DGX Spark.

If you want to run a small model at home, or create a LoRA for a tiny model, you don't want to do it on this - you want to do it on gaming GPUs.

If you want to create a large foundation model, or run commercial inference, you don't want to do it on this - you want to do it on high-end AI servers.

This fits the middle ground between those two things. It gives you far more memory than you can get on gaming GPUs (letting you run inference on, tune, or train much larger models, especially when you link two Sparks). It sacrifices some memory bandwidth and FLOPS and costs somewhat more, but it lets you do things you simply can't do in any meaningful way on gaming GPUs - things you'd normally have to buy or rent big expensive servers for.
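A back-of-the-envelope memory estimate makes the capacity argument concrete. The sizing rule below (parameters times bits per weight, plus a loose overhead for KV cache and activations) is a common rule of thumb, not a vendor spec:

```python
# Rough memory estimate for hosting a model at a given quantization.
# The ~15% overhead for KV cache and activations is a loose assumption.

def est_memory_gb(params_billions: float, bits_per_weight: float,
                  overhead: float = 0.15) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # 1B params @ 8-bit ~= 1 GB
    return weights_gb * (1 + overhead)

# A 235B-parameter model at 4-bit quantization:
print(round(est_memory_gb(235, 4)))   # 135 (GB) - far beyond any gaming GPU,
                                      # but within reach of two linked 128GB Sparks
```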

The closest current alternative is a Mac Studio with an M2 or M3 Ultra. You get better memory bandwidth on the Macs, but far worse TOPS. How those factors balance out depends greatly on the workload, but in most cases the two will be in the same ballpark. For example, one $7.5k Mac Studio M3 Ultra with 256GB is reported to run Qwen 3 235B (GGUF) at 16 tok/s, while two linked $4.2k DGX Sparks with the same total 256GB reportedly manage 12 tok/s at similar quantization. Your mileage may vary depending on what you're doing.
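Dividing the quoted prices by the quoted throughputs gives a crude dollars-per-(tok/s) comparison - illustrative arithmetic on the figures above, nothing more:

```python
# Crude price-per-throughput comparison using the figures quoted above.
mac_price, mac_toks = 7500, 16          # one Mac Studio M3 Ultra, 256GB
spark_price, spark_toks = 2 * 4200, 12  # two linked DGX Sparks, 256GB total

print(round(mac_price / mac_toks))      # 469 dollars per tok/s
print(round(spark_price / spark_toks))  # 700 dollars per tok/s
```

By this one metric the Mac comes out ahead, but it ignores the TOPS gap, which matters for other workloads.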

Either way, you're not going to be training a big foundation model or serving commercial inference on either, at least not economically. But if you want something that can work with large models at home, these are the sorts of solutions you want. The Spark is the kind of system you train your toy and small models on before renting a cluster for a YOLO run, or use to run inference on a large open model for personal or internal office use.
