
Comment Re:US also used ~21GW for data-centers in 2024... (Score 3, Informative) 32

The data centers will be supplied by nuclear or geothermal which is best suited to supplying a steady amount of power. Solar is better than burning coal, but it's not well suited to all problems and trying to force it into areas where it's not well suited is foolish and only breeds resentment. We should be more focused on getting solar into residential installations where it works great.

Nuclear and solar are actually similar sources of electricity. They're both classed as non-dispatchable, which means their output cannot follow demand. They're just at opposite ends of the same spectrum.

Nuclear takes hours to ramp up and down - you have to plan for increases and decreases in consumption hours ahead of time. Solar and wind just suddenly start and stop generating. So you under-run a nuclear plant (it supplies only most of the current demand), while you curtail renewable production (i.e., discard output whenever solar/wind produce more than current demand needs). The grid gets destabilized if you cannot turn down nuclear production, or cannot ramp up production should solar/wind falter.

Coal, geothermal, hydro, and natural gas plants are dispatchable in that their output takes minutes to change - you can ramp them up and down, even from cold, within 15 minutes or so, which is sufficient. Batteries are even faster, since they can respond in under a second.
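The asymmetry above can be made concrete with a toy dispatch loop. This is an illustrative sketch with made-up numbers, not real grid data: nuclear runs flat at the minimum demand (under-run), solar is taken as-is and curtailed when there's a surplus, and dispatchable plants fill whatever gap remains.

```python
def dispatch(demand, solar):
    """Toy hourly dispatch. Nuclear runs flat at the minimum demand
    (under-run, so it never exceeds load), solar is used as-is and
    curtailed on surplus, and dispatchable plants (gas/hydro/batteries)
    ramp to fill whatever gap remains."""
    nuclear = min(demand)
    plan = []
    for d, s in zip(demand, solar):
        gap = d - nuclear - s
        plan.append({
            "dispatchable": max(gap, 0),   # gas/hydro ramped up
            "curtailed": max(-gap, 0),     # surplus solar thrown away
        })
    return plan

# Hypothetical hourly figures in GW, purely illustrative:
plan = dispatch(demand=[20, 18, 25, 30, 28], solar=[0, 5, 12, 8, 0])
```

Note that in the midday hour (25 GW demand, 12 GW solar) the flat nuclear baseload forces 5 GW of solar to be curtailed - exactly the "opposite ends of the same spectrum" problem.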

Datacenters, while most of their demand is static, do have a variable component as well - it's why your laptop can go from a day's worth of battery life to 3 hours if you play a game or something. Likewise, an idle server may consume maybe 100W, while one fully loaded jumps to 1.5kW.

The key with AI loads is that nuclear can work, but you have to schedule it. If you know you have a major processing load coming, you can tell your nuclear plant to prepare for it in advance and have it ready hours later to run your task. And as it completes, the plant can ramp down as well.

But datacenters powering things like cloud computing are much less predictable - you can tell the plant ahead of Black Friday to expect higher loads as instances are spun up to deal with the influx of demand, but it's a lot more variable, and if demand spikes you might not be able to handle it. Or if demand fails to materialize, it can be devastating (and expensive).

Comment Re:I just did it...it doesn't compile (Score 1) 58

Funny, given that about 7-8 years ago I was tasked with coding in Java (something I barely had skills in, outside of the Visual J++ days - and I only had that because it was the cheapest way to get Windows NT).

No "vibe coding" for me, but I managed to piece together something that worked just by scouring the documentation I had, some code examples, and a lot of Android Studio (IntelliJ) helping me with the syntax intricacies. The code worked, was relatively clean, and I could explain it all. I'm sure a seasoned Java developer could write it 10 times better, but it was sufficient for the job (it was a debugging tool). And that was with the tools I had - my brain, Google, and development environments that do a lot of the boilerplate. I wouldn't call myself a Java developer, because I know I relied a lot on tooling and help and would be lost if I were given only a text editor.

If you're a seasoned developer who knows the basics, picking up other stuff is just syntax and library calls. Heck, I still remember using Android Studio to write some Android NDK bindings, as it handled the JNI trickery. Still not a Java developer, but a little logic and exploration with the tools meant I could fake it while understanding what I did. I also understood the faults in the code, so as to avoid introducing more bugs.

Comment Re:What a shame (Score 1) 38

If people are using R, it's because they are doing statistics. R doesn't compete against other languages, it fills a different niche. Its use grows or shrinks depending on the size of that niche.

That would put it in the domain of a 4GL (fourth-generation language), which are generally domain-specific languages. You can already name another 4GL - SQL.

And you can probably tell you are unlikely to write the next Chrome killer in SQL (though I'm sure people have tried).

R is just another one of those - for statistics. Sure, you can do a lot of it with NumPy and other Python packages, just like you can do a lot in C as well. But it's just a lot easier in R, since you get to abstract away most of the crap and just get it to do what you want.

It's also not a new idea - the Apollo Guidance Computer software was structured the same way. Obviously a 15-bit (plus sign) processor with a limited instruction set wasn't going to get you to the moon and back easily. Instead they created a runtime on top of it (a virtual machine) which provided far more capabilities, including the floating point that did all the orbital mechanics computations. The key point is that the machine had a limited number of task slots, so it could only run a few of those jobs at any one time. Those task slots contained the state of each VM and its registers (which were far larger than the 15+1-bit native hardware registers).

The whole Error 1202 was the executive (hypervisor) basically saying it was out of task slots, because the radar computations kept occupying slots until they ran out. The executive then did a reboot and restarted the essential tasks (whose state was saved in the task slots), while discarding the now-useless radar computation tasks (new radar readings would make the old ones obsolete anyway).
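That executive pattern can be sketched in a few lines. This is an illustrative toy model, not actual AGC code - the class, job names, and slot count are all invented for the sketch; the real AGC's "core sets" numbered only a handful.

```python
class Executive:
    """Toy model of an AGC-style executive: a fixed pool of task
    slots ('core sets'); scheduling fails once they run out."""
    def __init__(self, slots):
        self.slots = slots
        self.jobs = []                 # each entry holds a job's saved state

    def schedule(self, job, essential=False):
        if len(self.jobs) >= self.slots:
            raise OverflowError("1202: no task slots left")
        self.jobs.append({"job": job, "essential": essential})

    def restart(self):
        # On overflow the executive rebooted, restarting only the
        # essential jobs and discarding stale (e.g. radar) work.
        self.jobs = [j for j in self.jobs if j["essential"]]

ex = Executive(slots=3)                # slot count is illustrative
ex.schedule("guidance", essential=True)
ex.schedule("radar-1")
ex.schedule("radar-2")
try:
    ex.schedule("radar-3")             # out of slots -> program alarm
except OverflowError:
    ex.restart()                       # reboot, keep essentials only
```

After the restart only the essential guidance job survives, which is why the landing could continue through the alarms.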

That's what R does - it provides a programming environment abstracted away from the actual underlying hardware to let you get your statistics done. It's optimized for that, and while it can do other things (it's Turing complete, after all), it's just awkward at best.

That's why 4GLs often provide linkages to other languages, so you can give the 4GL the hard analysis tasks while the other language handles the things it's less suited for. You use SQL to interact with structured databases, but you don't use it as your HTTP server - you use linkages to whatever your web application is written in.
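That division of labor looks like this in practice - a minimal sketch using Python's standard-library sqlite3 module (the table and figures are invented for illustration): SQL does the set-oriented query it is good at, and the general-purpose host language handles everything around it.

```python
import sqlite3

# The 4GL (SQL) handles the declarative, set-oriented part...
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("west", 100.0), ("east", 250.0), ("west", 50.0)])

totals = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()

# ...while the host language handles everything else
# (formatting, HTTP, business logic):
for region, total in totals:
    print(f"{region}: {total:.2f}")
```

The linkage here is the driver API; R's equivalents (`.Call` to C, or packages like reticulate) play the same role in the other direction.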

Comment Re:dust (Score 1) 5

Also note that Linux is GPLv2-only, while GNU would want something GPLv3-compatible. Linus Torvalds has made his opinion on the GPLv3 very clear (he thinks it was a huge mistake, especially given its incompatibility with GPLv2).

And before you say TiVo-ization (funny, a term that has now outlasted the original company), Linus acknowledges that, but also points out that Linux got some much-needed MIPS patches from TiVo that fixed various bugs.

Comment Re:Good riddance (Score 1) 56

That's because Java itself didn't get a native-looking GUI toolkit until very late in its life. Things like Swing were GUI toolkits, but they never looked native, so you could always tell. These days a Java app can look native, because there are toolkits where it will look like it was always a Windows app on Windows, a macOS app on Mac, and ... something on Linux.

Of course, there was a "native" toolkit available - the web browser - which is why most apps started moving that way. Java applets were a way to cope with the web's extreme limitations at the time, such as interactive feedback - you just couldn't do any interactivity that didn't involve a request and response through the server, and what was available was limited.

Then Adobe came along with Flash, which handled a lot of the interactivity. HTML5 finally arrived, along with iOS, to kill Flash, and with WebAssembly the need for plugins disappeared. (That was something Mozilla drove - they wanted people to not need plugins, and leveraged the JavaScript engine to run WASM.)

Nowadays plugins are basically obsolete - the WebAssembly runtime is basically universal, and safe since it's essentially pre-parsed code running on the sandboxed JS engine. It only took about 30 years to beat Java at its own game.

Comment Re:Charging at home (Score 1) 152

The problem is that everyone thinks you need a special electrical installation for EV charging. And that's expensive to retrofit - you need to install higher-capacity panels and such.

The key point is that it's much easier to run L1 charging to parking spots, but people keep hearing "L1 is going to take days to charge" and dismiss it. Yet L1 is practical for many people - if you drive 50 miles or less daily, you can likely "survive" on L1 alone, and the vast majority of car trips are, well, less than that. "Oh no, what if my kid wants to visit a friend's house?" or "I need to run an errand!" - just because you don't completely charge one night doesn't mean you won't the next.
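The back-of-the-envelope arithmetic behind the "50 miles a day" claim, using illustrative round numbers (a North American L1 outlet drawing 12 A continuous at 120 V, and a ballpark EV efficiency of 3.5 mi/kWh - your car and outlet will differ):

```python
# Back-of-the-envelope L1 overnight charging; all figures illustrative.
volts, amps = 120, 12        # typical North American L1 continuous draw
efficiency = 3.5             # miles per kWh, a common EV ballpark
hours_plugged = 12           # parked overnight

kw = volts * amps / 1000     # 1.44 kW at the outlet
kwh = kw * hours_plugged     # ~17 kWh added overnight
miles = kwh * efficiency     # ~60 miles of range recovered
print(f"{kw:.2f} kW x {hours_plugged} h = {kwh:.1f} kWh -> ~{miles:.0f} miles")
```

Roughly 60 miles of range recovered per night - comfortably more than a 50-mile daily commute, which is why the "days to charge" framing misleads.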

It's a lot easier and cheaper to run L1 charging to every parking space than to try to run L2-capable chargers to a few.

Even in Europe they're starting to realize that lower-end L2 chargers with 13A output (i.e., a regular plug) are often sufficient, though the plug usually needs to be EV-rated for durability reasons.

One company selling EV charging subscriptions (they offer to outfit your apartment parking lot with L2 chargers, with the requirement that everyone buys their power) does something similar - they offer a split L1/L2 arrangement: if you and a neighbour both use the outlet, you each get L1 charging, but if your neighbour doesn't, you get L2. So it's a pair of outlets per pair of parking spaces, and EV penetration is low enough that it's unlikely you'll both be using it except at night.

Comment Re:I have to say by now I approve (Score 1) 82

The problem is that kernel code IS hard. The Linux kernel uses a lot of fancy tricks to make it easier, but there are still a lot of places where it remains tricky.

If you've coded for the Linux kernel for decades, it used to be that you needed to know your context - atomic or process. But over the decades the need to know this has died out - very little code is still atomic, and most developers who would've needed to care don't anymore.

But other parts are not so forgiving - kernel code is naturally re-entrant, so there are lots of places where you can get in trouble. Driver unloading remains one of the trickiest parts, because you can unload a driver but still get requests in unexpected places, which causes a crash.

Rust is good because it helps eliminate a class of tricky memory bugs. You can do it properly in C - many drivers in Linux already do - but it's pretty much all tribal knowledge gained over the years, because the context where memory is allocated is often different from where it's eventually discarded, and lifetimes end up screwed up. Even worse, it probably works 99.99% of the time, but then someone does something weird (but legal) and hits that 0.01%, and now you have to re-architect your buffering to handle it.

But Rust cannot save you from bog-standard logic errors (like what Cloudflare did), or deployment errors.

Luckily for the kernel, memory management is still a huge element of a lot of the work so eliminating a whole class of bugs is a big positive. This is especially so in drivers where most of the work is in memory management, like GPU drivers. So Rust handles more administrivia on memory management, leaving the developer to write the code that actually performs the task.

Comment Re:This RAM thing is the AI bubble inflection poin (Score 1) 30

I'm hoping this bubble is how I get 2TB of RDIMM at 4800 or 5600, slightly used, for my server for just $200 in 2028.

That's how the memory makers got burned the last 2-3 times, when they were forced to clear out surplus RAM at discount prices. Which they've said they aren't going to do this time - they're going to shut down DDR4 production as planned and make DDR5 as planned. They aren't going to invest in new fabs to meet demand, because they don't want to be stuck with stuff they can't sell. They see the bubble, and they know bringing a new fab online will take years and cost billions, and by the time it comes online the bubble bursts and they're stuck with a white elephant. (And memory fabs are specialized, so it's not easy to repurpose them for, say, logic chips.)

That's not to say the dip won't happen - prices will collapse, and you'll find memory that cheap because a datacenter ordered it years ago, prepaid, but never took delivery and forfeited the deposit. So as far as the memory makers go, they have memory that's nearly paid for; they just want the rest of the money.

The big problem with AI is the business model - people are pumping in so much money, but the revenue isn't there. OpenAI made billions, but they need to make hundreds of billions, and the need to scale revenue is an issue. The current plan is that they need to scale revenue by 3 or 4 times to break even operationally by 2030. And that's the biggest company in the space - what about everyone else?

And what about the next round of funding? Those GPUs have a very short shelf life, and OpenAI and the like will need to make further purchases to continue development, because last-gen hardware will not cut it.

Will ChatGPT disappear? Unlikely. Even the dot-com bubble left us with Amazon, eBay, Google, and tech companies like Microsoft and Cisco survived. It certainly wiped out lots of other pie in the sky ideas.

And we'll be back to AI not replacing jobs - because to make money, "vibe coding" rates and the like will likely have to jump to the point where humans are cheaper than the cost of running the equipment. But it will continue, because once most of the firms implode there will be a flood of cheaper Nvidia GPUs to run the models left behind. One company implodes and floods the market with near-new GPUs, which are snapped up by companies to run models in-house so they can avoid paying for a service; other providers then lose subscribers and implode too, starting the cycle anew as the subscriber base dwindles.

Comment Re:I'm still missing why Apple needs to bend the k (Score 1) 100

I think you're right. They should be able to charge what they wish. But if you agree to that, you should also agree that Microsoft should be able to take a 30% cut of any Windows application. And charge developers a fee to have the privilege of writing code on Windows. And prevent any application from accepting payment in any other form than the Windows Store payment system. And prevent you from displaying other payment options.

After all, it's their OS. They don't owe anyone access to their ecosystem.

In the early days, to develop for MS-DOS or Windows you had to pay for the development languages - which Microsoft sold for around $300 for MS-DOS, and likewise for Windows.

Heck, you know what one of the first GNU software projects was? GCC. Why? Because if you bought a Sun, HP, SGI or other Unix workstation, it didn't come with a compiler - you had to buy the compiler package for many thousands of dollars. GCC wasn't the best compiler, but it was free, and all you needed was someone to compile it for you.

Anyhow, also consider what the market would have done. Had Microsoft done this, perhaps we'd have a more vibrant desktop OS marketplace, instead of the near-complete domination by Microsoft we ended up with.

Comment Re:On the contrary (Score 1) 161

Fair point. I could have been clearer though. My biggest concern is EVs being disabled en masse as an act of war. If Chinese EVs reach even 10% of the cars on the road, suddenly disabling them all at once would lead to a lot of chaos. Add that to the other backdoors that the Chinese inevitably have in key tech products, and you have a pretty effective opening salvo in a war.

The BYD Dolphin has no connectivity options in the car. It's just priced too low to afford the cellular modem and service, so it's rather basic on the infotainment side. It does have CarPlay, though.

Comment Re:What a lost opportunity for Microsoft (Score 1) 18

I'm sure this doesn't align with Microsoft's long-term agenda. They're trying to eliminate on-premises and private infrastructure in favor of everyone running their workloads in their cloud. If I were switching from VMware, I'd be really cautious about switching to Hyper-V as well. What is stopping Microsoft from pulling the same style licensing switcheroo with Hyper-V in the future?

You're actually closer than you think. VMware's tagline of late is "bringing the cloud on premises" - as in, you use their tools to bring the cloud in-house. That's the complete opposite of Microsoft, which wants to push you into the cloud and away from on-prem.

They're basically working opposite ends of the spectrum - VMware sells you stuff to bring the cloud in-house via expensive subscriptions, and Microsoft brings your stuff to the cloud via subscriptions.

Comment Re:Math... (Score 1) 61

Help me understand this math, how does 3 computers value out to $46,855? That's more than 15k per computer, which TBH, I've never seen a normal computer cost that much. Servers? yea, I've racked servers that have cost a quarter of a million... but not normal computers. What are these students working on?

Back in the day, Sun workstations were popular because they were relatively cheap - after all, $20,000 would get you a fairly nice workstation (in an era when a kick-ass PC ran around $5,000). Most other workstations started at $20K for the base model and easily went over $100K. Of course, universities were often the target and got pretty nice educational discounts, plus grants from those companies, so students could easily have access to machines that would've cost the down payment on a house.

These days, an AI chip like the H200 goes for $40,000 or so. Lower-end units can probably be had for $10K or so. A few National Instruments cards in a computer can easily cost $20K or more.

Depending on your area of research, you might be sitting on commercially very expensive, high-end equipment that was donated.

Comment Re:Unintended combination of stupid laws? (Score 2) 267

Also makes the assumption that people are on social media. I mean, I have a Facebook account, and in the past 5 years I posted 0 times on it. I have a Twitter account, and in the past 5 years all I have are tweets like "Enter now for your chance to win a free iPad from MacRumors!"

That's really the only reason I have any social media accounts - if I want more entries in some draw, I have to post a message on my Twitter feed. Last I checked, I was followed by a couple of bots, and my total follower count is exactly those bots.

Comment Re:Poor choice. (Score 2) 81

Beyond this, X has the resources to keep Operation Bluebird in court longer than Operation Bluebird can afford legal representation.

That comes later. They're trying to get the trademark cancelled - which is made easier by public statements saying Twitter is dead, and by Musk posting that it's not Twitter, it's X, and to stop referring to it as such.

Once the trademark is cancelled, they are free to register it themselves, and then maybe X would be able to sue.

At best, X could contest the cancellation and registration requests with the USPTO, but again that's going to be hard, as X/Musk have done a lot to disavow Twitter.
