
Comment Self Correction (Score 3, Interesting) 28

I don't know if this guy has been on vacation or living under a rock, but there was already a correction last month. Microsoft dropped 2 GW worth of DC leases (on top of the several hundred MW they dropped in February), which flooded the market with inventory. Two of my customers immediately pulled out of work that was being done on other data centers because they knew they would be able to pick up space sooner and for less money as a result. Everybody in the industry saw a pull-back. Where we used to be designing and selling inventory that was 24 months out, now nobody wants to talk about anything that is more than 12 months away from being ready.

Tying new data centers to old nuclear plants has a whole host of other issues around it that make me think this will end up being a nothing burger (SMRs are another matter), but framing a capital system working as intended as some kind of irrationality seems ill-conceived.

Comment Re:Is this even possible? (Score 1) 86

I'm not sure this will answer your question, but if it doesn't then maybe you can expand on it a bit and I will try again.

In a typical DC you have both the utility feed and the (diesel) generator feed coming into an automatic transfer switch (ATS), which automatically switches the load over from one to the other if the utility feed fails. Both of these are then typically run through an on-line UPS which (again, typically) has a run-time of 4-5 minutes. So the power is conditioned by the UPSs in the facility all the time. If the power from the utility goes out, the ATS switches over to generator while the UPS picks up the load. The gens take about 30 seconds to start and get up to speed, which is plenty fast enough since you have 4+ minutes of run-time on your UPSs.

So the minor fluctuations in the grid don't really matter to a DC operator. The UPSs condition the power and the generators can run the facility indefinitely (provided you've got good fueling contracts in place).
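
For anyone who wants the timing spelled out, here's a rough back-of-the-envelope sketch in Python. The ~30 second generator start and the 4-5 minute UPS run-time are just the typical figures from above, and the ATS transfer delay is an assumption I made up for illustration, not a spec for any real facility.

    # Rough failover timeline: utility fails, the UPS carries the load,
    # the generators start, and the ATS transfers the load to them.
    UPS_RUNTIME_S = 4.5 * 60   # ~4-5 minutes of battery at full load (typical)
    GEN_START_S = 30           # gensets start and come up to speed (typical)
    ATS_TRANSFER_S = 5         # assumed transfer delay, purely illustrative

    time_on_battery = GEN_START_S + ATS_TRANSFER_S
    margin = UPS_RUNTIME_S - time_on_battery
    print(f"Load rides the UPS for ~{time_on_battery:.0f}s of {UPS_RUNTIME_S:.0f}s available")
    print(f"Margin before the batteries run dry: ~{margin:.0f}s")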

That said, I design about a dozen large data center solutions a year and each one comes with a set of somewhere between 500 and 1,000 distinct requirements. It's big money, and the people who are buying the capacity want to make sure they are getting exactly what they need/want. And every single one of these requests contains a question about the proximity of the data center to high-risk areas like chemical processing facilities, fuel storage depots, freight rail, etc. Guess what else is on the list? Nuclear facilities.

So the idea that these DC operators want to locate their facilities right beside nuclear power stations runs counter to one of their own risk requirements. I don't think this is going to be a big trend because *their* customers are not going to be happy about it, even if the operators are willing to make the compromise. $0.02 and all that...

Comment Nature's End (Score 4, Interesting) 23

What a crazy time to be alive. Back in 1987, authors Whitley Strieber and James Kunetka wrote a book called "Nature's End" about environmental catastrophes on Earth. In it, the protagonist used a computer called an "IBM AXE" that had a rollable screen. The book was set in 2025.

And here we are with Lenovo (which took over IBM's PC division) releasing just such a product in the very year the book was set. Wonderful! I wonder if anyone working at Lenovo has any idea...

Comment Re:WUE (Score 4, Informative) 79

Thanks for the link. You are exactly correct. As usual the media butchered it (in this case Bloomberg) -- the press release makes perfect sense.

In a typical data center the cooling cycle works like this: chillers on the roof, which use either water- or air-based chilling, cool a loop of water that runs to your server rooms. Those rooms have devices called CRAHs (Computer Room Air Handlers) or FWUs (Fan Wall Units) that use the chilled water to blow cold air through the room. The equipment heats that air up, it rises to the ceiling, and it is then drawn back in, dumping its heat into the chilled water loop. That water is then cooled back down by the chillers on the roof. It's amazing that we can get a PUE of 1.25 to 1.4 out of such a system, but it works pretty well.
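
To put a rough number on that loop: all the heat the servers dump into the air ends up in the chilled water, so the required water flow follows from Q = m_dot x c_p x delta_T. A quick sketch with my own illustrative numbers (a 1 MW hall and a 10 degree supply/return split, neither of which comes from the press release):

    # Chilled-water flow needed to carry away a given IT heat load.
    # Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)
    Q_W = 1_000_000      # 1 MW of heat rejected into the loop (illustrative)
    CP_WATER = 4186      # J/(kg*K), specific heat of water
    DELTA_T_K = 10       # assumed supply/return temperature split

    flow_kg_s = Q_W / (CP_WATER * DELTA_T_K)
    print(f"Required flow: ~{flow_kg_s:.0f} kg/s, i.e. roughly {flow_kg_s:.0f} L/s of water")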

AI is driving much higher densities in the racks. A typical air-cooled rack is something like 8-12 kW full of servers but can get as high as 20 or 30 kW. To cool a rack that is pushing 80 kW+ you need to use liquid cooling. Lots of techniques have been tried, but the one the industry is settling on is direct-to-chip, which uses a device called a CDU (Coolant Distribution Unit) to take the chilled water from the pipes that run to the CRAHs and loop it out in smaller lines to the racks, where it is distributed directly to cold plates on the CPUs and GPUs. This is almost exactly like what you would find in a higher-end gaming system.

The wonderful thing about direct-to-chip cooling is that it is much more efficient than air cooling, so your PUE goes down. The lower your PUE, the more of your power goes into running servers and the less goes into cooling equipment. With direct-to-chip efficiency in the cooling you can also run a higher chilled water loop temperature (because more of the cooling is getting directly to the equipment).
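
To see why that matters in practice: for a site with a fixed utility feed, the IT load you can actually sell is roughly the feed divided by the PUE. A tiny sketch with a made-up 50 MW feed:

    # For a fixed utility feed, a lower PUE leaves more power for servers.
    FACILITY_MW = 50                  # hypothetical utility capacity
    for pue in (1.4, 1.25, 1.1):      # air-cooled vs. increasingly direct-to-chip
        it_mw = FACILITY_MW / pue     # IT power = facility power / PUE
        print(f"PUE {pue}: ~{it_mw:.1f} MW left for IT load")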

So what Microsoft is saying in a nutshell is: "Hey, we're using less water because we're building more data centers with air chillers than evaporative water chillers, but because we're also deploying more direct-to-chip installations in those DCs, it's not increasing our power consumption too much".

One last thought: You still have to have CRAHs or FWUs in a data hall because ancillary equipment still has to be cooled down, and humans have to work in them. So unfortunately we can't get rid of the necessity to cool down the air.

Comment WUE (Score 4, Insightful) 79

Two common measurable gauges of data center efficiency are WUE (Water Usage Effectiveness) and PUE (Power Usage Effectiveness). A typical hyperscale data center design PUE is something like 1.25, while the actual PUE when the DC is fully loaded with servers is more like 1.4. What this means is that the total power consumed by the entire data center is 40% more than the servers themselves consume. Obviously 1.0 would be ideal but is unreachable.
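
For concreteness, PUE is just total facility energy divided by IT energy, so the 1.4 figure works out like this (the kWh totals are made up purely for the arithmetic):

    # PUE = total facility energy / IT equipment energy
    it_kwh = 10_000_000         # what the servers consumed (illustrative)
    overhead_kwh = 4_000_000    # cooling, UPS losses, lighting, etc. (illustrative)
    pue = (it_kwh + overhead_kwh) / it_kwh
    print(f"PUE = {pue:.2f}")   # 1.40 -> the facility draws 40% more than the IT load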

WUE is a bit different but the goal is the same: to measure how effectively the data center uses water. The calculation is Water Usage (L) / Energy Consumed (kWh). To get a data center built there is a lengthy and expensive permitting process, and local municipalities want to know the effect that the facility will have on the local water supply (aquifers, municipal water, etc.). So data center builders often use air-cooled chillers and closed chilled-water loops. These systems don't consume any water for cooling. They aren't new, and they work in almost any climate.
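
And the matching WUE arithmetic, again with made-up annual totals just to show the units:

    # WUE = water used for cooling (L) / IT energy consumed (kWh)
    water_l = 2_000_000          # annual cooling water in litres (illustrative)
    it_energy_kwh = 10_000_000   # annual IT energy in kWh (illustrative)
    wue = water_l / it_energy_kwh
    print(f"WUE = {wue:.2f} L/kWh")  # closed air-cooled loops push this toward zero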

I bring all this up because evaporative cooling is on the decline due to these concerns, and Microsoft is already leasing space in Phoenix data centers that do not use evaporative chillers (and has been for years). So I'm at a loss to explain why we have an article about them "investing in a new design" they are already using. This is likely just a feel-good article and isn't anything new.

Also, for those folks saying "why not just build somewhere cold": for plenty of workloads that is possible (like machine learning, REST-type services, and other similarly transactional work), but for others you still need to build close to the population centers you are serving because of latency. The perfect location for a data center is one where land is reasonably inexpensive, power is reasonably cheap, and you are still near large population centers. It's not easy to find ideal locations, and with the DC boom resulting from COVID and now machine learning it has become much more difficult.
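
To make the latency point concrete, here's a best-case speed-of-light-in-fibre estimate (it ignores routing, queuing and serialization, so real round trips are worse; the distances are just examples I picked):

    # Best-case round-trip time over fibre for a given one-way distance.
    C_FIBRE_KM_PER_S = 200_000     # light in fibre travels at roughly 2/3 of c
    for km in (50, 500, 2500):     # metro, regional, cross-country (illustrative)
        rtt_ms = 2 * km / C_FIBRE_KM_PER_S * 1000
        print(f"{km:>5} km away -> ~{rtt_ms:.1f} ms RTT, before any real-world overhead")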
