Two Phase Immersion Liquid Cooling at Supercomputing 2019
by Dr. Ian Cutress on November 29, 2019 2:00 PM EST

It would now appear we are saturated with two phase immersion liquid cooling (2PILC) – pun intended. One common element of the annual Supercomputing trade show, as well as the odd system at Computex and Mobile World Congress, is the push from some parts of the industry towards fully immersed systems in order to drive cooling. Last year at SC18 we saw a large number of systems featuring this technology – this year the presence was limited to a few key deployments.
Two Phase Immersion Liquid Cooling (2PILC) involves taking a server with next to no heatsinks and submerging it in a liquid with a low boiling point. These liquids are often engineered organic compounds (so not water, or oil) that make direct contact with the silicon; as the silicon does work it gives off heat, which transfers into the surrounding liquid and causes it to boil. The most common liquids are variants of 3M Novec or Fluorinert, which can have boiling points around 59°C. The resulting gas rises, driving natural convection in the liquid. The vapor then condenses on a cold plate / water pipe and falls back into the system.
These liquids are non-ionic and so do not conduct electricity, and they have a medium viscosity in order to facilitate effective natural convection. Some deployments add forced convection, which helps with liquid transport and supports higher TDPs. The idea is that with a server or PC immersed in this material, everything can be kept at a reasonable temperature, and it also enables super dense designs.
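For a sense of the scale of the phase change involved, here is a minimal back-of-envelope sketch (in Python) of how much fluid a given heat load boils off per second, assuming all of the heat goes into vaporization. The latent heat figure of roughly 100 kJ/kg is an assumed round number for a Novec-class fluid rather than a datasheet value.

```python
# Back-of-envelope: how much fluid boils off per second for a given heat load,
# assuming all of the heat goes into vaporizing the liquid (no sensible heating,
# no losses). The latent heat below is an assumed round figure for a
# Novec-class engineered fluid, not a datasheet number.

LATENT_HEAT_J_PER_KG = 100_000  # ~100 kJ/kg, assumed illustrative value

def boil_off_rate_kg_per_s(heat_load_w: float) -> float:
    """Mass of liquid vaporized per second to absorb heat_load_w watts."""
    return heat_load_w / LATENT_HEAT_J_PER_KG

if __name__ == "__main__":
    # A single CPU, a dense server, and a full self-contained tank
    for load_w in (250, 1_000, 60_000):
        rate = boil_off_rate_kg_per_s(load_w)
        print(f"{load_w:>6} W -> {rate * 1000:.1f} g/s of vapor to condense")
```

Whatever boils off has to be condensed and returned at the same rate, which is why the cold plate or water loop at the top of the tank is central to the design.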
OTTO automated system with super dense racking
We reported on TMGcore’s OTTO systems, which use this 2PILC technology to create data center units delivering up to 60 kilowatts in 16 square feet – all the customer needs to supply is power, water, and a network connection. Those systems also have automated hardware pickup and removal, should maintenance be required. Companies like TMGcore cite increased longevity of the hardware as a benefit of 2PILC, thanks to the controlled environment.
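For a rough comparison, the sketch below converts the quoted 60 kW in 16 square feet into power per unit area and sets it against a conventional air-cooled rack; the air-cooled figures (about 10 kW over roughly 8 square feet of footprint) are illustrative assumptions, not numbers from TMGcore.

```python
# Rough power-density comparison. The OTTO figures are from the article; the
# air-cooled rack numbers are illustrative assumptions (about 10 kW over an
# ~8 sq ft footprint), not measured values.

def power_density_kw_per_sqft(power_kw: float, area_sqft: float) -> float:
    return power_kw / area_sqft

otto = power_density_kw_per_sqft(60, 16)        # self-contained 2PILC unit
air_cooled = power_density_kw_per_sqft(10, 8)   # assumed typical air-cooled rack

print(f"OTTO-style 2PILC unit:   {otto:.2f} kW/sq ft")
print(f"Assumed air-cooled rack: {air_cooled:.2f} kW/sq ft")
print(f"Density ratio:           {otto / air_cooled:.1f}x")
```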
One of the key directions of this technology last year was for crypto systems, or super-dense co-processors. We saw some of that again at SC19 this year, but not nearly as much. We also didn’t see any 2PILC servers aimed at 5G compute at the edge, which was also a common theme last year. All the 2PILC companies on the show floor this year were geared towards self-contained, easy-to-install data center cubes that require little maintenance. This is perhaps unsurprising, given that supporting 2PILC without a dedicated self-contained unit is quite difficult unless the data center is designed for it from the ground up.
One thing we did see was component companies, such as VRM vendors, validating their hardware for 2PILC environments.
Typically a data center will discuss its energy efficiency in terms of PUE, or Power Usage Effectiveness. A PUE of 1.50, for example, means that for every 1.5 megawatts the facility draws, 1 megawatt actually reaches the IT hardware; the rest goes to cooling, power conversion, and other overhead. Standard air-cooled data centers can have a PUE of 1.3-1.5, while purpose-built air-cooled data centers can go as low as 1.07. Liquid-cooled data centers are also around this 1.05-1.10 PUE, depending on the construction. The self-contained 2PILC units we saw at Supercomputing this year were advertising PUE values of 1.028, which is the lowest I’ve ever seen. That being said, given the technology behind them, I wouldn’t be surprised if a 2PILC rack cost 10x as much as a standard air-cooled rack.
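To put those PUE figures into perspective, here is a quick sketch that works out the overhead power for a given IT load at each of the values mentioned above; the 10 MW IT load is an arbitrary assumption for the example.

```python
# PUE = total facility power / IT equipment power.
# The 10 MW IT load is an arbitrary assumption for illustration; the PUE values
# are the ones quoted in the article.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power spent on cooling, conversion, etc. for a given IT load and PUE."""
    return it_load_kw * (pue - 1.0)

IT_LOAD_KW = 10_000  # assumed 10 MW of IT equipment

for label, pue in [("typical air-cooled", 1.40),
                   ("purpose-built air-cooled", 1.07),
                   ("liquid-cooled", 1.05),
                   ("advertised 2PILC", 1.028)]:
    print(f"{label:<26} PUE {pue:.3f} -> "
          f"{overhead_kw(IT_LOAD_KW, pue):,.0f} kW overhead")
```

At that assumed scale, the gap between a 1.07 and a 1.028 PUE works out to a few hundred kilowatts of continuous overhead, which gives some idea of where the premium hardware cost would need to be recouped.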
35 Comments
Beaver M. - Friday, November 29, 2019
That bubbling would drive me crazy.

valinor89 - Friday, November 29, 2019
But no deafening fan noise. I wonder if this will allow for quieter operation in dense server rooms. Server fans are LOUD.

firewrath9 - Friday, November 29, 2019
They still need to cool the heated vapor, so they would need giant condensers, which would require big fans.

The Chill Blueberry - Friday, November 29, 2019
Yes, but they can use bigger, slower fans, rather than tiny ear-raping fans to fit in the racks.

qlum - Saturday, November 30, 2019
Except bigger fans take up more space, and noise really is not a big issue in a server environment, so loud fans generally make a lot of sense here.

PeachNCream - Saturday, November 30, 2019
I spend enough time working around rack-mounted hardware that I bring my own hearing protection. It does not take a lot of that sort of noise to damage your hearing, and you never can get that back once it's gone. Things that can reduce server fan noise would be helpful, and if the cooling itself is more efficient in terms of power and removal of waste heat, it's good in many ways. Now if we could just develop hardware that doesn't produce as much heat to begin with, that'd be even better.

mode_13h - Tuesday, December 3, 2019
More efficient HW will just make it affordable for datacenters to grow even larger. I'm not saying not to care about energy efficiency, but demand for compute is forecast to significantly outstrip any energy efficiency improvements on the horizon.
rahvin - Saturday, November 30, 2019
You'd immerse the whole rack (in fact the whole row of racks) and cycle the vapor to a chiller on the roof. It would actually be quite a bit more efficient than the hot/cold aisles of the current design if it wasn't for all the complications the system would bring.

rbanffy - Monday, December 2, 2019
Noise may not bother humans who spend most of their time outside the datacenter, but the vibration affects the machines the fans are attached to. You know - screaming at hard disks increases latencies.

mode_13h - Tuesday, December 3, 2019
Fan noise is the sound of wasted energy. By definition, a very low-PUE setup cannot be particularly loud.