In recent years, rack power densities in data centers have grown substantially. The days of the low-power data center are over, as chipmakers launch powerful new chips with thermal design power (TDP) ratings that exceed 500W. Among other things, this means that data center cooling solutions must evolve to meet the rising rack density requirements of high-performance applications. Let’s explore the implications of higher power density and what this means for data centers in 2022.
What is power density?
First, let’s get the basics out of the way. What exactly is power density? At the rack level, power density refers to the power draw of a single, fully populated server rack, measured in kilowatts. Power density is rising because operators want to fit more, higher-power chips into the same amount of space. The more power per rack, the larger the computing workload that can be accommodated in less floor space.
What’s the average power density in data centers today?
According to the 10th Annual Uptime Institute Global Survey of IT and Data Center Managers, average power densities a decade ago sat around 4-5 kW per rack. Today, average power densities hover around 8-10 kW per rack, with two-thirds of data centers in the US experiencing higher peak demands of 16-20 kW per rack. As rack power requirements exceed 20 kW, operators will require more efficient cooling methods regardless of facility size. Air cooling is no longer a feasible option in such power-dense environments.
Should we be surprised that rack density is rising? Bill Kleyman, AFCOM Data Center World Program Chair, doesn’t think so. “Don’t be surprised as rack density continues to grow,” said Kleyman. “We’ve seen it grow substantially over the last couple of years, and the average grew an entire kilowatt from last year. That’s not a little bit of growth – it’s an entire kilowatt inside of a rack. That’s a telltale sign that people are putting more gear in each rack. You’re using space wisely, and you’re making your racks work for you. I think that’s a good thing.”
Why has adoption been so sluggish despite the need for liquid cooling and the trends toward more powerful equipment?
Risk is always a factor with any step change. A move to liquid cooling is a big step, and it often requires changes to data center design, facility operations, and IT equipment. According to an Informa Engage survey, “Data Center Trends: Use of Liquid Cooling,” produced in association with Dell Technologies, the biggest deterrents to liquid cooling adoption in 2020 were (1) the cost of retrofitting a facility (55% of responses) and (2) concern over water in the data center (39% of responses). These concerns can be daunting for even the most forward-thinking data center operators.
Some operators are also put off by the perceived complexity of installing and maintaining liquid cooling solutions. Many still believe liquid cooling is (a) expensive and (b) complicated. While some solutions do cost more than air cooling, the complexity argument no longer always holds true: innovative liquid cooling systems are now available that eliminate complex infrastructure requirements such as chillers and cooling towers.
As data centers become more powerful, the need for liquid cooling becomes more apparent. Liquid cooling is not a one-size-fits-all solution, and there are many factors to consider before making the switch. However, liquid cooling may be the best answer for data centers struggling with increasing power densities.
The key drivers to liquid cooling adoption
Despite the perceived risks, liquid cooling is inevitable because of the increasing power demands of data center equipment. The rise of AI, IoT, and 5G will only accelerate this trend as more powerful processors are needed to handle the increased workloads. Key drivers for liquid cooling adoption include reducing downtime, extending device lifetime, and minimizing operating costs. The most critical driver of all, however, is that liquid cooling “future-proofs” data center operations: it allows a data center to respond nimbly to changing business demands, such as chipmakers dictating higher-TDP chips and lower case temperatures. Adopting liquid cooling will thereby keep operators competitive in a highly dynamic technology landscape for years to come. Each of these key drivers is discussed below.
Reduce downtime
One of the most important benefits of liquid cooling is reduced downtime. Downtime is often caused by overheating, which can damage or even destroy equipment. When equipment is adequately cooled, it runs more efficiently and lasts longer. According to Gartner, downtime costs an average of $5,600 per minute, which extrapolates to over $300k per hour.
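The per-hour figure is simple scaling of Gartner’s per-minute average, as this quick sketch shows:

```python
# Rough downtime-cost arithmetic based on Gartner's cited average.
COST_PER_MINUTE = 5_600  # USD per minute of downtime (Gartner average)

cost_per_hour = COST_PER_MINUTE * 60
print(f"${cost_per_hour:,} per hour")  # $336,000 per hour, i.e. over $300k
```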
Extend device lifetime
Liquid cooling also extends the lifetime of devices. The semiconductors in servers are highly sensitive to temperature, and their operating lifetime is often limited by the maximum chip temperature reached during operation. Just a 10°C decrease in average operating temperature can more than double a semiconductor’s lifetime. Incorporating highly efficient liquid cooling may therefore allow devices to run longer before reaching their end of life.
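The “double lifetime per 10°C” rule of thumb follows from Arrhenius-style reliability models. A minimal sketch, assuming a typical activation energy of 0.7 eV (the exact value varies by failure mechanism, so both the activation energy and the example temperatures below are assumptions):

```python
import math

BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def lifetime_factor(t_hot_c: float, t_cool_c: float, ea_ev: float = 0.7) -> float:
    """Arrhenius acceleration factor: how many times longer a device
    lasts at t_cool_c (°C) than at t_hot_c (°C), given activation
    energy ea_ev (eV)."""
    t_hot = t_hot_c + 273.15   # convert to kelvin
    t_cool = t_cool_c + 273.15
    return math.exp(ea_ev / BOLTZMANN_EV * (1 / t_cool - 1 / t_hot))

# Dropping average junction temperature from 85°C to 75°C:
print(round(lifetime_factor(85, 75), 2))  # ~1.9x: roughly double
```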
Lower operating expenses
At a certain kW per rack, air cooling becomes inefficient, requiring a substantial amount of electricity to operate. In contrast, certain liquid cooling options can achieve energy savings and reduce operating expenses by raising coolant temperatures above 45°C. Some competitive liquid cooling solutions also eliminate power- and water-intensive infrastructure like chillers and evaporative coolers, providing both upfront capital and ongoing operational savings.
More efficient and sustainable
Liquid cooling has become more popular as data centers strive to go green since it provides efficient and sustainable cooling. With liquid cooling, free cooling (e.g. thermosiphons) can be used more often, decreasing the data center’s carbon footprint. How efficient can data centers become when they implement liquid cooling? Data centers that use free cooling in combination with direct-to-chip liquid cooling can achieve a PUE of 1.02.
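For context, PUE (Power Usage Effectiveness) is total facility power divided by IT power, so a PUE of 1.02 means cooling and other overhead draw only 2% as much power as the IT load itself. A quick sketch with illustrative numbers (the 1,000 kW IT load is an assumption, not a figure from the text):

```python
def pue(it_power_kw: float, overhead_power_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power."""
    return (it_power_kw + overhead_power_kw) / it_power_kw

# A 1,000 kW IT load with just 20 kW of cooling/overhead:
print(pue(1000, 20))  # 1.02
```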
Respond to changing business needs
As data centers strive to be more agile and efficient, liquid cooling will become increasingly popular. By deploying liquid cooling today, data center operators can avoid a total redesign of their facility down the road. This is an important consideration as data center designs have a lifespan of up to ten years. A design that does not account for future increases in power density will eventually reach a point where it can no longer support IT equipment.
By understanding the trends and key drivers, data center operators can decide when to deploy liquid cooling in their facility. Let’s explore an innovative liquid cooling solution that is helping data centers keep up with the ever-changing demand.
Direct-to-chip liquid cooling: serving the highest power devices
JetCool’s microconvective cooling system is an example of an innovative liquid cooling solution for applications with dense compute profiles, such as those found in HPC and AI facilities, which often have heat loads greater than 30 kW per rack. This cooling solution gives enterprises more power per chip, delivering exceptional device performance to customers while using fewer chips.
Microconvective liquid cooling uses arrays of small fluid jets within compact cooling modules, transforming cooling performance at the chip level. This direct-to-chip cooling solution efficiently removes heat from the industry’s highest power devices.
As data center operators strive to be more efficient and sustainable, liquid cooling will become increasingly popular. To learn more about our innovative liquid cooling technologies, check out our data center and high-performance computing case studies and learn how JetCool can prepare your data center for the inevitable rise in power densities.