Data centers consume a lot of energy. As of 2017, data centers accounted for roughly 3% of the world's electricity usage, a whopping 416 terawatt-hours worldwide. With recent increases in remote work and edge computing, this figure may well grow in the years to come. But what exactly contributes to all of this energy usage? How can we make data centers more energy efficient? The answers to these questions and more can be uncovered by understanding the impact of cooling energy on PUE.
To begin, the energy usage of a data center is commonly quantified by the power usage effectiveness, or PUE. This metric quickly communicates how much energy is used when operating a data center, beyond just the power provided to operate the IT equipment. Because each data center is unique, PUE can take on many forms, and it can be difficult to compare apples to apples across varied facilities.
Therefore, to provide a baseline, this post will use the PUE definition provided by NREL for its Energy Systems Integration Facility (ESIF) High Performance Computing (HPC) Data Center, which sums the five power terms reported on its dashboard:

PUE = (IT + Cooling + Pumps + HVAC + Lights/Plugs) / IT
To understand this metric a little better, NREL provides a nifty dashboard displaying its real-time energy usage and how it contributes to PUE:
Figure 1: Example snapshot of the energy usage dashboard provided by NREL for its High Performance Computing Data Center.
This is a lot to take in at first, but there are a few major takeaways:
PUE is essentially an efficiency metric. The “IT” term represents the “output”, or the power that contributes to the server functions. As the IT term appears in both the numerator and denominator, the best-case scenario is a PUE of 1.0, and increasing above 1.0 requires more and more energy to operate the data center.
One of the five major terms reported is labelled as a dedicated “Cooling” contribution. This represents the power required by fans and any refrigeration cycles utilized in getting heat out of the working fluid providing cooling for the high-power processing units.
Even though “Pumps” and “HVAC” are split out separately, these are also directly related to the cooling of the data center.
Pumps represent the power requirement of the fluid movers that circulate the working fluid providing cooling to the high-power processing units.
HVAC represents the blowers and chilled air systems for managing the heat of the auxiliary electronics, such as DIMMs, power supplies, voltage regulators, and other lower power components.
The only term remaining is the “Lights/Plugs”, which has no cooling component. However, as seen in the dashboard, this is typically a very small power usage contribution.
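The five dashboard terms above can be turned into a PUE calculation directly. The sketch below uses hypothetical component power values in kilowatts (not NREL's actual figures) to show how the ratio is formed:

```python
def pue(it, cooling, pumps, hvac, lights_plugs):
    """Power usage effectiveness: total facility power divided by IT power.

    All arguments are average power draws in the same unit (e.g. kW).
    """
    total = it + cooling + pumps + hvac + lights_plugs
    return total / it

# Hypothetical facility drawing 1,000 kW of IT power:
print(pue(it=1000, cooling=120, pumps=30, hvac=40, lights_plugs=10))  # 1.2
```

Because the IT term appears in both the numerator and the denominator, shrinking any of the other four terms pushes the result toward the ideal value of 1.0.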
Therefore, the energy dedicated to cooling is the major portion beyond the energy used to power the IT equipment. With PUEs typically in the range of 1.1-2.0, energy for cooling can make up roughly 10%-50% of the 416 terawatt-hours consumed worldwide by data centers!
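That 10%-50% range follows from PUE alone: the fraction of total facility energy going to non-IT loads (which, as noted above, is mostly cooling) is (PUE - 1) / PUE. A quick check across the typical range:

```python
def non_it_fraction(pue):
    """Fraction of total facility energy consumed by non-IT loads."""
    return (pue - 1.0) / pue

for p in (1.1, 1.5, 2.0):
    print(f"PUE {p}: {non_it_fraction(p):.0%} of total energy is non-IT")
```

At a PUE of 1.1 roughly 9% of total energy is non-IT, rising to 50% at a PUE of 2.0, which is where the range quoted above comes from.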
PUE is a quick and easy metric for quantifying a data center’s energy usage. Although PUE can be calculated in many different ways, cooling often makes up a large majority of the non-IT equipment energy usage in data centers. In order to keep PUE as close to 1.0 as possible, it is critical to employ highly efficient and low energy usage cooling systems.
Interested in energy efficient cooling technology? Contact JetCool Technologies to learn more about their ultra-low thermal resistance cooling solutions for low PUE data center operation.
“High-Performance Computing Data Center Power Usage Effectiveness.” NREL.gov, http://www.nrel.gov/computational-science/measuring-efficiency-pue.html.
Bullard, Nathaniel. “Energy Efficiency a Hot Problem for Big Tech Data Centers.” Bloomberg.com, Bloomberg, 13 Dec. 2019, http://www.bloomberg.com/opinion/articles/2019-12-13/energy-efficiency-a-hot-problem-for-big-tech-data-centers.