
Air to Liquid Migration Guide: How to Plan Your First Liquid Cooling Deployment


As AI reshapes data center power and thermal profiles, one thing is clear: air cooling alone can’t keep up. Rack densities are surging past 30kW, often reaching 100kW or more, and facility operators must face a choice: continue retrofitting an aging system or switch to liquid cooling. But for operators facing their first liquid cooling deployment, this shift prompts new questions.

This guide walks you through how to migrate from air to liquid cooling, from the rationale to architecture, helping data center leaders confidently transition from pilot to production.

Step 1: Why It’s Time to Migrate

Air cooling has served data centers well for decades, thriving in an era of stable rack and power densities. That stability enabled the technology to mature and standardize across the industry, with incremental improvements in airflow, containment, and chiller efficiency providing sufficient cooling—without the need to rethink core infrastructure.

With the acceleration of AI adoption, that era is quickly coming to an end.

Once limited to high-performance computing (HPC) and hyperscalers, liquid cooling is now on the roadmap for banks, colocation facilities, and enterprise data centers alike. Grand View Research projects the liquid cooling market will reach $17.77 billion by 2030, with direct liquid cooling holding more than 68% of the market in 2024. This shift is driven not just by performance, but by efficiency and long-term cost control.

But this shift isn’t surprising. Water has always been the superior heat transfer fluid—technically, that hasn’t changed. What has changed is how the industry views barriers to liquid cooling adoption, thanks to the urgency of AI. Today, the explosive growth of AI workloads has reframed those concerns as calculated trade-offs in the race for greater efficiency, density, and performance at scale. For many, the question is no longer if they’ll make the switch, but when.

Step 2: Evaluate Liquid Cooling Technologies

Once you recognize it’s time to migrate, the next step is to evaluate liquid cooling technologies and align them with your workload requirements. Each data center is different. Enterprise deployments may require a hybrid approach, while hyperscalers may benefit from direct-to-die, facility-level cooling. Start by surveying scalable, commercially available options.

Direct-to-Chip (D2C) Cooling

Direct-to-Chip (D2C) cooling, also referred to as DtC cooling, is a method where fluid is routed directly to the primary heat-generating components of the server, typically processors like CPUs, GPUs, ASICs, or FPGAs. Using a heat transfer fluid such as a water-based mixture like PG25 (propylene glycol), direct liquid cooling systems remove heat directly from the processor surface via cold plates or cooling modules.

Cold plates are the most common technology behind D2C systems, but there are multiple ways to architect them and route fluid to the processor lid. Each design comes with its own set of trade-offs in terms of performance, integration complexity, and scalability.

Microchannel Cooling

Microchannel cooling is a direct-to-chip cooling technique that uses parallel, internal fluid channels to spread heat uniformly across the surface. These channels maximize contact between the coolant and the heated surface, increasing the effective cooling area and enhancing thermal performance. However, narrower channel designs can lead to higher pressure drops, which may limit cooling efficiency—particularly in high-power, high-density environments.

Microconvective Cooling®

Microconvective cooling uses perpendicular fluid jets aimed directly at processor hotspots, achieving high convective heat transfer coefficients with precision cooling. By channeling coolant only where it’s most needed, this method eliminates the need for heat spreading or enhanced surfaces—and in some configurations, it can even remove all thermal interface materials. The result is improved heat transfer efficiency and the ability to support chips dissipating 4kW+ in a single socket.

Single-Phase Immersion Cooling

Single-phase immersion cooling submerges entire servers in a non-conductive dielectric fluid to dissipate heat from all components. It offers better thermal efficiency than air and suits lower-density compute loads for applications like blockchain and telecom. However, it struggles to manage high-power chipsets efficiently: the dielectric fluid’s high viscosity increases pumping requirements and reduces overall thermal performance.

Two-Phase Immersion Cooling

Two-phase liquid cooling removes heat by boiling a dielectric fluid at the heat source and condensing it in a closed-loop cycle, leveraging the fluid’s latent heat for efficient thermal management. However, despite its superior cooling performance over single-phase immersion systems, its widespread adoption is hindered by environmental and regulatory concerns surrounding PFAS-based fluids.

Matching Cooling to Density

A key step in evaluating liquid cooling is assessing workload size and rack power density. As a general rule, workloads exceeding certain kW-per-rack thresholds benefit significantly from liquid cooling:
  • <20 kW/rack: Air cooling alone is typically sufficient.
  • 20–50 kW/rack: Liquid-assisted air cooling becomes necessary and more efficient.
  • 150 kW+/rack: Advanced liquid cooling with rack-level integration is essential.
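The thresholds above can be sketched as a simple rule-of-thumb lookup. This is an illustrative sketch only: the 50–150 kW band (not listed above) is assumed here to call for direct-to-chip liquid cooling, and real selection also depends on facility water, floor space, and workload profile.

```python
def suggest_cooling(rack_kw: float) -> str:
    """Rule-of-thumb cooling recommendation by rack power density (kW/rack).

    Bands follow the rough thresholds discussed above; the 50-150 kW
    band is an assumed interpolation, not a published threshold.
    """
    if rack_kw < 20:
        return "air cooling"
    elif rack_kw < 50:
        return "liquid-assisted air cooling"
    elif rack_kw < 150:
        return "direct-to-chip liquid cooling"
    else:
        return "rack-integrated advanced liquid cooling"

print(suggest_cooling(15))   # air cooling
print(suggest_cooling(40))   # liquid-assisted air cooling
print(suggest_cooling(200))  # rack-integrated advanced liquid cooling
```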

Figure: Uptime Institute cooling capacity chart.

Rahkonen, T. (2025, February 4). AI and cooling: methods and capacities. Uptime Intelligence. Retrieved from https://intelligence.uptimeinstitute.com/resource/ai-and-cooling-methods-and-capacities

Selecting the right technology ensures operators can maximize compute per floor tile while extending server life and reducing operational expenses (OpEx), a critical balance given the capital expense (CapEx) of high-performance AI servers.

Step 3: Understand Deployment Options – Start Small

A common misconception is that liquid cooling requires an all-or-nothing commitment. In reality, the initial entry point for many operators isn’t as daunting as you might think. Today’s cooling solutions are modular, scalable, and hybrid-friendly, giving you flexibility and a lower-risk path to liquid adoption.

Liquid-Assisted Air Cooling

Also known as hybrid cooling, liquid-assisted air cooling (LAAC) is one of the most accessible entry points. These closed-loop systems are fully self-contained within the server chassis, combining a pump, cold plate, radiator, and tubing without requiring any external plumbing or facility modifications.

This plug-and-play solution delivers the core advantages of liquid cooling—such as improved power efficiency and higher compute density—while maintaining the simplicity of an air-cooled setup. It enables faster deployment, reduces upfront investment, and maximizes performance per square foot, making it an ideal entry point for enterprises, colocation facilities, and edge environments alike.

Liquid Cold Plates and Coolant Distribution Units (CDUs)

For cooling high-performance processors, typically GPU clusters, the next approach is cold plate liquid cooling combined with either a coolant distribution unit (CDU) or a rear door heat exchanger. Cold plate systems, especially when paired with CDUs, offer one of the most adaptable and scalable architectures for liquid cooling deployments. These solutions can be seamlessly integrated within the rack using plates, manifolds, and quick disconnects.

At the facility level, CDUs enable customers to achieve ultra-high rack densities—some reaching up to 300kW and beyond—by utilizing warm inlet coolants (up to 60°C), which helps maximize processor performance and efficiency.
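The flow a CDU must deliver for a given rack follows from the basic heat balance Q = ṁ·cp·ΔT. The sketch below sizes coolant flow for a rack; the cp and density defaults are illustrative values for a PG25-class water/glycol mixture, not vendor specifications.

```python
def required_flow_lpm(heat_kw: float, delta_t_k: float,
                      cp_j_per_kg_k: float = 3900.0,
                      density_kg_per_m3: float = 1030.0) -> float:
    """Coolant flow (L/min) needed to remove heat_kw at a given loop delta-T.

    Uses Q = m_dot * cp * delta_T. Defaults are rough figures for a
    PG25-class coolant; substitute your fluid's actual properties.
    """
    m_dot = heat_kw * 1000.0 / (cp_j_per_kg_k * delta_t_k)  # mass flow, kg/s
    return m_dot / density_kg_per_m3 * 60_000.0             # volumetric, L/min

# A 300 kW rack with a 10 K coolant temperature rise needs roughly 450 L/min.
print(round(required_flow_lpm(300, 10)))
```

Running the numbers like this early helps confirm that manifold, quick-disconnect, and CDU pump capacities are matched to the target rack density.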

Coolant distribution units come in several configurations:

  • In-rack CDUs are mounted within the IT rack, typically at the bottom, top, or side of the rack enclosure. Ideal for ultra-high-density environments with limited floor space, where localized cooling is essential.
  • In-row CDUs are positioned adjacent to IT racks in the same aisle, typically housed in their own dedicated enclosures. Suitable for edge data centers or mid-sized deployments where rack densities vary but localized control is still desired.

Direct-to-Die Cooling Modules

Pushing thermal boundaries even further, direct-to-die cooling eliminates the traditional thermal interface layer—such as copper plates and thermal paste—by bonding or sealing the cooling solution directly to the processor dies. This direct contact significantly reduces thermal resistance, achieving values as low as 0.01 K/W and enabling exceptional thermal performance. JetCool’s SmartLid™ cooling module utilizes this approach, combining it with microconvective cooling techniques to precisely target hot spots, improving CPU efficiency by up to 10%.

This technology can be further advanced by embedding the cooling architecture directly into the chip substrate. These integrated semiconductor solutions represent a critical leap forward, enabling compatibility with increasingly diverse architectures—including multi-die and vertically stacked designs.
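To see what a 0.01 K/W thermal resistance buys you, a first-order estimate is T_die ≈ T_coolant + R_th · P. The sketch below uses that relation with illustrative inputs; it ignores the caloric temperature rise of the coolant along the cold plate and is not a substitute for a full thermal model.

```python
def die_temp_c(coolant_in_c: float, power_w: float,
               r_th_k_per_w: float = 0.01) -> float:
    """First-order die temperature estimate: T_die = T_coolant + R_th * P.

    R_th = 0.01 K/W matches the direct-to-die figure cited above.
    Illustrative only: neglects coolant heating along the loop.
    """
    return coolant_in_c + r_th_k_per_w * power_w

# A 1,500 W device at 0.01 K/W with 45 degC coolant sits near 60 degC.
print(die_temp_c(45, 1500))  # 60.0
```

This is why low thermal resistance matters: even with warm facility water, the die stays well inside typical silicon limits at kilowatt-class power.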

Vertically Integrated Rack Solutions

For operators seeking speed in turnkey deployment, vertically integrated rack systems offer the most straightforward path forward. These deliver ready-to-ship, factory-integrated, validated cooling, power, and monitoring. They’re invaluable when scaling AI infrastructure across sites, eliminating field integration risks.

Cost Considerations

Liquid cooling isn’t just a technical upgrade, it’s a financial one. Choosing the right cooling strategy depends on your deployment goals and growth trajectory.

Deployment Type | Cost / ROI | Notes
Liquid-Assisted Air Cooling | Low CapEx, immediate time-to-value | Excellent for retrofit; minimal disruption
Cold Plates and CDUs | Moderate CapEx, faster TCO recovery | Ideal for greenfield deployments, high-density racks, and ESG initiatives
Direct-to-Die Cooling | Moderate CapEx, longer development time | Best suited for deployments using processors exceeding 5kW in a single socket
Vertically Integrated Solutions | Moderate CapEx, fast ROI | Suited for large-scale, purpose-built data centers; single-vendor integrated systems simplify service and warranty

Step 4: Choosing Your Liquid Cooling Partner

Before launching pilots or beginning phased rollouts, one of the most pivotal decisions is partnership. Liquid cooling is both a hardware upgrade and a fundamental shift in how your data center operates. Most customers need support navigating the move from pilot to production, and confidence that their partner can architect solutions that fit their needs today while preparing them for the future. And with power levels skyrocketing, it’s important to work with a partner that provides an end-to-end solution, from power to cooling, along with supply chain peace of mind.

Liquid cooling often blurs the traditional boundaries between IT and facilities in the data center. Components like pumps, manifolds, and coolant loops don’t fit neatly into legacy cooling models. The article Hold the line: liquid cooling’s division of labor (June 2025) notes that “the conventional line of demarcation that divides the responsibilities of facilities and IT teams does not easily map to DLC equipment.” Without the right expertise, this ambiguity can stall deployments, introduce unnecessary risk, and undermine long-term operational sustainability.

That’s why it’s important to find a partner who can:

  • Own the full lifecycle—from design and deployment to long-term support and optimization.
  • Bridge organizational silos, ensuring seamless coordination between IT and facilities teams.
  • Architect tailored solutions for unique workloads and site constraints.
  • Deliver global consistency, supporting deployments across regions with unified standards and service.
  • Ensure operational sustainability, with proactive support and clear accountability.

In short, the right partner doesn’t just supply technology—they unify teams, accelerate timelines, and safeguard your investment.

Step 5: Plan Your Deployment in Phases

After choosing the right partner, it’s time to execute. A phased rollout helps reduce risk, build confidence, and align everyone involved.

Start with a pilot. Define clear success metrics like PUE/WUE targets, chip temperatures, and delta-Ts. Use a representative workload to test the cooling setup in real conditions. Validate cold plate and CDU integration and monitor telemetry and coolant data to catch issues early. Make sure the pilot has a clear path to production—avoid getting stuck in “pilot purgatory.”
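The pilot success metrics above can be boiled down to a simple pass/fail check on telemetry. The sketch below is a minimal illustration: the PUE and delta-T targets are placeholder values you would replace with your own pilot criteria, not industry mandates.

```python
def pilot_health(it_kw: float, facility_kw: float,
                 supply_c: float, return_c: float,
                 pue_target: float = 1.2, min_delta_t: float = 8.0) -> dict:
    """Score a pilot rack against simple success metrics.

    PUE = total facility power / IT power; delta-T = return - supply
    coolant temperature. Default targets are illustrative placeholders.
    """
    pue = facility_kw / it_kw
    delta_t = return_c - supply_c
    return {
        "pue": round(pue, 2),
        "delta_t": delta_t,
        "pass": pue <= pue_target and delta_t >= min_delta_t,
    }

# An 80 kW IT load drawing 92 kW total, with a 40->50 degC coolant loop.
print(pilot_health(it_kw=80, facility_kw=92, supply_c=40, return_c=50))
```

Automating checks like this against pilot telemetry makes the exit criteria explicit, which is exactly what keeps a pilot from drifting into “pilot purgatory.”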

During deployment, collaboration is key. Liquid cooling often blurs the lines between IT and facilities. Clarify responsibilities to prevent delays. A strong partner provides not just the tech, but also support, warranties, and guidance to keep teams aligned.

Scale with consistency. Standardize rack configurations, SOPs, and telemetry systems to replicate success across sites. Your partner should be able to support you from pilot racks to full-scale deployments, locally and globally.

Maintain for the long haul. Build routine checks into operations—monitor coolant quality (or work with a partner that does this for you), pump performance, and chip temps. Schedule filter changes and system flushes. With the right partner, you’ll boost uptime, extend server life, and reduce OpEx while meeting the demands of AI and high-performance computing.

Conclusion: It’s time to plan your path

Every data center has a different starting point, but the endgame is the same: scalable, sustainable cooling that meets modern demands.

Whether you’re testing liquid cooling in a pilot or preparing to scale across your campus, the best way to get started is to talk with a partner who understands the full lifecycle of deployment.

Ready to start your transition to liquid cooling?

Contact JetCool to discuss your workloads, facility constraints, and deployment goals.
