Direct-to-Chip (D2C) Cooling

As AI workloads push server rack densities beyond the reach of conventional air cooling, direct-to-chip (D2C) liquid cooling has emerged as the defining thermal management strategy for modern data centers. According to a 2025 survey of data center operators conducted by Uptime Institute, 22% of organizations used direct liquid cooling (DLC), while 75% used perimeter air cooling.

Meanwhile, the 2024 edition of the Uptime survey found the same number, 22%, using direct liquid cooling. However, 61% indicated they do not currently use DLC but would consider it in the future. 

What does that mean? There is a significant opportunity for direct liquid cooling to gain favor in the world of data center thermal management. At the very least, operators are open to the possibility, particularly those overseeing ever-rising rack densities powering our increasingly digital world.

Further, the questions being asked about data center liquid cooling are changing. As participants in a June 2025 ASHRAE podcast on liquid cooling discussed, the conversation is shifting from the experimental early phase (in other words, does it work?) to questions about resiliency and reliability.

On this page, we will cover what direct-to-chip cooling is, why it matters, how it compares with competing approaches, and recent developments in the D2C cooling space. 

Specifically, we will cover:

  1. What is Direct-to-Chip Cooling?
  2. Direct-to-Chip Cooling Benefits
  3. Direct-to-Chip Cooling vs. Immersion vs. Air
  4. Future of Direct-to-Chip Cooling
  5. Recent D2C Advancements 

What is Direct-to-Chip Cooling?

Direct-to-chip (D2C) cooling is a form of liquid cooling that delivers coolant directly to the most thermally demanding components in a server — CPUs, GPUs, and AI accelerators — via cold plates mounted on the chips themselves. Rather than relying on air to carry heat away from densely packed components, D2C systems route a coolant fluid through a closed loop, extracting heat at the source before it can accumulate in the surrounding environment.

A standard direct-to-chip system consists of the following components (see the sketch after this list):

  • Cold plates (typically copper or copper-aluminum alloy) mounted directly on the chip
  • Thermal interface materials (TIM) between the chip surface and cold plate
  • Manifolds, quick-connects, and tubing to route coolant through the loop
  • A circulation pump
  • A Coolant Distribution Unit (CDU) for heat rejection and flow control
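
For readers who think in terms of data structures, here is a minimal sketch of how those components relate in a single loop. The class names, fields, and the 300 kW CDU rating are illustrative assumptions, not any vendor's schema:

```python
from dataclasses import dataclass, field

# Illustrative model of the D2C loop described above. Names, fields, and the
# CDU rating are explanatory assumptions, not a vendor schema or OCP spec.
@dataclass
class ColdPlate:
    chip: str                  # e.g., "GPU0"
    material: str = "copper"   # typically copper or a copper-aluminum alloy

@dataclass
class CoolantLoop:
    coolant: str = "water-glycol"    # single-phase working fluid
    cold_plates: list[ColdPlate] = field(default_factory=list)
    cdu_capacity_kw: float = 300.0   # hypothetical CDU heat-rejection rating
    # Manifolds, quick-connects, and tubing route the coolant; a circulation
    # pump drives flow; the CDU rejects heat and regulates flow rate.

loop = CoolantLoop(cold_plates=[ColdPlate("CPU0"), ColdPlate("GPU0")])
print(f"{len(loop.cold_plates)} cold plates on a {loop.coolant} loop")
```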

D2C systems come in two primary variants. In single-phase D2C, the coolant — typically a water-glycol heat transfer fluid — remains liquid throughout the loop, the most common configuration in enterprise data centers. In two-phase D2C, the coolant transitions from liquid to vapor inside or near the cold plate, dramatically increasing heat transfer capability for chips with extreme thermal design power (TDP) requirements.

Both configurations are closed-loop: coolant is fully contained and recirculated. The physics are straightforward: water's volumetric heat capacity is approximately 3,200 times that of air, allowing it to absorb and transport vastly more thermal energy per unit of volume, directly at the silicon die.
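
A back-of-the-envelope calculation makes that concrete. Using standard textbook fluid properties, and assuming a 700 W accelerator and a 10 °C coolant temperature rise (both figures chosen for illustration), the required flows look like this:

```python
# Heat balance: Q = m_dot * cp * dT. Property values are standard references;
# the 700 W chip load and 10 C temperature rise are illustrative assumptions.
chip_heat_w = 700.0              # one high-end accelerator under load
delta_t_c = 10.0                 # coolant temperature rise across the cold plate

cp_water, rho_water = 4186.0, 1000.0       # J/(kg*K), kg/m^3
mass_flow_kg_s = chip_heat_w / (cp_water * delta_t_c)
water_l_min = mass_flow_kg_s / rho_water * 1000 * 60
print(f"water: {water_l_min:.1f} L/min")   # ~1.0 L/min

cp_air, rho_air = 1005.0, 1.2              # J/(kg*K), kg/m^3
air_m3_min = chip_heat_w / (cp_air * rho_air * delta_t_c) * 60
print(f"air:   {air_m3_min:.1f} m^3/min")  # ~3.5 m^3/min, thousands of times
                                           # the volume of the water flow
```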

Direct-to-Chip Cooling Benefits

The benefits of direct-to-chip cooling extend well beyond raw thermal performance. D2C enables data centers to meet the demands of modern AI infrastructure while meaningfully reducing energy consumption, improving sustainability metrics, and protecting long-term operational costs.

Higher Rack Density

Air cooling typically maxes out at 20–35 kW per rack. Direct-to-chip cooling extends that ceiling to 60–120+ kW, making it the only viable solution for AI-optimized facilities housing NVIDIA H100/H200, AMD MI300X, or next-generation accelerators with TDPs ranging from 600 to 1,200+ W.
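
To put those ceilings in concrete terms, here is the simple power-budget arithmetic, assuming a representative ~10 kW, 8-GPU AI server (an illustrative figure, not a vendor spec):

```python
# Rack capacity under each cooling ceiling; server power and GPU count are
# representative assumptions, and non-server overhead is ignored for clarity.
server_kw, gpus_per_server = 10, 8
for label, ceiling_kw in [("air cooling", 30), ("direct-to-chip", 120)]:
    servers = ceiling_kw // server_kw
    print(f"{label}: {servers} servers = {servers * gpus_per_server} GPUs per rack")
# air cooling: 3 servers = 24 GPUs per rack
# direct-to-chip: 12 servers = 96 GPUs per rack
```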

Reduced Cooling Energy Consumption

Cooling can account for up to 40% of a data center’s total energy draw. D2C can reduce cooling subsystem energy use by up to 90% by replacing energy-intensive air handlers with efficient CDUs and eliminating the majority of server-level fan power. Facility-wide energy savings of 20–30% are typical.
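
The arithmetic behind the facility-wide figure is simple: total savings equal cooling's share of the facility's draw multiplied by the reduction D2C achieves. A quick sketch using mid-range values from the claims above:

```python
# Facility-wide savings = (cooling share of total draw) x (cooling energy cut).
# Both inputs below are mid-range illustrations of the figures quoted above.
cooling_share = 0.30    # cooling as a fraction of total facility draw
d2c_reduction = 0.80    # fractional cut in cooling-subsystem energy
print(f"{cooling_share * d2c_reduction:.0%} facility-wide savings")  # 24%
```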

Improved Power Usage Effectiveness (PUE)

Air-cooled facilities typically operate with PUEs of 1.5–2.0. D2C-cooled facilities can achieve PUEs of 1.03–1.20 — near the theoretical minimum of 1.0 — representing a meaningful reduction in wasted energy. A wider transition to D2C cooling would inject some life into the PUE trajectory, as average PUE has stagnated over the last 5 years or so.

According to Uptime, which referred to PUE as the "standard metric for facility energy efficiency," average annual PUE ratings improved significantly from 2.5 in 2007 to 1.65 in 2014, but have stagnated in recent years. Since 2020, the average annual PUE figure has fluctuated within the narrow band of 1.56–1.59, according to the Uptime report.
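
Since PUE is simply total facility energy divided by IT equipment energy, shrinking the cooling overhead shows up in the metric directly. A minimal sketch with representative (not measured) loads:

```python
# PUE = total facility energy / IT equipment energy (loads are illustrative).
def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
    return (it_kw + cooling_kw + other_overhead_kw) / it_kw

print(round(pue(1000, 500, 100), 2))  # air-cooled 1 MW floor: PUE 1.6
print(round(pue(1000, 60, 50), 2))    # same floor on D2C:     PUE 1.11
```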

Waste Heat Recovery

Single-phase D2C systems return coolant at 35–50°C, temperatures suitable for free cooling and waste heat recovery applications including district heating and industrial process heating.

Full Chip Performance at Rated TDP

By maintaining stable chip temperatures under sustained load, D2C tackles the challenge of thermal throttling — the automatic performance reduction that occurs when chips overheat — ensuring AI workloads run at full rated capacity.
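
A deliberately simplified toy model shows why stable temperatures matter; real chips use vendor-specific frequency governors, so the threshold and slope below are purely illustrative:

```python
# Toy throttling model: above a temperature limit, the chip sheds clock speed
# (and throughput). The 90 C limit and 5%-per-degree slope are assumptions.
T_LIMIT_C = 90.0

def effective_clock_ghz(base_ghz: float, die_temp_c: float) -> float:
    if die_temp_c <= T_LIMIT_C:
        return base_ghz                         # full rated performance
    overshoot = die_temp_c - T_LIMIT_C
    return max(base_ghz * (1 - 0.05 * overshoot), 0.5 * base_ghz)

print(effective_clock_ghz(2.0, 85))  # 2.0 GHz: D2C holds temps below the limit
print(effective_clock_ghz(2.0, 96))  # 1.4 GHz: an overheating chip throttles
```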

Direct-to-Chip Cooling vs. Immersion vs. Air

Data center operators evaluating cooling strategy typically weigh three primary approaches: traditional air cooling, direct-to-chip liquid cooling, and immersion cooling.

Each has a role in the broader ecosystem, but their performance profiles, infrastructure requirements, and operational trade-offs differ significantly.

Air cooling remains suitable for lower-density, legacy workloads but cannot support the thermal loads imposed by modern GPU clusters.

Immersion cooling offers the highest theoretical performance ceiling and eliminates fans entirely, but requires purpose-built tanks, specialized servers, and significant facility redesign — making it best suited for greenfield deployments.

For most operators upgrading existing facilities, direct-to-chip cooling offers the most practical path forward: dramatically higher performance and efficiency than air, with substantially lower infrastructure upheaval than full immersion. Residual heat from non-chip components (VRMs, DIMMs, drives) still requires supplemental airflow, making D2C a hybrid rather than a fully liquid solution — a manageable trade-off for the vast majority of enterprise data centers.

Future of Direct-to-Chip Cooling

Given the results of the aforementioned surveys, the trajectory for direct-to-chip cooling is clear: adoption is accelerating, rack densities are climbing, and the infrastructure ecosystem is maturing fast.

But the path forward is not without friction. As with the transition from internal-combustion vehicles to battery electric vehicles, the line of progress and adoption will not always move up and to the right; expect it to be choppy and uneven.

First, the tailwinds:

Market Growth

The liquid cooling market for data centers is projected to reach $14+ billion by 2030, with direct-to-chip cooling accounting for a growing share as hyperscale operators standardize on liquid-ready rack designs.

Standards Evolution

The Open Compute Project (OCP) has established specifications for propylene glycol-based heat transfer fluids in D2C cold plate systems — setting corrosion tolerance and fluid purity requirements that serve as de facto industry benchmarks. ASHRAE TC 9.9 continues updating data center thermal guidelines as liquid cooling moves into the mainstream.

Chip TDP Escalation

As AI accelerators approach and exceed 1,500 W TDP, two-phase D2C and advanced cold plate designs will become necessary — driving continued R&D investment across the hardware and fluid supply chain.

Alongside these tailwinds, operators should account for several implementation challenges:

Retrofitting Complexity

Most installed server hardware was not designed for cold plates. Widespread D2C adoption requires new motherboard generations validated for liquid cooling integration.

Standardization Gaps

Quick-connect fittings, cold plate geometries, and manifold designs vary by OEM, creating compatibility challenges in multi-vendor environments.

Fluid Management Requirements

Operators must monitor coolant pH drift, inhibitor depletion, and microbial growth throughout the fluid’s service life.
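
In practice, that means sampling the fluid and checking it against service limits. The sketch below is generic; the parameter names and thresholds are hypothetical placeholders, and operators should use their fluid supplier's actual limits:

```python
# Hypothetical coolant health check. All thresholds are illustrative
# placeholders, not a fluid vendor's published service limits.
LIMITS = {
    "ph": (7.5, 10.0),             # drift outside this band signals degradation
    "inhibitor_ppm": (800, None),  # minimum corrosion-inhibitor concentration
    "cfu_per_ml": (None, 1000),    # ceiling for microbial growth
}

def coolant_alerts(sample: dict) -> list:
    alerts = []
    for key, (lo, hi) in LIMITS.items():
        value = sample[key]
        if lo is not None and value < lo:
            alerts.append(f"{key} low: {value}")
        if hi is not None and value > hi:
            alerts.append(f"{key} high: {value}")
    return alerts

print(coolant_alerts({"ph": 7.1, "inhibitor_ppm": 950, "cfu_per_ml": 200}))
# ['ph low: 7.1'] -> schedule fluid service before corrosion sets in
```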

Workforce Skills Gap

Facilities teams need upskilling in liquid system maintenance, leak detection, and CDU operation.

Recent D2C Advancements (October 2025–April 2026)

Over the last six months, D2C liquid cooling has seen major strides in capacity, efficiency, and industry adoption. Below are a few notable advancements in the single-phase D2C space:

Flex/JetCool at Equinix (Nov. 12, 2025)

Flex and JetCool deployed an OCP rack with Vertically Integrated Liquid Cooling at an Equinix co-innovation lab in Ashburn, Virginia. Using JetCool’s SmartPlate and SmartSense products, they cooled Dell R760 servers with warm coolant (~70 °C).

Key results: 15% IT power savings, 90% reduction in cooling water usage, and 50% less cooling power (chiller and pump) compared to traditional liquid cooling. The system can cool up to 4 kW per socket (with headroom), leveraging a 6U CDU capable of 300 kW. Warm-water operation avoids evaporative chillers.

Why it matters: This field demo shows that single-phase D2C can already support multi-kW accelerators per chip. The large savings in energy and water indicate single-phase solutions are maturing.

HRL Labs Low-Chill Cold Plate (Feb. 24, 2026)

DOE's ARPA-E COOLERCHIPS program funded HRL to develop a novel single-phase cold plate. HRL's "Low-Chill" design uses a 3D-printed manifold to inject coolant uniformly across the chip.

Lab tests showed it removes 40% more heat than conventional cold plates at equal pump power. It achieved ~3 kW cooling on a single 750 mm² die (expected GPU size), supporting heat fluxes up to 400 W/cm². Pumping power dropped to below 1% of rack IT power. 
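
Those figures are internally consistent, as a quick check of the reported numbers shows:

```python
# Consistency check on HRL's reported figures: ~3 kW over a 750 mm^2 die.
die_area_cm2 = 750 / 100            # 750 mm^2 = 7.5 cm^2
print(3000 / die_area_cm2)          # 400.0 W/cm^2, matching the quoted flux
```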

By enabling coolant temperatures up to 70 °C, it allows entirely air-cooled heat rejection.

Why it matters: It extends single-phase cooling to next-gen chips. With uniform micro-channel flow, it overcomes the "hot spot" limits of today's cold plate designs. It suggests racks in the multi-hundred-kilowatt range are feasible with water-based loops and dry, water-free heat rejection, without switching to two-phase. For operators, it offers high density with simpler infrastructure (no expensive refrigerant handling).

Airsys PowerOne (Nov. 17, 2025)

Airsys unveiled its PowerOne platform, which includes a “LiquidRack” single-phase spray-cooling architecture for AI servers.

The system uses a closed-loop liquid spray (no compressor) and can leverage dry coolers for economization. While detailed metrics weren't given, Airsys claims the system achieves a Water Usage Effectiveness (WUE) of zero and industry-leading PUE. The introduction of a dedicated AI-era rack cooler shows single-phase vendors are retooling for 100 kW+ loads.

Industry Trends

Overall, single-phase D2C appears to be becoming “mainstream.”

Cold plates and CDUs are improving (e.g., Nidec's In-Row CDU now supports 2.0 MW across 12 racks, roughly 167 kW per rack). Vendors report open architectures (quick-disconnects, CDU connectors) are now standard.

Industry commentary published in Data Center Dynamics notes most liquid cooling today is single-phase, and many hyperscalers (Google, Meta, AWS) are deploying it at scale.

Conclusion

Direct-to-chip cooling has moved from a niche consideration to a foundational requirement for modern data center operations. As AI workloads drive rack densities beyond what air cooling can support, D2C liquid cooling provides the only scalable path to maintaining chip performance, controlling energy costs, and meeting sustainability targets.

The transition involves real operational considerations — from retrofitting complexity to fluid chemistry management — but the performance, energy, and financial returns are well-documented. For operators evaluating their cooling roadmap, the question is no longer whether to adopt liquid cooling, but when and how.

The coolant itself is a critical, often-underappreciated variable in that transition. Fluid selection directly affects corrosion protection, thermal performance, deposit formation, and long-term system reliability.

Bottom line: the data center industry is moving fast. What is standard today might not be in six months. While standardization continues to take shape, operators are not slowing down.

For many of them, that means implementing direct-to-chip cooling systems.

---

Dober COOLWAVE™ heat transfer fluids are engineered specifically for direct-to-chip liquid cooling. The COOLWAVE DC line — DC-25, DC-30, and DC-55 — uses a USP-grade propylene glycol base with a proprietary scale inhibitor and surfactant package that prevents deposit formation and protects copper, aluminum, brass, steel, and solder alloys. COOLWAVE DC-25 has earned OCP Inspired™ recognition, meeting the Open Compute Project’s specifications for PG-based heat transfer fluids in cold plate D2C systems. Backed by Dober’s FluidIQ Services monitoring and analysis program, COOLWAVE gives operators confidence to run at peak performance, longer.

Learn More About Dober's Data Center Coolants

References

  1. International Energy Agency. (2025). Energy demand from AI — Energy and AI. https://www.iea.org/reports/energy-and-ai/energy-demand-from-ai
  2. ASHRAE. (n.d.). ASHRAE Journal Podcast Episode 49. https://www.ashrae.org/news/ashraejournal/ashrae-journal-podcast-episode-49
  3. Zhou, H., Zeng, J., Song, M., Sun, X., & Qin, S. (2026). Adaptability assessment of air-cooling systems for data center with varied rack power densities. Energy and Buildings, 357, 117201. https://doi.org/10.1016/j.enbuild.2026.117201
  4. Sickinger, D., Van Geet, O., & Ravenscroft, C. (2015). Energy performance testing of Asetek’s RackCDU system at NREL’s high performance computing data center (NREL/TP-7A40-62905). National Renewable Energy Laboratory. https://docs.nrel.gov/docs/fy15osti/62905.pdf
  5. Uptime Institute. (2024). Cooling systems survey 2024. Uptime Institute Intelligence.
  6. Alkrush, A. A., Salem, M. S., Abdelrehim, O., & Hegazi, A. A. (2024). Data centers cooling: A critical review of techniques, challenges, and energy saving solutions. International Journal of Refrigeration, 160, 246–262. https://doi.org/10.1016/j.ijrefrig.2024.02.007
  7. Open Compute Project. (2023). OCP liquid cooling specification: Propylene glycol heat transfer fluid for cold plate-based, single-phase D2C systems. https://www.opencompute.org
  8. American Society of Heating, Refrigerating and Air-Conditioning Engineers. (2021). ASHRAE TC 9.9: Thermal guidelines for data processing environments (5th ed.). ASHRAE.