Why AI Data Centers Are Facing Cooling Constraints — And What It Means for Vertiv

AI data center expansion is increasingly constrained by thermal management rather than GPU capability. As more GPUs are deployed, the heat they generate is approaching the limits of the cooling systems built to remove it.
Cooling is already deeply embedded in AI data centers alongside compute and power delivery, the three core pillars of AI data center infrastructure. Unlike compute and power delivery, however, cooling does not follow a step-change innovation curve. As a result, rising thermal loads are exposing the limits of existing cooling systems as GPU deployments grow.
The current cooling landscape
Cooling infrastructure in AI data centers is currently dominated by air-cooled systems, which remain the default due to their maturity and low deployment complexity. However, air systems are reaching the limits of effective thermal management as GPU counts rise. This is driving adoption of liquid-cooled systems in new data centers where thermal loads exceed the operating range of air cooling.
In parallel, hybrid deployments are emerging, combining air and liquid cooling within the same data center. The result is a fragmented cooling infrastructure in which multiple systems converge at the rack, where growing GPU density is pushing thermal loads to the limits of what existing systems can handle.
Cooling has become an operational constraint
As GPU deployments grow, racks become more densely packed, driving up per-rack thermal load. Traditional air systems can no longer remove that heat effectively, making the rack the point where the cooling constraint bites.
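To make the rack-level constraint concrete, a rough back-of-envelope sketch helps. All figures below (GPUs per rack, watts per GPU, overhead factor, air-cooling range) are illustrative assumptions, not sourced specifications:

```python
# Back-of-envelope rack thermal load. All numbers are assumed
# illustrative figures, not vendor specifications.

def rack_thermal_load_kw(gpus_per_rack: int, watts_per_gpu: float,
                         overhead_factor: float = 1.3) -> float:
    """Estimate total rack heat output in kW.

    overhead_factor (assumed) covers CPUs, networking, and
    power-conversion losses beyond the GPUs themselves.
    """
    return gpus_per_rack * watts_per_gpu * overhead_factor / 1000

# Assumed dense configuration: 72 GPUs per rack at ~1,000 W each.
load = rack_thermal_load_kw(72, 1000)
print(f"Estimated rack load: {load:.0f} kW")

# Air cooling is commonly described as practical up to roughly a few
# tens of kW per rack; treat 40 kW as an assumed upper bound here.
AIR_COOLING_PRACTICAL_LIMIT_KW = 40
print("Exceeds assumed air-cooling range:",
      load > AIR_COOLING_PRACTICAL_LIMIT_KW)
```

Under these assumptions a single dense rack dissipates on the order of 90+ kW, several times what air cooling is typically asked to handle, which is the arithmetic behind the shift toward liquid at the rack.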
While liquid systems extend thermal management capability well beyond air, they too are constrained at the rack as rising thermal loads approach their operating range. This reflects a mismatch between the pace of GPU growth and the rate at which cooling systems evolve.
Vertiv and the thermal infrastructure buildout
This mismatch defines where Vertiv operates: the buildout of AI data center cooling infrastructure driven by GPU growth. Vertiv provides integrated thermal management systems spanning both air and liquid cooling.
Its exposure also extends beyond cooling equipment to the physical buildout of AI cooling infrastructure across data centers, positioning the company within the installation layer, where cooling systems are deployed and scaled. However, Vertiv’s success in scaling those systems inadvertently reinforces the market narrative that the cooling transition is effectively solved.
The market narrative: cooling as a solved transition
The prevailing market narrative assumes data center cooling systems are scaling in lockstep with GPU deployment and resolving thermal constraints. In reality, cooling systems are not advancing in lockstep with GPU deployment.
Air systems continue to dominate data center infrastructure, hybrid systems define the transition between air and liquid, and liquid systems remain concentrated in new builds. The result is a staggered adoption curve in which thermal capability improves in stages, while even liquid systems struggle to fully offset rising thermal loads as GPU deployments grow.
Thermal capacity is the binding constraint
As GPU deployments in AI data centers increase, cooling infrastructure remains critical to continued expansion, positioning companies like Vertiv at the center of this buildout. Market pricing, however, has driven cooling infrastructure providers to valuations that imply current-generation cooling systems already operate at the efficiency needed to manage higher thermal loads. In reality, they do not.
As a result, the investment risk lies not in the relevance of cooling systems but in the growing disconnect between operational realities and the expectations priced into the sector. The prevailing market narrative obscures this disconnect, creating the potential for mispricing as cooling capacity fails to scale in line with GPU deployment.
Disclosure: This article reflects the author’s personal analysis and opinions and is not investment advice. The author does not hold shares in Vertiv Holdings Co. (VRT) at the time of writing. Images used are independent illustrative renderings and are not official Vertiv Holdings Co. promotional materials.
RISK PROFILE
Market Mispricing: The prevailing market narrative assumes data center cooling systems are scaling in lockstep with GPU deployment and resolving thermal constraints. In reality, cooling systems are not. This disconnect creates the potential for mispricing of cooling infrastructure providers.
RELATED ANALYSIS
AI Power and Optics: The Hidden Thermal Friction Risk in AI Data Centers
NVTS Exposes AI’s Power Limit — Silicon Is the Real Bottleneck
POET Solves AI’s Next Constraint After Power Chips