AI Power and Optics: The Hidden Thermal Friction Risk in AI Data Centers

Rising compute workloads increase power consumption and data flow in AI data centers, requiring more efficient power delivery and faster data movement. This drives demand for technologies such as Navitas Semiconductor's (NVTS) power chips and POET's optical interposers. Navitas's gallium nitride (GaN) and silicon carbide (SiC) solutions improve efficiency and performance across the AI power delivery stack, while POET's optical interposer increases data movement by replacing copper pathways at the GPU.
Despite these gains, the convergence of higher power and faster data movement introduces a nuanced challenge: localized thermal friction. If unmanaged, this heat can limit system performance and erode the efficiency gains these technologies are designed to deliver.
NVTS and POET: power delivery and data movement
Navitas has positioned itself as a supplier of efficient power chips for AI data centers. Its GaN and SiC devices switch faster and deliver power more efficiently than legacy silicon, enabling data centers to handle rising compute workloads with less conversion loss.
POET’s optical interposer addresses the data movement constraint by replacing copper pathways at the GPU, enabling higher data transfer as copper interconnects approach practical speed limits. Together, these technologies allow more power and data to flow within AI infrastructures, supporting the scaling needed to handle rising compute workloads efficiently.
The hidden challenge: localized thermal friction in integrated systems
Despite these efficiency improvements, deploying NVTS and POET technologies at scale concentrates heat, creating localized thermal friction within AI data centers. As power delivery systems push more power into racks and optical interconnects increase data movement, residual heat must be dissipated effectively. GaN and SiC devices still generate heat, and that heat concentrates in the power delivery stages.
Likewise, optical engines, though more efficient per bit than copper, operate in dense environments where thermal dissipation remains critical. These technologies shift the thermal burden rather than eliminate it, creating localized thermal friction where heat can accumulate if not properly managed.
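The scale of this residual heat can be sketched with basic conversion math. The sketch below is illustrative only: the rack load and efficiency figures are assumptions for demonstration, not published Navitas specifications. It shows that even a large efficiency gain leaves kilowatts of heat that still has to go somewhere.

```python
# Illustrative sketch only: the load and efficiency figures below are
# assumptions for demonstration, not vendor specifications.

def waste_heat_watts(load_w: float, efficiency: float) -> float:
    """Heat dissipated by a power stage delivering load_w at a given efficiency."""
    input_w = load_w / efficiency          # power drawn from upstream
    return input_w - load_w                # difference is dissipated as heat

rack_load_w = 100_000  # assumed 100 kW AI rack

for label, eff in [("legacy silicon", 0.90), ("GaN/SiC stage", 0.96)]:
    heat = waste_heat_watts(rack_load_w, eff)
    print(f"{label}: ~{heat / 1000:.1f} kW of heat at {eff:.0%} efficiency")
```

Under these assumed numbers, the more efficient stage cuts waste heat by more than half, yet several kilowatts of concentrated heat remain to be managed.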
Material and packaging solutions: how NVTS and POET manage heat
Both Navitas and POET use material and package-integration strategies to mitigate localized thermal friction and maintain performance. NVTS' integrated GaN and SiC chips replace discrete silicon chips to improve high-voltage efficiency, reducing waste heat by minimizing thermal resistance across the power delivery stages.
POET’s optical interposer integrates optical pathways within the package, replacing copper interconnects and reducing resistive losses and associated heat generation within data interconnects. These engineering choices demonstrate how material selection and package integration shape overall thermal and electrical performance.
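Why thermal resistance matters can be seen in the standard steady-state relation T_junction = T_ambient + P × R_theta. The sketch below uses assumed thermal-resistance and power values, not measured data for any NVTS or POET part, to show how lowering thermal resistance through tighter package integration keeps device junctions cooler at the same dissipated power.

```python
# Illustrative sketch: thermal-resistance and power values are assumed
# for demonstration, not vendor data for NVTS or POET parts.

def junction_temp_c(ambient_c: float, power_w: float, r_theta_ja: float) -> float:
    """Steady-state junction temperature: T_j = T_ambient + P * R_theta(j-a)."""
    return ambient_c + power_w * r_theta_ja

# Same dissipated power, two assumed junction-to-ambient thermal resistances:
for r_theta in (8.0, 4.0):  # degC/W
    tj = junction_temp_c(ambient_c=45.0, power_w=12.0, r_theta_ja=r_theta)
    print(f"R_theta(j-a) = {r_theta} C/W -> T_j = {tj:.0f} C")
```

Halving the assumed thermal resistance drops the junction temperature from 141 C to 93 C in this toy example, which is the kind of headroom package integration is meant to buy.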
Residual system level constraints: why cooling still matters
Even with material and packaging efficiency gains, thermal conditions within AI data centers remain a limiting factor: these facilities concentrate power delivery and data interconnects within compact footprints, increasing localized thermal friction.
Achieving optimal performance therefore requires cooling strategies that extend beyond power delivery and data interconnects to the entire AI data center. Without effective thermal management via optimized airflow, liquid cooling, or heat sinks, throttling will erode the benefits of higher power delivery and faster data movement from NVTS and POET.
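The erosion mechanism is thermal throttling: silicon protects itself by reducing clock speed above a temperature limit. The toy model below is purely illustrative, with an assumed limit and an assumed linear derating policy rather than any real GPU's behavior, but it shows how unmanaged heat directly converts into lost performance.

```python
# Toy throttling model, purely illustrative: the temperature limit and the
# linear 2%-per-degree derating are assumptions, not any real GPU's policy.

def effective_clock_ghz(base_ghz: float, temp_c: float, limit_c: float = 85.0) -> float:
    """Reduce clock by 2% per degree C above the thermal limit (assumed policy)."""
    if temp_c <= limit_c:
        return base_ghz
    derate = max(0.0, 1.0 - 0.02 * (temp_c - limit_c))
    return base_ghz * derate

print(effective_clock_ghz(2.0, 80.0))  # within limit: full clock
print(effective_clock_ghz(2.0, 95.0))  # 10 C over the limit: 20% derate
```

In this toy model, running 10 C over the limit costs 20% of clock speed, wiping out efficiency gains far larger than those the upstream power and optical improvements deliver.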
Implications for AI data center deployment and infrastructure planning
For AI data centers, the combination of increased power delivery and data movement creates a hidden integration risk in the form of localized thermal friction points. The performance and reliability of data centers ultimately depend on managing this heat through system-level cooling, ensuring that efficiency and throughput are not limited by thermal constraints.
This creates a secondary layer of opportunity for companies focused on advanced cooling infrastructure, which we explore in the next article.
Disclosure: This article reflects the author’s personal analysis and opinions and is not investment advice. The author holds shares in Navitas Semiconductor (NVTS) at the time of writing. Images used are independent illustrative renderings and are not official Navitas Semiconductor promotional materials.
RISK PROFILE
Thermal Constraint: Increasing power delivery and data movement within AI data centers concentrates heat at localized points, which, if not effectively managed at the system level, can erode the efficiency gains provided by NVTS power chips and POET optical interposers.
RELATED ARTICLES
NVTS Is Quietly Collapsing the AI Power Delivery Stack
POET Solves AI’s Next Constraint After Power Chips
NVIDIA Backs Transceiver Optics While GPU Optics Remains Open