NVTS Exposes AI’s Power Limit — Silicon Is the Real Bottleneck

AI’s scaling problem is no longer compute. It is power. As data centers increase GPU count and workload per rack, conventional silicon power chips approach their physical limits. Navitas Semiconductor operates at this critical point, where traditional power chips begin to fail.

Why power, not compute, is the next choke point

AI systems are no longer limited by raw compute alone. As GPUs, the chips that perform AI calculations, run at higher performance and utilization, the power chips that regulate, convert, and deliver electricity to them must push more power at faster switching speeds. As load rises, these power chips increasingly hit delivery and switching-speed limits, creating bottlenecks that throttle performance.

The power chip bottleneck NVTS is attacking

The limitation stems from the power chips themselves. Built on silicon, conventional power chips face hard physical limits on power delivery and switching speed, and these constraints cannot be overcome with incremental design improvements.
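A rough back-of-the-envelope sketch makes the switching-speed limit concrete. All component values below are illustrative assumptions, not vendor specifications; the point is only that hard-switched loss scales with a device's output capacitance, so the lower-capacitance technology can switch faster within the same loss budget:

```python
# Illustrative switching-loss arithmetic (hypothetical values, not vendor specs).
# Hard-switched capacitive loss per device scales roughly as 0.5 * C_oss * V^2 * f_sw.

def switching_loss_w(c_oss_pf: float, v_bus: float, f_sw_hz: float) -> float:
    """Approximate hard-switching loss in watts for one device."""
    return 0.5 * (c_oss_pf * 1e-12) * v_bus**2 * f_sw_hz

V_BUS = 48.0   # common server distribution voltage
F_SW = 1e6     # 1 MHz target switching frequency

# Hypothetical output capacitances: a silicon MOSFET typically carries
# several times the output capacitance of a comparable GaN device.
loss_si = switching_loss_w(c_oss_pf=1000, v_bus=V_BUS, f_sw_hz=F_SW)
loss_gan = switching_loss_w(c_oss_pf=200, v_bus=V_BUS, f_sw_hz=F_SW)

print(f"Si  device loss at 1 MHz: {loss_si:.2f} W")
print(f"GaN device loss at 1 MHz: {loss_gan:.2f} W")
```

At any fixed frequency the higher-capacitance silicon device burns proportionally more energy per cycle, which is why pushing silicon to faster switching runs into a wall rather than a slope.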

Displacing incumbents is about architecture, not brand

Conventional silicon manufacturers such as Texas Instruments and onsemi dominate power delivery by executing extremely well within the traditional silicon power-chip paradigm. Their scale, qualification depth, and long product cycles are advantages only as long as silicon can meet rising power delivery and switching-speed demands, which it increasingly cannot.

Navitas addresses this failing by replacing discrete silicon power chips with a single GaN power chip designed to sustain higher power delivery and faster switching speeds, improving efficiency at the GPU level. As AI data centers increase the number of GPUs per rack and total rack power, operators focus on watts per rack and switching speed limits. When power delivery and switching performance become limiting factors, architectural efficiency outweighs brand familiarity, compressing incumbent advantages.
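The watts-per-rack framing can be illustrated with simple arithmetic. The rack power and efficiency figures below are hypothetical assumptions for illustration, not Navitas or operator data:

```python
# Illustrative rack-level arithmetic (hypothetical figures, not vendor data).
# A small gain in conversion efficiency compounds at rack scale.

RACK_POWER_KW = 120.0   # assumed IT load per AI rack

def conversion_loss_kw(load_kw: float, efficiency: float) -> float:
    """Power dissipated in the conversion stage while delivering a given load."""
    return load_kw / efficiency - load_kw

loss_si = conversion_loss_kw(RACK_POWER_KW, 0.92)    # assumed silicon chain
loss_gan = conversion_loss_kw(RACK_POWER_KW, 0.96)   # assumed GaN chain

print(f"Silicon chain loss: {loss_si:.1f} kW per rack")
print(f"GaN chain loss:     {loss_gan:.1f} kW per rack")
print(f"Recovered per rack: {loss_si - loss_gan:.1f} kW")
```

Under these assumptions, a few points of conversion efficiency free up several kilowatts per rack, power that can feed GPUs instead of heat, which is why operators treat the conversion stage as a first-order design variable.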

Why silicon workarounds remain structurally weak

The key question is whether silicon manufacturers can offset silicon's limitations through engineering workarounds. The answer is: partially, but not indefinitely. Techniques such as parallelization and overprovisioning can mask power delivery and switching-speed constraints, but they do not remove them. Every workaround adds cost, complexity, or inefficiency elsewhere. As AI racks push higher power envelopes, these compensations scale poorly.
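The parallelization workaround can be sketched numerically. The device parameters below are hypothetical; the pattern to notice is that paralleling silicon devices divides conduction loss but stacks capacitive switching loss, so the benefit flattens and then reverses:

```python
# Illustrative sketch (hypothetical device parameters, not vendor specs):
# paralleling silicon devices shares current, cutting I^2*R conduction loss,
# but each added device contributes its own output capacitance to switch.

def total_loss_w(n: int, i_load: float, r_on: float, c_oss_f: float,
                 v_bus: float, f_sw: float) -> float:
    conduction = (i_load / n) ** 2 * r_on * n        # current shared across n devices
    switching = 0.5 * n * c_oss_f * v_bus**2 * f_sw  # C_oss stacks with n
    return conduction + switching

for n in (1, 2, 4, 8):
    loss = total_loss_w(n, i_load=50.0, r_on=0.005, c_oss_f=1e-9,
                        v_bus=48.0, f_sw=1e6)
    print(f"{n} devices: {loss:.2f} W total")
```

With these assumed numbers, total loss falls when going from one device to a few, then climbs again as switching loss dominates: the workaround buys headroom, not a new ceiling, and the cost of extra devices, board area, and drivers accrues the whole way.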

The NVIDIA validation—and its limits

NVIDIA's validation of Navitas's GaN power chips is real, but investors should be careful not to overstate exclusivity. NVIDIA deliberately maintains multiple power chip partners, hedging supply risk and preserving pricing leverage, and it continues to work with incumbents whose silicon power chips are supported by system-level workarounds. This diversification limits Navitas's near-term bargaining power.

Risks and execution reality

While NVIDIA validation confirms that Navitas's GaN power chips work in principle, turning that validation into large-scale deployment is a separate challenge. GaN manufacturing remains more complex than silicon, and only a limited number of foundries can produce high-quality GaN wafers at scale. Device qualification for data-center power chips takes longer, and incumbents retain advantages in scale, pricing leverage, and customer relationships. Adoption will be uneven; power chip transitions do not happen overnight.

However, this constraint cuts both ways. Navitas is a fabless power chip designer: it relies on external foundries to manufacture its GaN power chips, and those relationships, most notably with TSMC, span more than a decade, predating the era when data-center power became strategic. That history matters because GaN capacity is finite, slow to expand, and not easily repurposed from silicon lines. Incumbent silicon power chip vendors cannot simply "turn on" GaN; they must compete for limited fab access and qualification capacity already aligned with specialists like Navitas. Scaling GaN is therefore a timing and access problem, not just a capital one, which reduces Navitas's relative execution risk even as absolute scaling remains challenging.

The upside if power becomes the constraint

These scaling constraints slow near-term adoption but do not change the underlying limit: as AI infrastructure scales, silicon cannot sustain required power delivery and switching speeds. Power delivery stops being a secondary consideration and becomes a primary system constraint. Crucially, this shift applies broadly across hyperscale data centers and custom accelerator platforms, not just a single GPU ecosystem.

In this scenario, Navitas isn't just selling power chips. It's enabling more usable power at the GPU through faster switching and more efficient power delivery. Those gains support higher sustained GPU utilization and reduce power-related inefficiencies, justifying premium pricing and creating durable demand even without exclusive contracts.

The bottom line

Navitas is positioned at the least glamorous but increasingly decisive layer of AI infrastructure. While compute captures headlines, power defines feasibility. GaN is not an optional upgrade; it is a response to silicon’s limits.

Navitas faces strong incumbents, diversified customers, and real execution risk. However, the material physics structurally favors GaN over silicon. If AI's next constraint is power rather than compute, Navitas's addressable opportunity expands sharply, not because of hype but because the system demands it. That is where the asymmetric upside lies.

Disclosure: This article reflects the author’s personal analysis and opinions and is not investment advice. The author does not hold shares in Navitas Semiconductor (NVTS) at the time of writing. Images used are independent illustrative renderings and are not official Navitas Semiconductor promotional materials.

RISK PROFILE
Substitution Timing: While GaN overcomes silicon’s power delivery and switching speed limits, silicon power chips can extend relevance through system-level workarounds. If hyperscalers accept incremental silicon improvements rather than transitioning to GaN power chips, adoption may proceed gradually.
