POET Solves AI’s Next Constraint After Power Chips

AI systems do not scale by fixing everything at once. They scale by removing one hard limit at a time, with each improvement exposing the next constraint in the system. Navitas Semiconductor exposed a structural ceiling in AI infrastructure by demonstrating that silicon-based power delivery at the graphics processing unit (GPU) could not scale efficiently. As that constraint begins to ease, AI systems do not become unconstrained; they advance until the next limitation asserts itself. Once GPUs can draw and sustain higher power efficiently, the critical question shifts from how much power can be delivered to how efficiently data can move inside dense GPU architectures.
POET Technologies sits at that next inflection point. Its relevance does not come from AI growth itself, but from whether GPU-level power gains expose data movement as the next system constraint. POET only matters if copper becomes the limiting factor once power delivery is no longer the constraint, and if that limitation cannot be economically absorbed through incremental copper-based fixes.
From power delivery to data movement
AI scaling is often framed as a matter of adding more GPU compute. In practice, AI systems advance only as far as their weakest supporting layer allows. Until recently, power delivery to the GPU was that limiting layer. Improvements in power delivery at the GPU allow it to operate at higher compute density, increasing the volume of data that must move between GPUs.
That data movement is carried almost entirely by copper interconnects today. Copper traces, retimers, and short-reach links remain viable because their inefficiencies were historically masked by limitations in power chips. As power chips become more efficient, data movement across copper emerges as the primary bottleneck.
What POET actually solves
Copper is the limiting factor because it cannot scale bandwidth fast enough to match growing GPU compute density. Extending copper requires retimers, equalization, and redundancy, which add complexity without removing the underlying constraint.
Replacing copper links with optical interconnects at the GPU data movement layer directly addresses this limitation. However, conventional optical architectures are not designed to operate at GPU proximity. This is the problem POET targets.
POET’s architectural differentiation
POET’s core technology is its optical interposer platform. Instead of assembling optics from discrete components, POET integrates lasers, modulators, detectors, and waveguides onto a single optical interposer designed to sit adjacent to the GPU at the package level: the integration layer between the GPU die and the board.
This integration collapses the effective distance between GPUs by removing copper from the data path. It enables bandwidth scaling that copper cannot achieve at this proximity. No incumbent optical architecture is currently deployed at this level of GPU proximity.
The execution reality
POET’s outcome hinges on whether GPU system design is forced to confront copper as a hard constraint. Its interposer architecture becomes economically relevant only if copper interconnects begin eroding the gains unlocked by higher GPU compute density.
If copper-based solutions can continue scaling through packaging, signal processing, and incremental optimization, POET remains structurally early. If those workarounds fail and internal data movement begins to erode the gains unlocked at the GPU, POET’s approach becomes necessary.
Why timing is the real risk
The primary risk to POET is not technological feasibility. Its architecture is sound in principle. The risk lies in timing. Copper interconnects continue to improve through incremental advances in materials, packaging, and signal processing. Techniques such as silicon bridges, chiplets, and 2.5D integration extend the practical life of copper by absorbing rising data rates without forcing architectural change. As long as these approaches remain sufficient, optics remain optional rather than required.
Each year copper continues to scale pushes POET’s relevance further out. Incumbents strengthen their position, system architectures remain unchanged, and adoption pressure is deferred. Markets do not price architectural transitions until existing solutions fail at the system level.
Competitive context
Incumbent optical suppliers dominate today’s markets because their architectures align with current system designs. They are optimized for scale, yield, and standardized deployment. Those strengths persist as long as optics remain peripheral to GPU data movement.
POET is not competing head-on with these players. Its bet is that system design shifts inward, where integration and GPU compute density matter more than modularity. If that shift does not occur, incumbents win by default.
The bottom line
Navitas exposed silicon’s limits in power delivery. POET is positioned for what happens after those limits are relieved. Its opportunity exists only if improved chip-level power efficiency forces a reckoning in GPU data movement architecture.
If AI systems reach a point where copper interconnects erode the gains unlocked by NVTS power chips at the GPU, optics must move closer to the GPU. POET’s platform is designed for that moment. If not, its relevance remains deferred. This is not an AI growth story. It is a sequence story. And POET only matters if the sequence breaks exactly where it is placed.
Disclosure: This article reflects the author’s personal analysis and opinions and is not investment advice. The author does not hold shares in POET Technologies Inc. at the time of writing. Images used are independent illustrative renderings and are not official POET Technologies Inc. promotional materials.
RISK PROFILE
Adoption Timing: POET’s relevance depends on copper interconnects reaching structural limits. If incremental advances in materials, packaging, and signal processing continue to extend copper’s viability, optical interconnect adoption at the GPU layer may be deferred, pushing revenue realization further out.
RELATED ANALYSIS
NVTS Exposes AI’s Power Limit — Silicon Is the Real Bottleneck
SKYT Is Sitting on a Critical Chip Breakthrough
License Tensions Could Cost IONQ More Than The SKYT Acquisition