
For most of the past year, the loudest story in AI data center connectivity has been optics. Silicon photonics, Co-Packaged Optics (CPO), and 1.6T pluggables were pitched as the inevitable future, while Direct Attach Copper (DAC) was quietly written off. The picture that emerged at Nvidia GTC 2026, and in roadmap updates from Broadcom and the major hyperscalers, is more nuanced: copper and fiber are now expected to coexist for at least the next several years, each doing what it does best.
For a fiber optic cable manufacturer, this coexistence is not a setback. It is a sharper specification problem. The question is no longer "copper or fiber," but "which cabling physics matches which segment of an AI cluster, and how do we design cabling plants that remain upgrade-ready through 800G, 1.6T, and eventually hollow-core deployments." This piece lays out how we think about that, based on what we see in AI-ready data center cabling projects today.
Why Copper Is Still in the Picture for Scale-Up Links
Inside a single rack, or across two adjacent racks, the physics still favor copper. Passive DAC cables work well up to roughly one to two meters at 100G per lane; beyond that, signal attenuation becomes the limiting factor. Active Electrical Cables (AEC) extend that reach by integrating retimer chips into the cable assembly, which is how short-reach 800G links can now stretch to around five to seven meters in production deployments, and further in some lab demonstrations.
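To make that reach cliff concrete, here is a back-of-envelope version of the calculation that drives it. The symbol rate is the only hard number; the channel loss budget, host-side losses, and per-meter cable attenuation are illustrative placeholders rather than IEEE 802.3ck or datasheet values, so treat the output as a sketch of the shape of the problem, not a design figure.

```python
# Illustrative back-of-envelope reach estimate for passive DAC at 100G/lane.
# The loss budget, host losses, and per-meter attenuation below are placeholder
# assumptions for illustration, not IEEE-spec or vendor datasheet values.

SYMBOL_RATE_GBD = 53.125           # 100G PAM4 per lane
nyquist_ghz = SYMBOL_RATE_GBD / 2  # ~26.6 GHz, where cable loss is evaluated

channel_budget_db = 28.0    # assumed end-to-end insertion loss budget
host_loss_db = 16.0         # assumed combined host PCB/connector loss, both ends
cable_loss_db_per_m = 5.0   # assumed twinax loss at Nyquist for a thin gauge

cable_budget_db = channel_budget_db - host_loss_db
max_reach_m = cable_budget_db / cable_loss_db_per_m

print(f"Nyquist frequency: {nyquist_ghz:.1f} GHz")
print(f"Loss budget left for the cable: {cable_budget_db:.1f} dB")
print(f"Estimated passive reach: {max_reach_m:.1f} m")  # ~2.4 m with these numbers
```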
That extension is enough to cover most intra-rack GPU-to-switch paths in current NVL-class rack designs, and it usually does so at lower cost and lower per-port power than a comparable optical module. Jensen Huang's public framing at GTC 2026 - copper for scale-up, optics for scale-out - reflects that trade-off rather than a retreat from photonics. Broadcom has made similar comments about its XPU customers preferring DAC through the 400G SerDes generation, again for power and cost reasons. For teams that want a deeper primer on when copper interconnect makes sense, our DAC cable guide for data center interconnect covers the cable-level details.
A note on the AEC market: Credo Technology is widely reported as the dominant supplier of AEC retimer silicon, with figures often cited in the high-80s percent range based on 650 Group estimates. We flag that these numbers circulate in secondary reporting rather than audited share data, and the "zero link flap" reliability story, while repeated often in hyperscale designs, is more an application story than a universal property of copper versus optics.

Where Fiber Still Wins in AI Data Centers
Copper's reach advantage ends roughly where a single rack row does. Once a link needs to cross aisles, connect back to a spine or aggregation layer, or reach a different hall, fiber is effectively the only practical medium. A few scenarios where we consistently see fiber selected in AI cluster designs:
- Scale-out fabric between racks and halls. Pluggable optics on single-mode or OM4/OM5 multimode fiber dominate here because copper simply cannot carry 800G past a handful of meters without active regeneration. High-fiber-count MPO/MTP trunk and breakout assemblies carry most of this traffic in modern AI halls.
- Long reach and DCI. For campus-scale GPU clusters, AI training jobs that span multiple buildings, or data center interconnect, ultra-low-loss single-mode fiber such as G.654.E offers the lowest attenuation, the most generous loss budget, and the best headroom for higher-order modulation (a worked link-budget sketch follows this list).
- Future-proofing the cabling plant. Copper assemblies are tied to a specific speed and reach. A fiber trunk installed today at OM4 or single-mode can typically carry several generations of transceivers, from 400G through 800G and into 1.6T, without pulling new cable.
- Thermal and power density at reach. As AI racks push toward 120–200 kW, cable-plant heat and bend management in already-dense trays become real constraints. Fiber's smaller cross-section and lighter weight matter more here than in classical enterprise data centers.
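To put a number on the attenuation argument in the long-reach case, a standard optical link-budget calculation is enough. The sketch below compares typical-class attenuation for G.652.D and G.654.E; the launch power, receiver sensitivity, connector losses, and design margin are assumed illustrative figures, not values from any specific transceiver or fiber datasheet.

```python
# Illustrative optical link-budget comparison: G.652.D vs ultra-low-loss G.654.E.
# All figures are placeholder assumptions; use transceiver and fiber datasheets
# for real designs.

def max_reach_km(tx_dbm, rx_sens_dbm, margin_db, connector_loss_db, atten_db_per_km):
    """Reach supported by the power budget after fixed losses and design margin."""
    budget_db = tx_dbm - rx_sens_dbm
    usable_db = budget_db - margin_db - connector_loss_db
    return usable_db / atten_db_per_km

TX_DBM = 2.0          # assumed launch power
RX_SENS_DBM = -14.0   # assumed receiver sensitivity
MARGIN_DB = 3.0       # assumed design margin
CONNECTORS_DB = 1.5   # assumed total connector/splice loss

for name, atten in [("G.652.D", 0.19), ("G.654.E", 0.16)]:  # dB/km, illustrative
    reach = max_reach_km(TX_DBM, RX_SENS_DBM, MARGIN_DB, CONNECTORS_DB, atten)
    print(f"{name}: ~{reach:.0f} km supportable with this budget")
```

With these placeholder inputs the lower-loss fiber buys roughly ten extra kilometers of reach, or equivalently more margin on the same route, which is exactly the headroom argument for backbone and DCI paths.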
In other words, copper has reclaimed the intra-rack zone, but the moment a link crosses a row or needs to survive a hardware refresh, fiber continues to be the cheaper answer over the lifetime of the plant.

The Optical Roadmap: LPO, CPO, and Hollow-Core Fiber
On the optical side, three developments are worth tracking closely, because they change what fiber plants need to support.
LPO (Linear Pluggable Optics). LPO removes the DSP from the transceiver and lets the host silicon handle equalization, which can cut module power by roughly 40–50% at 800G. The LPO MSA published its 100G-per-lane specification in March 2025, which cleared the way for broader vendor support. LPO is not a universal replacement for DSP-based optics - link budgets and host-side equalization requirements constrain where it fits - but for short-reach scale-out inside a hall, it is increasingly viable.
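To translate that percentage into something a facilities team can act on, the sketch below multiplies an assumed per-module power figure by an assumed port count. Both inputs are illustrative assumptions, not measured values, but the arithmetic shows why a 40–50% module saving becomes a hall-level number quickly.

```python
# Illustrative power comparison: DSP-based 800G pluggables vs LPO.
# Per-module wattage, saving fraction, and port count are assumptions only.

DSP_MODULE_W = 16.0          # assumed 800G DSP-based module power
LPO_SAVING_FRACTION = 0.45   # midpoint of the ~40-50% saving cited for LPO
PORTS_IN_HALL = 8192         # assumed optical port count in one AI hall

lpo_module_w = DSP_MODULE_W * (1 - LPO_SAVING_FRACTION)
saving_kw = PORTS_IN_HALL * (DSP_MODULE_W - lpo_module_w) / 1000

print(f"LPO module power: ~{lpo_module_w:.1f} W vs {DSP_MODULE_W:.1f} W DSP")
print(f"Hall-level saving at {PORTS_IN_HALL} ports: ~{saving_kw:.0f} kW")
```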
CPO (Co-Packaged Optics). Despite sustained hype, large-scale CPO integration for scale-up links now looks like a late-decade event. Nvidia's current public roadmap points to meaningful scale-up optics adoption around 2028, later than many investors expected in 2024–2025. The delay is consistent with the copper-and-glass framing: current AEC-based scale-up is good enough that the industry is not forced to absorb CPO yield and serviceability risks yet.
Hollow-Core Fiber (HCF). By guiding light primarily through air rather than silica, hollow-core fiber reduces propagation latency by roughly one third and largely removes nonlinear impairments that limit long-haul capacity. That matters for two emerging use cases: latency-sensitive financial trading networks, where Microsoft and other hyperscalers have already deployed HCF, and very large AI clusters where synchronization latency between training nodes starts to hurt throughput. HCF is still significantly more expensive than standard single-mode fiber, with pricing quoted in different currencies and ranges across sources, so procurement teams should validate vendor quotes directly rather than rely on headline figures.
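The latency claim follows directly from the group index of the medium: light travels at c/n, so a lower effective index means less delay per kilometer. The sketch below uses a typical group index of about 1.468 for solid-core silica single-mode fiber against roughly 1.0 for an air-guided hollow core; the route length is an illustrative assumption.

```python
# Propagation-delay comparison: solid-core silica SMF vs hollow-core fiber.
# Group indices are typical textbook values; the route length is illustrative.

C_KM_PER_US = 299_792.458 / 1_000_000  # speed of light in km/us (~0.2998)

def one_way_latency_us(length_km, group_index):
    """One-way propagation delay through a fiber of the given group index."""
    return length_km * group_index / C_KM_PER_US

ROUTE_KM = 40.0  # assumed campus / metro route length

smf_us = one_way_latency_us(ROUTE_KM, 1.468)  # standard silica SMF
hcf_us = one_way_latency_us(ROUTE_KM, 1.003)  # hollow-core, mostly air

print(f"Standard SMF: {smf_us:.1f} us one way over {ROUTE_KM:.0f} km")
print(f"Hollow-core: {hcf_us:.1f} us one way ({(1 - hcf_us / smf_us):.0%} lower)")
```

The roughly one-third reduction falls straight out of the index ratio, which is why the benefit scales with route length and matters most on campus- and metro-scale links.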
A Practical Framework: When to Choose Copper vs Fiber
Based on typical AI data center link budgets as of 2026, a reasonable default decision path looks like this (a minimal code sketch of the same path follows the list):
- Intra-rack, under 2 m, 800G: Passive DAC is usually the right choice. Lowest cost, lowest power, no retimer needed.
- Intra-rack to adjacent rack, 3–7 m, 800G: AEC is competitive where the design is stable and the reach is within retimer specifications. Beyond about seven meters, optics start to look better on total cost of ownership.
- Inter-rack, across a row or to a middle-of-row switch: Pluggable optics on OM4/OM5 or single-mode fiber. LPO is worth evaluating where host silicon supports it and the link budget is tight enough that the 40–50% power saving is meaningful.
- Cross-hall, campus, or DCI: Single-mode fiber with ultra-low-loss G.654.E or G.652.D for new builds. MPO/MTP pre-terminated trunks simplify installation and future upgrades.
- Latency-critical or very large synchronized clusters: Evaluate hollow-core fiber on selected links rather than wholesale replacement. The economic case is strongest where each microsecond of one-way latency has a measurable downstream cost.
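As a minimal sketch, here is the same decision path expressed as code, with hypothetical function and parameter names and thresholds that mirror the list above. The cutoffs are not hard limits and should be replaced with the actual DAC/AEC reach specifications and optical link budgets for the hardware in use.

```python
# Hypothetical sketch of the copper-vs-fiber decision path described above.
# Thresholds mirror the list and are not hard limits; validate against the
# real DAC/AEC reach specs and optical link budgets for your equipment.

def recommend_medium(reach_m, crosses_hall=False, latency_critical=False,
                     lpo_capable_hosts=False):
    if latency_critical and crosses_hall:
        return "Evaluate hollow-core fiber on the affected links"
    if crosses_hall:
        return "Single-mode fiber (G.652.D or ultra-low-loss G.654.E), MPO/MTP trunks"
    if reach_m <= 2:
        return "Passive DAC"
    if reach_m <= 7:
        return "AEC (check retimer reach spec)"
    if lpo_capable_hosts:
        return "LPO, where the link budget allows"
    return "DSP-based pluggable optics on OM4/OM5 or single-mode"

# Example calls for three common link types:
print(recommend_medium(5))    # 5 m GPU-to-switch link -> AEC
print(recommend_medium(30))   # across a row -> pluggable optics
print(recommend_medium(500, crosses_hall=True, latency_critical=True))
```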
This framework is deliberately conditional rather than absolute. Real deployments mix two or three of these categories in the same hall, which is why structured, generation-agnostic data center connectivity solutions matter more than optimizing any single link type.
What This Means for Data Center Cabling Teams
For procurement, network architecture, and cabling engineering teams, the practical takeaways are fairly concrete:
- Do not over-specify copper beyond its reach window. A generous AEC budget is not a substitute for a proper fiber backbone, because the next two transceiver generations will not run over those copper assemblies.
- Specify high-fiber-count MPO/MTP trunks on the scale-out fabric, because port density on AI switches will keep rising.
- Choose ultra-low-loss single-mode fiber for backbone and DCI paths where the plant is expected to outlive two or three transceiver refreshes.
- Begin evaluating HCF on a per-link basis for latency-critical or long-haul AI scenarios, rather than waiting for general-purpose availability.
The headline is not that copper beat fiber or that fiber is losing ground. It is that the boundary between them has sharpened, and the segments on the fiber side of that boundary - scale-out, long reach, future capacity headroom - are exactly the segments that are growing fastest inside AI data centers.
FAQ
Is copper replacing fiber in AI data centers?
No. Copper has reclaimed the very short-reach intra-rack zone, mostly through AEC, but everything beyond roughly seven meters still runs on fiber. The two technologies are coexisting in defined layers rather than competing for the same links.
What is the difference between DAC and AEC?
DAC is passive copper, limited to about one to two meters at 100G per lane. AEC adds retimer chips inside the cable assembly to regenerate the signal, extending reach to roughly five to seven meters at 800G with a modest power penalty compared to DAC.
When should I use LPO instead of traditional pluggable optics?
LPO is worth considering when the link is short, the host silicon supports linear drive, and power reduction is a priority. On longer reaches or where host equalization margin is thin, DSP-based pluggables remain the safer choice.
Is hollow-core fiber ready for mainstream deployment?
HCF is in production for specific use cases - notably low-latency financial networks and selected hyperscaler deployments - but it is not yet priced or supplied at a level that replaces standard single-mode fiber in general enterprise or data center cabling. Expect a gradual expansion into AI cluster backbones over the next few years.
What fiber type should I specify for AI data center scale-out?
For short intra-hall links, OM4 or OM5 multimode with MPO/MTP trunks remains cost-effective at 400G and 800G. For anything that crosses buildings or needs to carry 1.6T and beyond, single-mode with low-loss G.652.D or ultra-low-loss G.654.E is the safer long-term specification.
Does copper really not suffer from temperature sensitivity?
Copper assemblies are less sensitive to the optical-module-specific failure modes sometimes seen under thermal stress, but they are not immune to environmental effects. Connector integrity, cable bending, and aging still matter. The reliability argument for copper in scale-up links is about system-level behavior in dense racks, not about copper being fundamentally failure-proof.




