Apr 23, 2026

800G All-Optical Network: What Fiber Do You Need?


800G optical interconnect has moved from trials into volume production. Through 2025 and into 2026, 800G pluggable modules in QSFP-DD and OSFP form factors became the connectivity baseline for new AI fabrics, while carriers began deploying 800G coherent on metro and backbone routes. For network planners, the design choices made today around fiber type, cabling density, and architecture will determine whether the network can carry 800G - and 1.6T after it - without an expensive re-pull.

What Is an 800G All-Optical Network?

An 800G all-optical network is a transport network in which 800 Gbps per wavelength or per lane group is carried end to end over fiber, with the data plane staying in the optical domain across as many hops as possible. Two distinct contexts get grouped under this label.

The first is the intra-data-center fabric, where 800G modules connect leaf-spine switches and AI accelerator clusters. Here, 800G is typically delivered as 8×100G PAM4 lanes, carried either over parallel single-mode fiber with MPO/MTP connectors (800G-DR8) or over duplex fiber pairs using wavelength multiplexing (2×400G FR4). This is the dominant near-term volume case, pulled by GPU-server interconnect requirements.

The second is the metro and long-haul transport network, where 800G is carried as a single wavelength using coherent modulation - typically 800G ZR/ZR+ pluggables or higher-baud-rate line-system transponders. This is what most carriers mean when they describe an "800G all-optical city network": a flatter OTN/WSS-based optical layer that brings 800G wavelengths from core sites out to metro aggregation, data centers, and computing nodes with as few electrical regenerations as possible.

For module-level detail on form factors, modulation, and reach options, our overview of 800G optical modules and their role in next-generation networks covers the device side in more depth.

800G vs 400G vs 100G: What Actually Changes

The headline numbers - 8× the per-wavelength capacity of mainstream 100G systems, 2× that of 400G - matter less than the architectural and physical implications. The practical differences operators see at each rate:

  • 100G: NRZ or PAM4 modulation, runs over almost any installed G.652.D fiber, modest cabling density, well-understood power envelope. Still the workhorse for general enterprise and access-aggregation links.
  • 400G: PAM4 standard for short reach (DR4, FR4); coherent ZR/ZR+ for metro and DCI. G.652.D still adequate for most spans. Cabling density rises but is manageable with conventional MPO-12/24.
  • 800G: 8×100G PAM4 inside the data center; coherent for transport. Long-haul reach starts to depend on whether the underlying fiber is G.652.D or G.654.E. MPO/MTP density and end-face cleanliness become serious link-quality factors. Power per bit becomes a primary KPI alongside raw throughput.

The shift from 400G to 800G is not just "more capacity." It is the point at which fiber type, structured cabling design, and module power efficiency stop being neutral and start determining whether a given route or facility can be upgraded at all without physical changes.

What Fiber Type Do You Need for 800G?

At 10G and 100G, most operators could treat the outside plant as a given. At 800G coherent, that assumption breaks down on longer routes.

For long-haul and inter-DC links, attenuation and effective area dictate reach. According to the ITU-T G.654 Recommendation, G.654.E is the cut-off-shifted single-mode fiber category designed for terrestrial high-bit-rate transmission, with low attenuation (typically below 0.18 dB/km at 1550 nm) and an enlarged effective area of 110–130 µm². In greenfield deployments, G.654.E can carry 800 Gbps coherent signals over routes exceeding 600 km without an intermediate regenerator, where standard G.652.D would typically require at least one OEO regeneration site mid-span. That difference translates directly into both capex and opex over the link's lifetime.
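
As a rough planning aid, the reach difference can be turned into a count of intermediate regeneration sites. The sketch below is a minimal Python example; the 300 km and 600 km unregenerated-reach figures are the illustrative values discussed above, not guarantees for any specific route.

```python
# Hedged sketch: estimate intermediate OEO regeneration sites for an
# 800G coherent route, given an assumed unregenerated reach per fiber
# type. Reach values are illustrative planning numbers from the text;
# real reach depends on amplifier spacing, launch power, and link OSNR.
import math

UNREGEN_REACH_KM = {
    "G.652.D": 300,   # assumption: ~300 km without regeneration at 800G
    "G.654.E": 600,   # assumption: 600+ km without regeneration
}

def regen_sites(route_km: float, fiber: str) -> int:
    """Intermediate OEO regeneration sites needed on a route."""
    reach = UNREGEN_REACH_KM[fiber]
    # ceil(route/reach) segments imply (segments - 1) mid-span regens
    return max(0, math.ceil(route_km / reach) - 1)

for fiber in UNREGEN_REACH_KM:
    print(fiber, regen_sites(550, fiber))
# G.652.D needs 1 mid-span regeneration site on a 550 km route;
# G.654.E needs 0.
```

The per-route delta in regeneration sites is exactly the capex/opex difference the paragraph above refers to.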

For operators planning new long-haul routes that need to be 800G-ready from day one, deploying G.654.E single-mode fiber is now a serious option to evaluate against its higher per-kilometer cost. The trade-offs are covered in more depth in our practical guide to G.654.E and what it unlocks for next-generation transport.

Inside the data center, the dominant 800G cabling story is parallel single-mode over MPO/MTP. An 800G-DR8 link uses 8 transmit and 8 receive fibers, so a row of GPU servers can require thousands of fibers between leaf and spine. Three things matter much more than they did at 100G:

  • High-fiber-count ribbon and rollable-ribbon cables (1,728 fibers and above) for spine trunks.
  • Connector quality and polarity discipline, since end-face contamination on a single MPO ferrule can degrade an entire 800G link.
  • Pre-terminated, factory-tested assemblies that reduce on-site splicing risk.

Our MPO/MTP product line and broader data center connectivity solutions are designed around these constraints.
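
The fiber counts implied by DR8 add up quickly. A minimal sketch, assuming 16 fibers per 800G-DR8 link (8 Tx + 8 Rx, as described above) and purely illustrative server and uplink counts:

```python
# Hedged sketch: rough trunk fiber count for an 800G-DR8 leaf-spine row.
# 16 fibers per link follows from the DR8 lane structure; the server
# and uplink counts are illustrative assumptions, not a reference design.

FIBERS_PER_DR8_LINK = 16          # 8 transmit + 8 receive fibers

def trunk_fibers(servers: int, uplinks_per_server: int) -> int:
    """Leaf-to-spine fibers for a row, before spares or overbuild."""
    return servers * uplinks_per_server * FIBERS_PER_DR8_LINK

# e.g. a row of 32 GPU servers with 8 uplinks each:
print(trunk_fibers(32, 8))   # 4096 fibers for one row
```

Numbers at this scale are why 1,728-fiber trunks and pre-terminated assemblies stop being optional.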

Looking further out, hollow-core fiber is moving from research into early deployment for low-latency financial and AI interconnect routes, where the roughly 30% propagation-speed advantage over solid silica is material. It is not a mainstream metro choice yet, but it is on multiple vendor roadmaps and is worth tracking for long-horizon planning.
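
The latency stakes can be quantified. A small sketch, assuming a group index of about 1.468 for solid silica and near 1.0 for hollow core; both are approximations, consistent with the roughly 30% propagation-speed advantage noted above:

```python
# Hedged sketch: one-way propagation latency over a route, solid-core
# vs hollow-core fiber. Group indices are illustrative assumptions.
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km/ms

def one_way_latency_ms(route_km: float, group_index: float) -> float:
    """Propagation-only latency; ignores equipment and queuing delay."""
    return route_km * group_index / C_KM_PER_MS

silica = one_way_latency_ms(100, 1.468)   # ~0.49 ms over 100 km
hollow = one_way_latency_ms(100, 1.003)   # ~0.33 ms over 100 km
print(round(silica, 3), round(hollow, 3))
```

Roughly 150 µs saved per 100 km one way is negligible for most traffic, but material for the financial and AI interconnect cases mentioned above.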
 

(Figure: G.652.D vs G.654.E fiber for 800G)

Architecture Implications: Flatter Networks, Tighter Compute Coupling

Three architectural shifts come with 800G.

Flatter topologies and fewer OEO conversions. Traditional metro networks aggregate traffic through several tiers of equipment rooms, each terminating and regenerating signals electrically. At 800G, every avoidable optical-to-electrical-to-optical conversion adds cost, latency, and power. Operators are using 800G to push toward "one-hop" architectures from core OTN nodes directly to access aggregation, reducing tiers in the metro layer.

Transport and compute become a single planning problem. AI training and inference workloads make compute placement a network problem. China Mobile Zhejiang's intelligent computing private network is a documented example: by upgrading metro OTN reach and integrating computing-node information into the all-optical transport map, the carrier reports approximately 1 ms latency to access compute for latency-sensitive workloads such as cloud rendering and model training. Whether a given operator can replicate that figure depends on distance, hop count, and whether OTN nodes are pushed close enough to users - it is a design outcome, not a property of the fiber itself.
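
A back-of-envelope view shows why hop count and node placement dominate such a target. In the sketch below, the 5 µs/km propagation figure is standard for silica fiber, but the 50 µs per-OTN-node delay is an illustrative assumption, not a measured value:

```python
# Hedged sketch: fiber distance left inside a one-way latency budget
# after per-node delays are subtracted. Per-hop delay is an assumption.
PROP_US_PER_KM = 5.0   # ~5 us/km one-way propagation in silica fiber

def max_fiber_km(budget_ms: float, otn_hops: int, per_hop_us: float) -> float:
    """Maximum fiber route length that still meets the latency budget."""
    remaining_us = budget_ms * 1000 - otn_hops * per_hop_us
    return max(0.0, remaining_us / PROP_US_PER_KM)

# 1 ms one-way budget, 2 OTN nodes at an assumed 50 us each:
print(max_fiber_km(1.0, 2, 50.0))   # 180.0 km of fiber remains
```

Every extra tier of electrical processing eats directly into the kilometres available, which is the quantitative case for the flatter, one-hop architectures described above.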

Power per bit becomes the dominant constraint. Switch and module power, not raw capacity, increasingly sets the upper bound on what a site can host. This is why linear-drive pluggable optics (LPO) and co-packaged optics (CPO) are getting attention at 800G and 1.6T. The goal is fewer joules per transmitted bit, not just more bits.
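
Energy per bit is straightforward to compute from module power and line rate. In the sketch below, the 14 W figure is an illustrative 800G module power, not a specific product's datasheet value:

```python
# Hedged sketch: converting module power into the energy-per-bit KPI.
# The 14 W module power is an illustrative assumption.
def pj_per_bit(module_watts: float, rate_gbps: float) -> float:
    """Watts / (bits per second) = joules/bit; scale to picojoules."""
    return module_watts / (rate_gbps * 1e9) * 1e12

print(round(pj_per_bit(14.0, 800), 1))   # 17.5 pJ/bit
```

Tracking this number per generation, rather than raw throughput, is what makes the LPO and CPO transitions comparable on one axis.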

National policy is reinforcing this trajectory. China's MIIT launched its 10 Gbps All-Optical Broadband pilot in January 2025, targeting residential communities, factories, and industrial parks for 50G-PON-based 10 Gbps access - now covering around 168 projects across 30 provinces. 800G sits one layer up, providing the metro and inter-DC capacity that this access layer and adjacent computing centers need to be useful.
 

(Figure: 800G network architecture and future scaling)

How to Plan for 800G Readiness

Audit the existing fiber plant before committing to a generation skip. Many operators have G.652.D in the ground that supports 800G coherent for shorter spans but not for full route lengths. Knowing which routes need a refresh - and which don't - avoids both unnecessary capex and surprise regeneration sites later.

Treat 800G modules as a multi-year supply problem. Volume capacity for 800G QSFP-DD and OSFP modules is still tight in some regions, and 1.6T is starting to compete for the same manufacturing lines. Locking in qualified suppliers across a multi-year horizon matters more than chasing the lowest unit price on a first batch.

Design cabling for one generation beyond your current target. Pulling fiber is the slowest and most expensive part of any optical upgrade. Fiber count, duct space, and patch-panel density chosen today should anticipate 1.6T fabrics, not just 800G. For data-center builds, our fiber optic cabling solutions for data centers are sized with that headroom in mind.

Make the energy KPI a procurement criterion. Both regulators and large customers are starting to evaluate networks on picojoules per bit, not just gigabits per second. The fiber and connector plant has to be ready to support LPO and CPO transitions when they happen.

FAQ

Q: Is 800G Ready For Production Deployment Today?

A: Yes for AI data-center interconnect and for metro/inter-DC coherent links - both have moved past trial. For nationwide long-haul backbone refresh, 800G is being deployed but supply, vendor interoperability, and the choice of underlying fiber are still active engineering decisions rather than commodities.

Q: Can I Run 800G Coherent Over My Existing G.652.D Fiber?

A: For shorter spans, yes. For long-haul routes, the higher OSNR demanded by 800G coherent often limits G.652.D reach to roughly 300 km without regeneration, or forces additional repeater stations. G.654.E typically extends unregenerated reach significantly on the same route. The right answer depends on the actual span, link budget, and whether the route is greenfield or brownfield.

Q: What Does 800G Mean For Structured Cabling In AI Data Centers?

A: Higher fiber counts per cable, much heavier reliance on MPO/MTP connectivity (commonly 8-fiber and 16-fiber configurations for 800G-DR8), and stricter end-face cleanliness and insertion-loss budgets. Pre-terminated assemblies become the default rather than the exception.

Q: What Comes After 800G?

A: 1.6T pluggables (OSFP-XD and related form factors) are already in early deployment in AI fabrics, with broader availability expected through 2026 and 2027. 3.2T is on the roadmap. Hollow-core fiber and co-packaged optics are likely to reshape how those rates are physically delivered, particularly inside hyperscale facilities.

Summary

800G is the point at which the optical network stops being a passive utility and becomes an architectural choice. The headline rate is the easy part. The harder questions - which fiber sits in the ground, where the OEO boundaries are, how cabling density scales into 1.6T, how power per bit is measured - are what determine whether a network can actually carry the next generation of traffic. For operators and data-center builders planning beyond 2026, the work that matters is making sure the underlying fiber plant, the part that cannot be replaced cheaply, is sized for the decade ahead.
 
