Cisco AI router solves data centre interconnect challenge

Cisco has entered an increasingly competitive race to dominate AI data centre interconnect technology, becoming the latest major player to unveil purpose-built routing hardware for connecting distributed AI workloads across multiple facilities.
The networking giant unveiled its 8223 routing system on October 8, introducing what it claims is the industry’s first 51.2 terabit per second fixed router specifically designed to link data centres running AI workloads.
At its core sits the new Silicon One P200 chip, representing Cisco’s answer to a challenge that’s increasingly constraining the AI industry: what happens when you run out of room to grow.
A three-way battle for scale-across supremacy?
For context, Cisco isn’t alone in recognising this opportunity. Broadcom fired the first salvo in mid-August with its “Jericho 4” StrataDNX switch/router chips, which began sampling at the time and likewise offer 51.2 Tb/s of aggregate bandwidth, backed by HBM memory for deep packet buffering to manage congestion.
Two weeks after Broadcom’s announcement, Nvidia unveiled its Spectrum-XGS scale-across network—a notably cheeky name given that Broadcom’s “Trident” and “Tomahawk” switch ASICs belong to the StrataXGS family.
Nvidia secured CoreWeave as its anchor customer but provided limited technical details about the Spectrum-XGS ASICs. Now Cisco is rolling out its own components for the scale-across networking market, setting up a three-way competition among networking heavyweights.
The problem: AI is too big for one building
To understand why multiple vendors are rushing into this space, consider the scale of modern AI infrastructure. Training large language models or running complex AI systems requires thousands of high-powered processors working in concert, generating enormous amounts of heat and consuming massive amounts of electricity.
Data centres are hitting hard limits—not just on available space, but on how much power they can supply and cool.
“AI compute is outgrowing the capacity of even the largest data centre, driving the need for reliable, secure connection of data centres hundreds of miles apart,” said Martin Lund, Executive Vice President of Cisco’s Common Hardware Group.
The industry has traditionally addressed capacity challenges through two approaches: scaling up (adding more capability to individual systems) or scaling out (connecting more systems within the same facility).
But both strategies are reaching their limits. Data centres are running out of physical space, power grids can’t supply enough electricity, and cooling systems can’t dissipate the heat fast enough.
This forces a third approach: “scale-across,” distributing AI workloads across multiple data centres that might be in different cities or even different states. However, this creates a new problem—the connections between these facilities become critical bottlenecks.
Why traditional routers fall short
AI workloads behave differently from typical data centre traffic. Training runs generate massive, bursty traffic patterns—periods of intense data movement followed by relative quiet. If the network connecting data centres can’t absorb these surges, everything slows down, wasting expensive computing resources and, critically, time and money.
Traditional routing equipment wasn’t designed for this. Most routers prioritise either raw speed or sophisticated traffic management, but struggle to deliver both simultaneously while maintaining reasonable power consumption. For AI data centre interconnect applications, organisations need all three: speed, intelligent buffering, and efficiency.
Cisco’s answer: The 8223 system
Cisco’s 8223 system represents a departure from general-purpose routing equipment. Housed in a compact three-rack-unit chassis, it delivers 64 ports of 800-gigabit connectivity—currently the highest density available in a fixed routing system. More importantly, it can process over 20 billion packets per second and scale up to three exabytes per second of interconnect bandwidth.
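The headline figures hang together arithmetically: 64 ports at 800 Gb/s each is exactly the 51.2 Tb/s quoted for the system. A minimal sketch of that back-of-the-envelope check (the port count, speed, and packets-per-second figure come from the announcement; the implied average packet size is a derivation, not a Cisco claim):

```python
# Sanity-check the 8223's headline numbers from the announcement.
PORTS = 64
PORT_SPEED_GBPS = 800

aggregate_gbps = PORTS * PORT_SPEED_GBPS   # 64 x 800 = 51,200 Gb/s
aggregate_tbps = aggregate_gbps / 1_000    # = 51.2 Tb/s

# At over 20 billion packets/s against 51.2 Tb/s, the implied
# average packet size at full rate (a derived figure, not quoted):
bits_per_packet = (aggregate_tbps * 1e12) / 20e9   # = 2,560 bits
bytes_per_packet = bits_per_packet / 8             # = 320 bytes

print(aggregate_tbps, bytes_per_packet)
```

In other words, the system can sustain line rate even with fairly small packets, which matters for the short, latency-sensitive messages common in AI training traffic.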
The system’s distinguishing feature is deep buffering capability, enabled by the P200 chip. Think of buffers as temporary holding areas for data—like a reservoir that catches water during heavy rain. When AI training generates traffic surges, the 8223’s buffers absorb the spike, preventing network congestion that would otherwise slow down expensive GPU clusters sitting idle waiting for data.
Power efficiency is another critical advantage. As a 3RU system, the 8223 achieves what Cisco describes as “switch-like power efficiency” while maintaining routing capabilities—crucial when data centres are already straining power budgets.
The system also supports 800G coherent optics, enabling connections spanning up to 1,000 kilometres between facilities—essential for geographic distribution of AI infrastructure.
Industry adoption and real-world applications
Major hyperscalers are already deploying the technology. Microsoft, an early Silicon One adopter, has found the architecture valuable across multiple use cases.
Dave Maltz, technical fellow and corporate vice president of Azure Networking at Microsoft, noted that “the common ASIC architecture has made it easier for us to expand from our initial use cases to multiple roles in DC, WAN, and AI/ML environments.”
Alibaba Cloud plans to use the P200 as a foundation for expanding its eCore architecture. Dennis Cai, vice president and head of network infrastructure at Alibaba Cloud, stated the chip “will enable us to extend into the Core network, replacing traditional chassis-based routers with a cluster of P200-powered devices.”
Lumen is also exploring how the technology fits into its network infrastructure plans. Dave Ward, chief technology officer and product officer at Lumen, said the company is “exploring how the new Cisco 8223 technology may fit into our plans to enhance network performance and roll out superior services to our customers.”
Programmability: Future-proofing the investment
One often-overlooked aspect of AI data centre interconnect infrastructure is adaptability. AI networking requirements are evolving rapidly, with new protocols and standards emerging regularly.
Traditional hardware typically requires replacement or expensive upgrades to support new capabilities. The P200’s programmability addresses this challenge.
Organisations can update the silicon to support emerging protocols without replacing hardware—important when individual routing systems represent significant capital investments and AI networking standards remain in flux.
Security considerations
Connecting data centres hundreds of miles apart introduces security challenges. The 8223 includes line-rate encryption using post-quantum resilient algorithms, addressing concerns about future threats from quantum computing. Integration with Cisco’s observability platforms provides detailed network monitoring to identify and resolve issues quickly.
Can Cisco compete?
With Broadcom and Nvidia already staking their claims in the scale-across networking market, Cisco faces established competition. However, the company brings advantages: a long-standing presence in enterprise and service provider networks, the mature Silicon One portfolio launched in 2019, and relationships with major hyperscalers already using its technology.
The 8223 ships initially with open-source SONiC support, with IOS XR planned for future availability. The P200 will be available across multiple platform types, including modular systems and the Nexus portfolio.
This flexibility in deployment options could prove decisive as organisations seek to avoid vendor lock-in while building out distributed AI infrastructure.
Whether Cisco’s approach becomes the industry standard for AI data centre interconnect remains to be seen, but the fundamental problem all three vendors are addressing—efficiently connecting distributed AI infrastructure—will only grow more pressing as AI systems continue scaling beyond single-facility limits.
The real winner may ultimately be determined not by technical specifications alone, but by which vendor can deliver the most complete ecosystem of software, support, and integration capabilities around their silicon.
AI News is powered by TechForge Media.