Cisco said its Silicon One G300 switch chip, which is expected to go on sale in the second half of the year, is designed to accelerate the flow of information across massive data centers, News.Az reports, citing Reuters.
The chip aims to improve how processors that train and deploy AI systems communicate with each other across hundreds of thousands of network links.
The Silicon One G300 will be manufactured using Taiwan Semiconductor Manufacturing Co’s advanced 3-nanometer process. According to Cisco, the chip includes several new “shock absorber” features intended to prevent AI networks from slowing down when they experience sudden surges in data traffic.
“These situations occur when you have tens of thousands or even hundreds of thousands of connections, and they happen quite regularly,” said Martin Lund, executive vice president of Cisco’s common hardware group, in an interview with Reuters. “We focus on the total end-to-end efficiency of the network.”
Cisco estimates that the new chip could help certain AI computing tasks run up to 28% faster. This performance gain is partly driven by the chip’s ability to automatically reroute data around network issues within microseconds, reducing congestion and downtime.
Networking has become a critical battleground in the AI race. When Nvidia introduced its latest AI systems last month, one of the six core components was a networking chip that competes directly with Cisco’s products. Broadcom is also targeting the same space with its Tomahawk family of networking chips, intensifying competition in the sector.