Aviz Networks Podcast Series
Podcast Description
Explore Networking 3.0 and its AI-driven future in our exclusive podcast series.
Podcast Insights
Content Themes
Focuses on the intersection of artificial intelligence and networking technologies. Episodes cover topics such as open networking practices, the impact of SONiC in enterprise environments, and how automation can alleviate burnout in tech roles, with insights from experts such as Zaid Kahn and Ethan Banks.

In this episode, Scott Raynovich (Founder & Principal Analyst at Futurum) speaks with Thomas (Chief Product Officer at Aviz) about what has changed in networking as AI clusters and “AI factories” scale up fast. AI traffic isn’t just bigger; it’s fundamentally different: east-west heavy, latency-sensitive, bursty, and dominated by machine-driven collective flows.
The conversation explores how modern AI infrastructure requires multiple networks (front-end and back-end), and why moving data in/out of GPU servers is now a major design consideration.
What you’ll learn in this episode:
1. Why AI traffic behaves differently than traditional client-server workloads
2. The rise of multiple AI networks: front-end (north-south) and GPU back-end (east-west)
3. How open networking + SONiC helps reduce cost and improve flexibility
4. Why standardization lowers operational complexity across mixed hardware environments
5. How blueprints, validation, and automation speed up AI fabric deployments
6. “Networks for AI” vs. “AI for Networking” – applying AI to day-0/day-1/day-2 operations
7. What network observability means in AI environments: KPIs, prediction, faster RCA
8. What’s new at Aviz: Certified Community SONiC and faster deployment tooling
9. Aviz’s partnership with NVIDIA Spectrum-X and reference architecture automation
10. What to expect in 2026: enterprise clusters, GPU-as-a-service/neo-clouds, sovereign AI footprints, and stronger focus on data + storage workflows
If you’re building or operating AI infrastructure, whether for training, inference, or GPUaaS, this conversation breaks down what matters most: speed to deploy, operational simplicity, and performance at scale.
#AINetworking #SONiC #OpenNetworking #NetworkAutomation #AIOps #NetworkObservability #NVIDIA #SpectrumX #AIFactory #GPUClusters #DataCenterNetworking #Futurum #Aviz
