Tech Threads: Weaving the Intelligent Future

Podcast Description
This podcast, hosted by Baya Systems, explores the cutting edge of technology, from AI acceleration to data movement and chiplet innovation. Each episode dives into groundbreaking advancements shaping the future of computing, featuring insights from industry experts on the trends and challenges defining the tech landscape. Tune in to stay ahead in the rapidly evolving world of technology.
Podcast Insights
Content Themes
The podcast covers cutting-edge themes in technology such as AI acceleration, data movement, and chiplet innovation. For instance, in the inaugural episode, the discussion revolves around the rapid evolution of the semiconductor industry, highlighting the impact of AI on chip design and the implications of a slowing Moore's Law.

In this episode of Tech Threads, Nandan Nayampally sits down with Sally Ward-Foxton (EE Times) and Dr. Ian Cutress (More Than Moore) for an unfiltered look at the state of AI, from the far edge to hyperscale data centers.
Ahead of the recording, we asked our LinkedIn followers to weigh in on some of the biggest questions in AI today, from bottlenecks in system design to the future of GPUs. Those poll results are revealed and discussed in the episode, bringing your insights directly into the conversation.
The discussion covers where the real bottlenecks lie in AI system design, whether "AI at the edge" is living up to the hype, and whether GPUs will continue to dominate or give way to new architectures. With insights on hardware-software co-design, open vs. proprietary ecosystems, and the realities of scaling AI infrastructure, this episode blends deep technical perspective with candid industry observations.
If you care about AI performance, power efficiency, and what’s next in compute architecture, this is a discussion you won’t want to miss.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.