Tech Threads: Weaving the Intelligent Future
Podcast Description
This podcast, hosted by Baya Systems, explores the cutting edge of technology, from AI acceleration to data movement and chiplet innovation. Each episode dives into groundbreaking advancements shaping the future of computing, featuring insights from industry experts on the trends and challenges defining the tech landscape. Tune in to stay ahead in the rapidly evolving world of technology.
Podcast Insights
Content Themes
The podcast covers cutting-edge themes in technology such as AI acceleration, data movement, and chiplet innovation. For instance, in the inaugural episode, the discussion revolves around the rapid evolution of the semiconductor industry, highlighting the impact of AI on chip design and the implications of a slowing Moore's Law.

In this episode of Tech Threads: Weaving the Intelligent Future, Baya Systems’ CCO Nandan Nayampally welcomes Fabrizio Del Maffeo, founder and CEO of Axelera AI, one of Europe’s most promising AI semiconductor startups. The conversation opens with a sharp look at the growing shift from cloud to edge AI, exploring power, cost, and latency constraints and, more importantly, the regional and use-case considerations that are reshaping how and where intelligence is deployed.
The discussion covers strategies for deploying AI at the network edge, adapting to rapidly evolving workloads, and leveraging digital in-memory computing to enable low-power, high-throughput inference acceleration. It also delves into the future of chiplet-based design, the role of open and programmable hardware, and broader efforts to democratize compute. With shared perspectives on “scale within” and scalable system architectures, this episode offers a compelling view into the future of distributed AI.

Disclaimer
The information in this podcast is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.