Data Faces Podcast

Podcast Description
Data Faces is a podcast that brings the human stories behind data, analytics, and AI to the forefront. Join us for engaging interviews and discussions with the industry’s leading voices—the leaders, practitioners, and tech innovators who are shaping the future of data-driven decision-making. In each episode, we explore the culture, challenges, and real-life experiences of the people behind the numbers. Whether you’re a tech executive, data professional, or just curious about the impact of data on our world, Data Faces offers a refreshing look at the individuals and ideas driving the next wave.
Podcast Insights
Content Themes
The podcast covers themes such as the impact of AI on business, data integrity, historical lessons on technology, and ethical considerations in AI. Episodes include discussions on why 90% of Gen AI projects might fail, the role of trusted data in successful AI initiatives, and how historical revolutions inform our understanding of AI's future potential.

📢 Can we really trust AI without trustworthy data?
Field CTO Shane Murray of Monte Carlo Data shares what “AI-ready” actually means, and why most data teams are underprepared for the shift to generative AI.
In this episode, we explore the practical and philosophical challenges behind building data products that can power AI applications — from defining quality in unstructured data to the ripple effects of small changes in AI systems. Shane draws on his experience leading data at The New York Times and now helping organizations scale observability and governance at Monte Carlo Data.
🔍 Key Takeaways:
Why the term “AI-ready” is often misunderstood — and what it really takes
How unstructured data quality and observability differ from traditional structured approaches
The hidden risks of hallucinations, model drift, and multi-agent errors
Why governance can’t be “pumped in” after the fact — it must be designed in from the start
A pragmatic path for data teams: start small, keep humans in the loop, and build what matters
⏳ Timestamps for Easy Navigation:
00:00 – Intro & Shane Murray’s background
03:23 – What does “AI-ready” actually mean?
07:54 – Measuring quality in unstructured data
12:43 – The hidden causes of AI hallucinations
18:23 – Multi-agent systems and compounding errors
20:31 – Rethinking AI governance in enterprise environments
25:35 – Can we ever truly trust AI?
30:45 – The future of trustworthy AI systems
34:38 – Shane’s advice to data teams and where to start
📩 More insights & resources:
👉 [Link to blog post or Substack recap here]
🔗 Connect with Shane Murray:
💼 LinkedIn: https://www.linkedin.com/in/shanemurray5/
🌎 Website: https://www.montecarlodata.com
💬 What stood out to you most? Let us know in the comments.
👍 Like this episode? Subscribe and share for more conversations on data, AI, and analytics leadership.
#AIReadyData #DataGovernance #TrustworthyAI

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.