MLOps London On Air

Podcast Description
MLOps London On Air is a podcast spin-off from the MLOps London quarterly meetups, diving into the evolving strategy and market landscape of ML and AI. Each episode explores how emerging technologies and industry shifts are shaping real-world applications across sectors.
Join our meetup: https://www.meetup.com/mlopslondon/
Explore our event and podcast sponsor: https://www.seldon.io/
Podcast Insights
Content Themes
The podcast emphasizes content themes such as responsible AI, market landscape shifts, and deployment strategies. Episodes cover topics including ethical considerations in AI, government regulation, and emerging technologies; the premiere episode, for example, features Prerna Kaul discussing AI deployment and the challenges of scaling GenAI platforms in enterprise settings.

Raluca Crisan, CTO of Etiq AI, joins us to explore what it really takes to build responsible, scalable AI in today’s high-pressure, fast-iterating tech environments. Drawing from her experience in data science and model governance across startups and impact-driven organizations like Zinc, Raluca shares how teams can move from reactive fixes to proactive safeguards without slowing innovation down.
We unpack why most data scientists still struggle with bias detection and testing, how orchestration tooling is evolving to support real-world deployment cycles, and what it means to operationalize responsibility from inside the data pipeline. The conversation touches on invisible risks in behavioral data, lessons from building testing tools that data scientists actually want to use, and the nuanced challenge of debugging AI failures in live environments.
We also look at why generative AI has heightened the urgency around model oversight, how LLMs mirror user bias, and why automation-first approaches to testing may be key to unlocking trust at scale.
Tune in for a wide-ranging discussion on responsible AI, emergent failure modes, and what it takes to make testing as intuitive and indispensable as model training.
MLOps London Meetup: https://www.meetup.com/mlopslondon/
Learn more about Etiq AI: https://www.etiq.ai/

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.