The Healthcare AI Podcast
Podcast Description
Explore real-world applications of generative AI, large language models, and advanced NLP in The Healthcare AI Podcast. We dive into healthcare, finance, legal, life sciences, and more with expert interviews, practical case studies, and insights on open-source tools and frameworks. Discover how organizations deploy AI at scale, navigate ethical and technical challenges, and unlock transformative business value. Open and impactful discussions for AI professionals and enthusiasts.
Podcast Insights
Content Themes
The show covers diverse topics such as healthcare AI, NLP applications in finance, legal tech innovations, and advances in life sciences. Episodes examine concrete examples, such as evaluating LLMs in healthcare settings, through case studies that highlight transformative AI use cases and ethical considerations, including episodes on the impact of AI on clinical coding and patient data privacy.
AI in healthcare can save lives, or it can put them at risk. This episode explores guardrails, safety LLMs, regulation, and why generic AI controls fail in clinical settings.
Timestamps:
00:00 Introduction
01:26 What are LLM guardrails and why do they matter in healthcare
02:36 Why AI hallucinations are dangerous in medical settings
03:47 Why people still use chatbots for medical advice
05:13 Why generic AI safety tools fail in healthcare
06:16 Regulation pressure: US vs Europe
09:03 Guardrail frameworks: Guardrails AI, NeMo, Llama Guard
15:08 Safety LLMs and red teaming medical AI
22:17 Why healthcare AI needs application-specific testing
27:49 Shift-left AI safety and responsible design
32:44 The ELIZA effect
37:27 Practical advice for teams building healthcare AI
RESOURCES FOR LISTENERS ►
Papers:
– Hakim et al. (2025) "The need for guardrails with large language models in pharmacovigilance."
– Meta's Llama Guard paper: "Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations" (arXiv:2312.06674)
– Ayala-Lauroba et al. (2024) "Enhancing Guardrails for Safe and Secure Healthcare AI" (arXiv:2409.17190)
Code and Models:
– Hakim et al. analysis code: https://github.com/jlpainter/llm-guardrails/
– Llama Guard: Available on Hugging Face (requires approval)
– gpt-oss-safeguard: https://huggingface.co/openai/gpt-oss-safeguard-20b (Apache 2.0)
Medical Ontologies:
– MedDRA (Medical Dictionary for Regulatory Activities): https://www.meddra.org/
– WHO Drug Dictionary: Via Uppsala Monitoring Centre
Regulatory Guidance:
– EMA AI Reflection Paper: https://www.ema.europa.eu/en/about-us/how-we-work/data-regulation-big-data-other-sources/artificial-intelligence
– FDA AI Guidance: Available on FDA.gov
LISTEN ON ►
YouTube: https://youtu.be/IWoARQ0G7sg
Apple Podcasts: https://podcasts.apple.com/us/podcast/the-healthcare-ai-podcast/id1827098175
Spotify: https://open.spotify.com/show/2XNrQBeCY7OGql2jVhcP7t
Amazon Music: https://music.amazon.com/podcasts/5b1f49a6-dba8-479e-acdf-9deac2f8f60e/the-healthcare-ai-podcast
FOLLOW ►
Website: https://www.johnsnowlabs.com/
LinkedIn: https://www.linkedin.com/company/johnsnowlabs/
Facebook: https://www.facebook.com/JohnSnowLabsInc/
X (Twitter): https://x.com/JohnSnowLabs
#HealthcareAI #AIGuardrails #MedicalAI #AISafety #AIEthics #HealthTech #AIRegulation #DigitalHealth #AIinMedicine #MLOps #AICompliance #AIHallucinations