Women in AI Research (WiAIR)

Podcast Description
Women in AI Research (WiAIR) is a podcast dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our mission is to challenge the perception that AI research is predominantly male-driven and to empower early-career researchers, especially women, to pursue their passion for AI and make an impact in this rapidly growing field. You will learn from women at different career stages, stay updated on the latest research and advancements, and hear powerful stories of overcoming obstacles and breaking stereotypes.
Podcast Insights
Content Themes
The podcast covers topics such as bias in AI, the limitations of transformer models, and the personal journeys of women in AI research. Episodes include discussions of the social implications of AI, technical challenges in language models, and the impact of diverse voices in the field.

How can we build AI systems that are fair, explainable, and truly responsible?
In this episode of the #WiAIR podcast, we sit down with Dr. Faiza Khan Khattak, CTO of an innovative AI startup, who brings a rich background in both academia and industry. From fairness in machine learning to the realities of deploying ML in healthcare, this conversation is packed with insights, real-world challenges, and powerful reflections.
REFERENCES:
- MLHOps: Machine Learning Health Operations
- Using Chain-of-Thought Prompting for Interpretable Recognition of Social Bias
- Dialectic Preference Bias in Large Language Models
- The Impact of Unstated Norms in Bias Analysis of Language Models
- Can Machine Unlearning Reduce Social Bias in Language Models?
- BiasKG: Adversarial Knowledge Graphs to Induce Bias in Large Language Models
👉 Whether you’re an AI researcher, a developer working on LLMs, or someone passionate about Responsible AI, this episode is for you.
📌 Subscribe to hear more inspiring stories and cutting-edge ideas from women leading the future of AI.
Follow us at:
♾️ Bluesky
♾️ X (Twitter)
#WomenInAI #WiAIR #ResponsibleAI #FairnessInAI #AIHealthcare #ExplainableAI #LLMs #AIethics #BiasMitigation #MachineUnlearning #InterpretableAI #AIstartup #AIforGood
