Women in AI Research (WiAIR)
Podcast Description
Women in AI Research (WiAIR) is a podcast dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our mission is to challenge the prevailing perception that AI research is predominantly male-driven. Our goal is to empower early career researchers, especially women, to pursue their passion for AI and make an impact in this rapidly growing field. You will learn from women at different career stages, stay updated on the latest research and advancements, and hear powerful stories of overcoming obstacles and breaking stereotypes.
Podcast Insights
Content Themes
The podcast focuses on topics such as bias in AI, the limitations of transformer models, and the personal journeys of women in AI research. Episodes include discussions on the social implications of AI, technical challenges in language models, and the overall impact of diverse voices in the AI field.

In this conversation, Dr. Hila Gonen (Assistant Professor at the University of British Columbia) joins us to explore how large language models (LLMs) leak semantic information, how they behave across languages, and how researchers can trace these behaviours back to their root causes. Dr. Gonen shares her journey in interpreting AI systems, addressing biases, and controlling model outputs for safer, fairer applications.
In this episode:
- The influence of prompt elements, like colour, on model predictions
- How semantic leakage impacts model outputs unintentionally
- The role of multilinguality and modality in model safety and behaviour
- Interventional vs. observational approaches to understanding models
- Challenges in controlling and aligning AI behaviour across languages and domains
- Future directions in model interpretability, safety, and causal analysis
Key Topics:
- Color and semantic influence on language model completions
- The concept of semantic leakage and examples from real prompts
- Differences between bias, hallucination, and leakage failures
- Unintended behaviours discovered through experimentation
- The importance of model interpretability and transparency
- Roots of behaviour: training data and internal representations
- Interventional analysis as a causal tool in NLP research
- Cross-lingual and cross-modal alignment in safety detection
- Challenges in evaluating safety across languages and modalities
- Strategies for building robust controls against unseen attack types
- The future of AI research: combining performance with reliability and safety
- Ethical considerations: avoiding directions that hinder societal benefits
Resources & Links:
- Lipstick on a Pig: Debiasing Methods Cover up Systematic Gender Biases in Word Embeddings But do not Remove Them
- Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models
- Rewriting History: A Recipe for Interventional Analyses to Study Data Effects on Model Behavior
- OMNIGUARD: An Efficient Approach for AI Safety Moderation Across Modalities and Languages
Connect with Dr. Hila Gonen:
Note: This episode emphasizes practical and theoretical challenges in model interpretability, safety, bias detection, and causality—providing a comprehensive view suitable for researchers, practitioners, and AI enthusiasts interested in responsible AI development.
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
Follow us at:

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.