Women in AI Research (WiAIR)
Podcast Description
Women in AI Research (WiAIR) is a podcast dedicated to celebrating the remarkable contributions of female AI researchers from around the globe. Our mission is to challenge the prevailing perception that AI research is predominantly male-driven, and our goal is to empower early-career researchers, especially women, to pursue their passion for AI and make an impact in this rapidly growing field. You will learn from women at different career stages, stay updated on the latest research and advancements, and hear powerful stories of overcoming obstacles and breaking stereotypes.
Podcast Insights
Content Themes
The podcast focuses on topics such as bias in AI, the limitations of transformer models, and the personal journeys of women in AI research. Episodes include discussions of the social implications of AI, technical challenges in language models, and the impact of diverse voices on the AI field.

Do large language models actually understand meaning, or are we over-interpreting impressive behavior?
In this episode, we speak with Maria Ryskina, CIFAR AI Safety Postdoctoral Fellow at the Vector Institute for AI, whose research bridges neuroscience, cognitive science, and artificial intelligence. Together, we unpack what the brain can (and cannot) teach us about modern AI systems — and why current evaluation paradigms may be missing something fundamental.
We explore how language models can predict brain activity in regions linked to visual processing, what this reveals about cross-modal knowledge, and why scale alone may not resolve deeper conceptual gaps in AI. The conversation also tackles the growing importance of interpretability, especially as AI systems become more embedded in high-stakes, real-world contexts.
Beyond technical questions, Maria shares why community matters in AI research, particularly for underrepresented groups — and how diversity directly shapes the kinds of scientific questions we ask and the systems we ultimately build.
REFERENCES
- Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
- Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models
- Language models align with brain regions that represent concepts across modalities
- Elements of World Knowledge (EWoK): A Cognition-Inspired Framework for Evaluating Basic World Knowledge in Language Models
- Prompting is not a substitute for probability measurements in large language models
- Auxiliary task demands mask the capabilities of smaller language models
🎧 Subscribe to stay updated on new episodes spotlighting brilliant women shaping the future of AI.
Follow us at:

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.