Variance
Podcast Description
Discover how bold ideas are born. In each episode, we sit down with trailblazers in science, tech, art, and design to uncover the variant mindsets and views that spark breakthroughs. Tune in for insights into what it takes to upend convention—and create something truly new.
Podcast Insights
Content Themes
Focuses on groundbreaking ideas and unconventional solutions across fields. Episodes address topics such as sustainable architecture in the context of climate change, featuring Dan Spiegel on housing crises and electric car infrastructure, as well as the implications of creative thinking for design and art.

Can we make moral AI agents? Can these agents get good enough to provide therapy and other personal services to humans, and even if they can, is that a good idea? Are language models sentient and deserving of moral concern – and how would we know? How do we incorporate a pluralistic set of views into AI systems?
Join Jared Moore, a computer scientist, AI alignment researcher, and educator probing how large language models understand (and sometimes misunderstand) human minds and values. Now at Stanford University, he investigates social reasoning, theory-of-mind, and the pitfalls of machine deception while co-creating courses like “How to Make a Moral Agent.” Jared blends rigorous research with creative outreach—publishing on pluralistic alignment, writing a satirical novel about conscious AI, and building installations that turn code into poetry—to push the question: how can we make AI systems reliably do what we want, for everyone’s benefit?
Show Notes:
Why LLMs Won’t Replace Therapists Anytime Soon
Are Large Language Models Consistent over Value-laden Questions?
The Strength of the Illusion: a satirical novel about AI
