The Existential Hope Podcast
Podcast Description
The Existential Hope Podcast features in-depth conversations with people working on positive, high-tech futures. We explore how the future could be much better than today—if we steer it wisely.

Hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite the scientists, founders, and philosophers shaping tomorrow’s breakthroughs—AI, nanotech, longevity biotech, neurotech, space, smarter governance, and more.

About Foresight Institute: For 40 years the independent nonprofit Foresight Institute has mapped how emerging technologies can serve humanity. Its Existential Hope program is the North Star: mapping the futures worth aiming for and the breakthroughs needed to reach them. This podcast is that exploration in public. Follow along and help tip the century toward success.

Explore more:
- Transcript, listed resources, and more: https://www.existentialhope.com/podcasts
- Follow on X

Hosted on Acast. See acast.com/privacy for more information.
Podcast Insights
Content Themes
The podcast delves into themes including the potential of AI, nanotechnology, longevity biotech, neurotechnology, and smarter governance. For instance, episodes with David Deutsch focus on the principles of knowledge and progress, while discussions with David Pearce examine philosophical approaches to eliminating suffering and enhancing human potential.

When people think about AGI, most ask “When is it going to arrive?” or “What kind of AGI will we get?” Andrew Critch, AI safety researcher and mathematician, argues that the most important question is actually “What will we do with it?”
In our conversation, we explore how much our choices matter in the quest to make AGI a force for good. Andrew explains what AGI might look like in practical terms, and the consequences of its being trained on our culture. He also argues that searching for the “best” values an AI should have is a philosophical trap, and that we should instead focus on reaching basic agreement about “good” vs. “bad” behaviors.
The episode also covers concrete takes on the transition to AGI, including:
- Why an advanced intelligence would likely find killing humans “mean.”
- How automated computer security checks could be one of the best uses of powerful AI.
- Why the best preparation for AGI is simply to build helpful products today.
On the Existential Hope Podcast, hosts Allison Duettmann and Beatrice Erkers from the Foresight Institute invite scientists, founders, and philosophers for in-depth conversations on positive, high-tech futures.
Full transcript, listed resources, and more: https://www.existentialhope.com/podcasts
Follow on X.
Hosted on Acast. See acast.com/privacy for more information.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.