The Emergent AI
Podcast Description
Welcome to The Emergent, the podcast where two seasoned AI executives unravel the complexities of Artificial Intelligence as a transformative force reshaping our world. Each episode bridges the gap between cutting-edge AI advancements, human adaptability, and the philosophical frameworks that drive them.
Join us for high-level insights, thought-provoking readings, and stories of collaboration between humans and AI. Whether you’re an industry leader, educator, or curious thinker, The Emergent is your guide to understanding and thriving in an AI-powered world.
Podcast Insights
Content Themes
The podcast tackles themes around AI's transformative role in a range of contexts. Episodes such as The Linguistic Singularity explore the relationship between language and intelligence, focusing on how human language acquisition influences AI development and reasoning.

🎙️ The Emergent Podcast – Episode 7
Machine Ethics: Do unto agents…
with Justin Harnish & Nick Baguley
In Episode 7, Justin and Nick step directly into one of the most complex frontiers in emergent AI: machine ethics — what it means for advanced AI systems to behave ethically, understand values, support human flourishing, and possibly one day feel moral weight.
This episode builds on themes from the AI Goals Forecast (AI-2027), embodied cognition, consciousness, and the hard technical realities of encoding values into agentic systems.
🔍 Episode Summary
Ethics is no longer just a philosophical debate — it’s now a design constraint for powerful AI systems capable of autonomous action. Justin and Nick unpack:
- Why ethics matters more for AI than any prior technology
- Whether an AI can “understand” right and wrong or merely behave correctly
- The technical and moral meaning of corrigibility (the ability for AI to accept correction)
- Why rules-based morality may never be enough
- Whether consciousness is required for morality
- How embodiment might influence empathy
- And how goals, values, and emergent behavior intersect in agentic AI
They trace ethics from Aristotle to AI-2027’s goal-based architectures, to Damasio’s embodied consciousness, to Sam Harris’ view of consciousness and the illusion of self, to the hard problem of whether a machine can experience moral stakes.
🧠 Major Topics Covered
1. What Do We Mean by Ethics?
Justin and Nick begin by grounding ethics in its philosophical roots:
Ethos → virtue → flourishing.
Ethics isn’t just rule-following — it’s about character, intention, and outcomes.
They connect this to the ways AI is already making decisions in vehicles, financial systems, healthcare, and human relationships.
2. AI Goals & Corrigibility
AI-2027 outlines a hierarchy of AI goal types — from written specifications to unintended proxies to reward hacking to self-preservation drives.
Nick explains why corrigibility — the ability for AI to accept shutdown or redirection — is foundational.
Anthropic’s Constitutional AI makes an appearance as a real-world example.
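To make the idea of corrigibility concrete, here is a minimal, hypothetical sketch in Python. It is not from the episode and not Anthropic's actual method; the names (Overseer, CorrigibleAgent, plan_next_action) are invented for illustration. The point is simply that the shutdown and redirection checks come before the agent pursues its goal and cannot be optimized away by the goal itself.

```python
# Illustrative sketch only: corrigibility treated as a hard constraint in an
# agent loop. All names here (Overseer, CorrigibleAgent) are hypothetical.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Overseer:
    """Stands in for a human operator who can halt or redirect the agent."""
    shutdown_requested: bool = False
    redirected_goal: Optional[str] = None


@dataclass
class CorrigibleAgent:
    goal: str
    overseer: Overseer
    plan_next_action: Callable[[str], str]
    log: List[str] = field(default_factory=list)

    def step(self) -> bool:
        # The corrigibility check runs before any goal-directed work:
        # the agent defers to shutdown rather than routing around it.
        if self.overseer.shutdown_requested:
            self.log.append("halted by overseer")
            return False
        # Accept redirection: the overseer's goal replaces the agent's own.
        if self.overseer.redirected_goal is not None:
            self.goal = self.overseer.redirected_goal
            self.overseer.redirected_goal = None
        action = self.plan_next_action(self.goal)
        self.log.append(f"executed: {action}")
        return True


# Usage: the agent runs until the overseer intervenes.
overseer = Overseer()
agent = CorrigibleAgent(goal="summarize reports",
                        overseer=overseer,
                        plan_next_action=lambda g: f"work on '{g}'")
agent.step()
overseer.redirected_goal = "draft an ethics review"
agent.step()
overseer.shutdown_requested = True
agent.step()
print(agent.log)
```

The hard part, as the episode notes, is not writing the check but ensuring a capable system never learns to treat it as an obstacle to its goals.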
3. Goals vs. Values
Justin distinguishes between:
- Goals: task-specific optimization criteria
- Values: deeper principles shaping which goals matter
AI may follow rules without understanding values — similar to a child with chores but no moral context.
This raises the key question:
Can a system have values without consciousness?
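Setting consciousness aside for a moment, the goals-versus-values distinction itself can be sketched in code. The toy example below is hypothetical and not from the episode: goals are candidate optimization targets, and values are principles that filter which goals are acceptable before any optimization happens.

```python
# Toy illustration (hypothetical): goals as candidate optimization targets,
# values as principles that veto goals before the agent optimizes anything.

from typing import Callable, List, NamedTuple, Optional


class Goal(NamedTuple):
    description: str
    expected_reward: float


# A "value" here is just a named predicate over goals; real value alignment
# is far harder than keyword checks, which is part of the episode's point.
Value = Callable[[Goal], bool]

respects_privacy: Value = lambda g: "scrape personal data" not in g.description
is_honest: Value = lambda g: "mislead" not in g.description


def choose_goal(candidates: List[Goal], values: List[Value]) -> Optional[Goal]:
    """Pick the highest-reward goal that every value endorses."""
    acceptable = [g for g in candidates if all(v(g) for v in values)]
    return max(acceptable, key=lambda g: g.expected_reward, default=None)


candidates = [
    Goal("scrape personal data to boost engagement", expected_reward=9.0),
    Goal("mislead users about pricing", expected_reward=7.5),
    Goal("improve recommendations with consented data", expected_reward=6.0),
]

print(choose_goal(candidates, [respects_privacy, is_honest]))
# -> the lower-reward but value-consistent goal is selected
```

Of course, encoding values as simple predicates is exactly the rules-based morality the episode argues may never be enough; the sketch only shows where values sit relative to goals, not how to get them right.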
4. Is Consciousness Required for Ethics?
A major thread of the episode:
Is a non-conscious “zombie” AI capable of morality?
5. Embodiment & Empathy
Justin and Nick explore whether AI needs a body — or at least a simulated body — to:
- Learn empathy
- Understand suffering
- Form values rooted in lived experience
This touches robotics, synthetic emotions, and the debate over “felt consciousness.”
6. Value Alignment, Fairness & Culture
Nick highlights the massive cultural gap in AI performance:
- U.S. cultural fit ~79%
- Ethiopia and other underrepresented regions ~12%
This matters for fairness, safety, and global ethics.
7. Can AI Help Us Become More Moral?
A surprising turn: AI’s ability to help humans improve moral clarity.
Justin draws from Sam Harris, Joseph Goldstein, and The Moral Landscape:
- Could AI-guided mindfulness help reduce suffering?
- Could conscious (or proto-conscious) AI develop compassion?
- Could AI help us distinguish genuine well-being from illusion?
📚 Referenced Ideas & Sources
From the Episode 7 Transcript & Materials:
- AI Goals Forecast (AI-2027)
- Constitutional AI (Anthropic)
- Damasio – Feeling & Knowing
- Sam Harris – Waking Up & The Moral Landscape
- Patrick House – Nineteen Ways of Looking at Consciousness
- Melanie Mitchell – Complexity & alignment
- Justin Harnish – Meaning in the Multiverse
- Ancient Greek virtue ethics (Aristotle, Stoics)
🧩 Key Takeaways
- AI ethics requires more than rules — it requires understanding goals, values, and emergent behavior.
- Corrigibility (accepting correction) is essential but technically hard.
- Consciousness may not be necessary for ethical AI behavior — but could matter for genuine moral understanding.
- Embodiment could be essential for empathy.
- AI could one day help humans become more ethical, not just the other way around.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.