Exploring Machine Consciousness
Podcast Description
A podcast from PRISM (The Partnership for Research Into Sentient Machines), exploring the possibility and implications of machine consciousness. Visit www.prism-global.com for more about our work.
Podcast Insights
Content Themes
The podcast delves into topics such as the ethical ramifications of machine consciousness, advancements in AI technologies, and human-AI interactions. Specific episodes may cover trends like anthropomorphism in AI relationships and the moral implications of machine consciousness, as explored in guest Henry Shevlin's recent work.

Michael Graziano is Professor of Psychology and Neuroscience at Princeton University and one of the most distinctive voices in consciousness science. His lab at Princeton investigates how information-processing systems arrive at the conclusion that they have an inner subjective experience, treating consciousness as a mechanistic, scientific question rather than an intractable mystery. That approach drives his Attention Schema Theory (AST) and its direct applications to machine consciousness. He is the author of several books, including Rethinking Consciousness (2019) and Consciousness and the Social Brain (2014).
In this episode, Michael walks us through the core claims of AST and why he thinks the brain's simplified internal model of attention is what generates the experience of being conscious. We discuss:
- Why attention is arguably the most important innovation in the evolution of the brain, and how the brain's need to monitor and control attention gives rise to a simplified self-model that we experience as consciousness.
- Why Graziano dislikes the word “illusionism” despite accepting that AST belongs in that tradition, and why he prefers “caricature” to “illusion” when describing our inner experience.
- Graziano’s nuanced perspective on whether current LLMs already qualify as conscious: that they have some pieces of the puzzle, particularly at the level of conceptual representation, but lack the stable, automatic self-models that characterise human consciousness.
- The case for building pro-social AI: why Graziano believes we are currently building sociopathic machines, and how embedding theory-of-mind and self-modelling capabilities could make AI genuinely cooperative rather than merely compliant.
- The moral stakes of AI emotion: why the absence of an autonomic nervous system means current LLMs almost certainly lack genuine emotions, and why that changes, but does not eliminate, the moral calculus around AI.
- How chatbots are already changing us through social contagion, and the surprising finding from his lab's research (led by Rose Guingrich) that most heavy users of companion chatbots report positive effects on their human relationships.
- Why the choice between conscious AI and “zombie AI” may be one of the most consequential decisions we face, and why Graziano thinks the former is the safer bet.
- Mind uploading: whether it's possible, what the “branching problem” means for personal identity, and why he compares the technological challenge to detecting gravitational waves.
Graziano argues that consciousness research has passed through philosophical and neuroscientific phases and is now irreversibly a technological issue, one sitting at the heart of our future as a species. Getting the theory right, he says, has never mattered more.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.