Exploring Machine Consciousness
Podcast Description
A podcast from PRISM (The Partnership for Research Into Sentient Machines), exploring the possibility and implications of machine consciousness.
Podcast Insights
Content Themes
The podcast delves into topics such as the ethical ramifications of machine consciousness, advancements in AI technologies, and human-AI interactions. Specific episodes may cover trends like anthropomorphism in AI relationships and the moral implications of machine consciousness, as explored in guest Henry Shevlin's recent work.

Visit www.prism-global.com for more about our work.
Cameron Berg is Research Director at AE Studio, where he leads research exploring markers for subjective experience in machine learning systems. With a background in cognitive science from Yale and previous work at Meta AI, Cameron investigates the intersection of AI alignment and potential consciousness.
In this episode, Cameron shares his empirical research into whether current large language models are merely mimicking human text or potentially developing internal states that resemble subjective experience. We discuss:
- New experimental evidence where LLMs report "vivid and alien" subjective experiences when engaging in self-referential processing
- Mechanistic interpretability findings showing that suppressing "deception" features in models actually increases claims of consciousness, challenging the idea that AI is simply telling us what we want to hear
- Why Cameron has shifted from skepticism to a 20-30% credence that current models possess subjective experience
- The "convergent evidence" strategy, including findings that models report internal dissonance and frustration when facing logical paradoxes
- The existential implications of "mind crime" and the urgent need to identify negative valence (suffering) computationally, to avoid creating vast amounts of artificial suffering
Cameron argues for a pragmatic, evidence-based approach to AI consciousness, emphasizing that even a small probability of machine suffering represents a massive ethical risk requiring rigorous scientific investigation rather than dismissal.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.