The in-between trust podcast
Podcast Description
The in-between trust podcast explores how we build, break, and rebuild trust in a world shaped by accelerating technology. Hosted by Eva Simone Lihotzky, the podcast holds space for in-depth conversations at the intersection of AI, business, ethics, and human connection. Through interdisciplinary voices - from business and politics to neuroscience, tech, and systems thinking in organizations - it favors reflection over performance, making room for ambiguity, emotion, and meaningful deep dives into some of the most complex questions we need to solve as a society and beyond.
Podcast Insights
Content Themes
The podcast tackles complex themes surrounding the building and breaking of trust, with episode topics including the neurochemistry of trust, humane design in circular innovation, responsible AI ethics, and the impact of leadership on trust within AI systems. Specific episodes dive into how empathy enhances scientific discourse, the role of emotional intelligence in AI leadership, and the interaction between biological timelines and technological advancement.
A conversation on how emotionally intimate AI systems are built, monitored, and held together under real-world constraints.
🎧 Opening
This episode explores how trust is built, measured, and sometimes strained in AI systems designed for emotionally intimate conversations. It's a technical and ethical discussion for people working on conversational AI, product infrastructure, and safety in systems that users form real attachments to. The focus stays on operational reality - what engineers actually face when AI moves from tool to companion.
🔍 Episode overview
Eva Simone Lihotzky speaks with Lior Oren about what it means to run AI companions at scale, where user trust is not an abstract principle but a daily KPI. Drawing on his experience as CTO of Replika and prior work on integrity teams at Meta, Lior explains how unpredictability, observability, and emotional reliance shape engineering decisions.
The conversation examines tensions between flexibility and stability, innovation and guardrails, and regulation and lived product reality. Rather than future speculation, it stays grounded in how teams design memory, user control, and safety systems when conversations themselves are the product.
🧩 Key themes discussed
Trust treated as a measurable success metric, not a philosophical goal
Why observability is essential in statistical, non-deterministic AI systems
Guardrails as part of core infrastructure, similar to security or reliability
Emotional attachment influencing uptime, priorities, and team culture
User agency through transparency, memory control, and conversational steering
The risk of breaking “tone” and continuity when models change
Limits of regulation and the trade-offs inherent in statistical safety systems
