Pop Goes the Stack
Podcast Description
Explore the evolving world of application delivery and security. Each episode will dive into technologies shaping the future of operations, analyze emerging trends, and discuss the impacts of innovations on the tech stack.
Podcast Insights
Content Themes
The show focuses on application delivery and security, exploring topics such as multimodal AI, security vulnerabilities in modern technologies, and the future of operations. Episodes like 'Multimodal Madness: The Darth Vader Debacle' analyze both the innovative and problematic aspects of emerging tech.

Agents break the old rules of observability. Latency, throughput, and error rates still matter, but once software starts making decisions and taking actions on someone else’s behalf, the real question becomes: is it doing the right thing, and is it doing it for the right reasons?
In this episode of Pop Goes the Stack, Lori MacVittie and Joel “OpenClaw” Moses are joined by observability expert Chris Hain to unpack what changes when systems become agentic. Instead of a single prompt-response interaction, you get decision chains that branch, loop, call tools, and evolve over time. A system can “succeed” operationally while still being wrong, expensive, or misaligned with intent.
Chris argues you don’t have to throw away what already works. Distributed tracing still applies, but now each agent step becomes a span, decorated with richer metadata like model identity, tool calls, token usage, prompts, and cost. The discussion also dives into why standardization matters, including OpenTelemetry and emerging semantic conventions for generative and agentic AI, and why auto-instrumentation approaches like eBPF become critical when agents generate code that has no built-in telemetry.
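The "each agent step becomes a span" idea can be sketched in plain Python. This is an illustrative model only, not the real OpenTelemetry SDK; the attribute names (`gen_ai.request.model`, `gen_ai.usage.input_tokens`, `cost.usd`, `tool.name`) are modeled loosely on the emerging gen-AI semantic conventions and should be treated as assumptions:

```python
from dataclasses import dataclass, field
from typing import Optional
import uuid

@dataclass
class Span:
    """One agent step, modeled as a tracing span with free-form attributes."""
    name: str
    trace_id: str
    parent_id: Optional[str] = None
    span_id: str = field(default_factory=lambda: uuid.uuid4().hex[:16])
    attributes: dict = field(default_factory=dict)

class AgentTracer:
    """Emits one span per agent step, all sharing a single trace id."""
    def __init__(self):
        self.trace_id = uuid.uuid4().hex
        self.spans = []

    def step(self, name, parent=None, attributes=None):
        span = Span(
            name=name,
            trace_id=self.trace_id,
            parent_id=parent.span_id if parent else None,
            attributes=attributes or {},
        )
        self.spans.append(span)
        return span

tracer = AgentTracer()
# Root span for the whole task; child spans for each step the agent takes.
root = tracer.step("agent.task", attributes={"gen_ai.request.model": "example-model"})
tool = tracer.step("agent.tool_call", parent=root, attributes={
    "tool.name": "search",              # which tool the agent invoked
    "gen_ai.usage.input_tokens": 812,   # token usage, recorded per step
    "gen_ai.usage.output_tokens": 64,
    "cost.usd": 0.0031,                 # attributed cost, recorded per step
})
# Cost rolls up across the whole decision chain, not one prompt-response pair.
total_cost = sum(s.attributes.get("cost.usd", 0.0) for s in tracer.spans)
```

The point of the parent/child links is that a branching, looping decision chain stays reconstructable after the fact, exactly as in conventional distributed tracing.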
Joel adds a new set of metrics that feel uncomfortably necessary: decision loops per task, drift in tool-call chains, human override frequency, and the cost and token patterns that signal something has changed. The group also tackles the awkward feedback loop of using agents to make observability actionable, while acknowledging the risk of agents optimizing the dashboard instead of the system.
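Two of those metrics, decision loops per task and human override frequency, reduce to simple ratios over counted events. A minimal sketch, with hypothetical event names chosen for illustration:

```python
from collections import Counter

class AgentMetrics:
    """Tallies agentic health signals: loop density and human override rate."""
    def __init__(self):
        self.counts = Counter()

    def record(self, event: str, n: int = 1):
        self.counts[event] += n

    def loops_per_task(self) -> float:
        """Average number of decision loops each completed task required."""
        tasks = self.counts["task.completed"]
        return self.counts["decision.loop"] / tasks if tasks else 0.0

    def override_rate(self) -> float:
        """Fraction of agent decisions a human had to override."""
        decisions = self.counts["decision.made"]
        return self.counts["human.override"] / decisions if decisions else 0.0

m = AgentMetrics()
m.record("task.completed", 4)
m.record("decision.loop", 10)   # loops observed across those four tasks
m.record("decision.made", 50)
m.record("human.override", 5)   # a human stepped in on 5 of 50 decisions
```

A sudden rise in either ratio, with latency and error rates unchanged, is the kind of "operationally fine but drifting" signal the episode describes.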
If you’re building agentic workflows, this episode is a practical guide to why “failed successfully” is now a real production state, and why instrumenting for correctness and intent alignment is the next observability frontier.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.