YAAP (Yet Another AI Podcast)

YAAP (Yet Another AI Podcast)
Podcast Description
YAAP brings you practical conversations with the people actually building generative AI solutions. No hype, no sales pitches, just honest discussions about challenges, solutions, and lessons learned.
Listen to developers and engineers share what works, what doesn't, and what they wish they'd known sooner. Simple, useful insights for anyone working with AI — hosted by AI21's Yuval Belfer.
Podcast Insights
Content Themes
The show covers key themes in generative AI, focusing on real-world applications, challenges faced by developers, and lessons learned from the field, with episodes such as 'Tool Calling 2.0: How MCP Is Standardizing AI Connections' exploring topics like integration of AI with existing software like Jira and Notion as well as authentication challenges in cloud environments.

YAAP brings you practical conversations with the people actually building generative AI solutions. No hype, no sales pitches, just honest discussions about challenges, solutions, and lessons learned.
Listen to developers and engineers share what works, what doesn’t, and what they wish they’d known sooner. Simple, useful insights for anyone working with AI — hosted by AI21’s Yuval Belfer.
The Call Is Coming From Inside the Agent (And It Has Your Credentials)
You’ve shipped your first agent. It works. It’s useful. It might also be a security liability you don’t even know about. In this episode, Yuval talks to Zenity CTO Michael Bargury about how easy it is to hijack popular agent systems like Copilot and Cursor, what “zero-click” attacks look like in the agent era, and how to monitor, constrain, and secure your AI Agent in production. From sneaky prompt injections to memory-based persistence and infected multi-agent workflows, this is the “oh no” moment every builder needs.
Key Topics:
- Why “ignore previous instructions” still works better than it should
- How one agent goes rogue… and infects the others
- Real-world attacks: social media triggers, CRM leaks, and logic bombs
- Observability 101 for AI: logs, reasoning traces, and root cause sanity
- The new rule: build like it will go rogue—because one day it will

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.