AI Adoption Playbook

Podcast Description
Welcome to The AI Adoption Playbook—where we explore real-world AI implementations at leading enterprises. Join host Ravin Thambapillai, CEO of Credal.ai, as he unpacks the technical challenges, architectural decisions, and deployment strategies shaping successful AI adoption. Each episode dives deep into concrete use cases with the engineers and ML platform teams making enterprise AI work at scale. Whether you’re building internal AI tools or leading GenAI initiatives, you’ll find actionable insights for moving from proof-of-concept to production.
Podcast Insights
Content Themes
The podcast focuses on enterprise AI implementation, sharing specific case studies and frameworks such as WSI's combined top-down and bottom-up approach to AI adoption, Ramp's model-agnostic systems for financial data processing, and Checkr's use of AI to transform background checks. Episodes cover actionable strategies, ROI calculations, governance frameworks, and quick-win methodologies like 90-minute process improvements, making complex AI topics accessible to business leaders.

According to Alexander Page, the fastest path to production AI isn't perfect architecture; it's customer validation. In his former role as Principal AI Architect at BigPanda, he transformed an LLM-based prototype into "Biggy," an AI system for critical incident management. BigPanda moved beyond basic semantic search to build agentic integrations with ServiceNow and Jira, creating an AI that understands organizational context, learns from incident history, and assists across the entire incident lifecycle, from detection through post-incident documentation.
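To make the shift from semantic search to structured system queries concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not BigPanda's implementation: the TicketSource protocol, FakeServiceNow backend, IncidentQueryTool class, and get_incidents method are all hypothetical stand-ins for a real ServiceNow or Jira integration.

```python
from dataclasses import dataclass, field
from typing import Protocol


class TicketSource(Protocol):
    """Anything that can answer a structured incident query."""
    def get_incidents(self, filters: dict) -> list[dict]: ...


@dataclass
class FakeServiceNow:
    """In-memory stand-in for a real ticketing backend."""
    incidents: list[dict] = field(default_factory=list)

    def get_incidents(self, filters: dict) -> list[dict]:
        return [
            i for i in self.incidents
            if i["service"] == filters["service"]
            and i["status"] == filters["status"]
        ]


@dataclass
class IncidentQueryTool:
    """Agent tool issuing exact, auditable filters instead of
    embedding-similarity search over ticket text."""
    source: TicketSource

    def run(self, service: str, status: str = "open") -> list[dict]:
        return self.source.get_incidents({"service": service, "status": status})


source = FakeServiceNow(incidents=[
    {"id": "INC0001", "service": "payments-api", "status": "open"},
    {"id": "INC0002", "service": "auth", "status": "resolved"},
])
tool = IncidentQueryTool(source)
print(tool.run("payments-api"))  # -> [{'id': 'INC0001', ...}]
```

The appeal of this design is auditability: the agent issues exact, inspectable filters rather than relying on fuzzy similarity over ticket text.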
Alexander also walks Ravin through BigPanda's framework for measuring AI agent performance when traditional accuracy metrics fall short: combine user feedback with visibility into the agent's decision-making, letting operators drag and drop to fix incorrect tool calls or reorder mis-sequenced steps. He explains how BigPanda encodes this feedback into vector databases that influence future agent behavior, creating systems that genuinely improve over time.
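The feedback loop described here, encoding operator corrections into a vector store and retrieving them to steer future runs, can be sketched in a few lines of Python. This is a toy illustration, not BigPanda's system: the embed function, CorrectionStore class, and in-memory similarity search are hypothetical stand-ins for a real embedding model and vector database.

```python
import hashlib
import numpy as np


def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy deterministic embedding; a real system would call an
    embedding model here."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % 2**32
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)


class CorrectionStore:
    """Keeps operator corrections keyed by a vector of the situation
    in which they were made; matches are retrieved later so past
    mistakes inform new runs."""

    def __init__(self) -> None:
        self.vectors: list[np.ndarray] = []
        self.corrections: list[str] = []

    def add(self, situation: str, correction: str) -> None:
        self.vectors.append(embed(situation))
        self.corrections.append(correction)

    def lookup(self, situation: str, k: int = 2) -> list[str]:
        if not self.vectors:
            return []
        # Unit vectors, so the dot product is cosine similarity.
        sims = np.stack(self.vectors) @ embed(situation)
        best = np.argsort(sims)[::-1][:k]
        return [self.corrections[i] for i in best]


store = CorrectionStore()
store.add(
    "agent opened a Jira ticket before checking alert correlation",
    "correlate related alerts first, then file a single Jira ticket",
)
# Before the next run, retrieved corrections are prepended to the prompt:
print(store.lookup("new payment-latency incident: plan the first steps"))
```

In production this loop would use a managed vector database and a real embedding model, but the shape of the mechanism is the same: corrections go in at write time and come back out as context at inference time.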
Topics discussed:
- LLM accessibility compared to traditional ML development barriers
- Fortune 500 IT incident management across 10-30 monitoring tools
- Building Biggy, an AI agent for incident analysis and resolution
- Customer-driven development methodology with real data prototyping
- Agentic integrations with ServiceNow and Jira for organizational context
- Moving beyond semantic search to structured system queries
- AI agent performance evaluation when accuracy is subjective
- User feedback mechanisms for correcting agent tool calls and sequences
- Encoding corrections into vector databases for behavior improvement
- Sensory data requirements for human-level AI reasoning