The API Hour

The API Hour
Podcast Description
The API Hour is your front-row seat to where APIs meet InfoSec. Hosted by Dan Barahona and brought to you by APIsec University, each episode dives into real-world breaches, testing tactics, and the tools shaping AppSec. Whether you're building, breaking, or securing APIs, you'll get practical insights from the experts redefining API security. Plug in, lock down, and decode what’s really going on behind the APIs—because in a connected world, security is everything.
Podcast Insights
Content Themes
The podcast focuses on critical topics related to API security, including real-world API breaches, testing tactics, and tools shaping Application Security. For instance, recent episodes discuss notable API breaches like the exposure of API keys at xAI and the unauthorized access to 64 million job applications at McDonald's. Additionally, the show delves into practical security measures and common vulnerabilities aligned with the OWASP API Security Top 10.

The API Hour is your front-row seat to where APIs meet InfoSec. Hosted by Dan Barahona and brought to you by APIsec University, each episode dives into real-world breaches, testing tactics, and the tools shaping AppSec. Whether you’re building, breaking, or securing APIs, you’ll get practical insights from the experts redefining API security. Plug in, lock down, and decode what’s really going on behind the APIs—because in a connected world, security is everything.
Artificial Intelligence is transforming every industry, but with that transformation comes new security risks. In this episode of The API Hour, host Dan Barahona interviews Robert Herbig, Senior Engineer at SEP and instructor of the APIsec University course, Building Security into AI, to explore the emerging world of AI attacks, data poisoning, and model tampering.
From poisoned stop sign datasets to prompt injections that trick LLMs into revealing dangerous information, this episode is packed with eye-opening examples of how AI can be manipulated, and what builders and security teams can do to defend against it.
What You’ll Learn
- Data poisoning in action: how mislabeled stop signs and manipulated datasets can cause catastrophic AI failures
- Watering hole attacks & typosquatting: why malicious datasets and libraries pose a hidden risk
- Prompt injection & jailbreaking: real-world cases where LLMs were manipulated into revealing restricted information
- Black box vs. white box attacks: what attackers can infer just by observing model confidence scores
- Retraining & RAG: how AI models ingest new information and why continuous updates create new vulnerabilities
- The API connection: why exposing models via APIs ties AI security directly to API security best practices
Episode Timestamps
- 00:45 – Stop signs, stripes, and poisoned training data
- 07:00 – Data poisoning in Gmail spam detection
- 17:00 – SEO hacks and AI summaries: a new frontier for attackers
- 22:00 – Typo-squatting and malicious packages
- 25:00 – Pliny the Liberator and “memetic viruses” in training data
- 33:00 – Black box vs. white box attacks on computer vision models
- 43:00 – Prompt injection and roleplay exploits
- 52:00 – APIs and AI security: two sides of the same coin

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.