AI x DevOps by Facets.cloud
Podcast Description
Engineering teams are under pressure to move faster, do more with less, and stay ahead of an increasingly complex stack. AI is becoming a key piece of that equation — not just as a tool, but as a shift in how DevOps is done.
At Facets.cloud, we’re building infrastructure orchestration for the AI era. And with the AI x DevOps Podcast, we’re creating the space for honest, technical, forward-looking conversations about that shift, from early experiments to long-term visions.
This podcast is about sharing what’s real: what’s working, what’s not, and what’s next. Whether you’re building internal copilots, streamlining CI/CD with AI, or rethinking developer experience — we want to learn from your story.
Podcast Insights
Content Themes
The podcast explores themes at the intersection of AI and DevOps, including AI-driven infrastructure orchestration, practical uses of AI in CI/CD pipelines, and the evolving developer experience. Episodes cover specific experiments, such as incorporating LLMs into coding workflows, the trade-offs between deterministic and vibe-coded infrastructure, and insights into the future of platform engineering driven by AI.

This episode features a discussion with Nathan Hamiel, Director of Research at Kudelski Security, who brings 25 years of experience in cybersecurity and now focuses on AI security.
The conversation centers on navigating the generative AI revolution with a grounded and security-first perspective, particularly for product developers and the security community. Key topics explored include:
- The balance between AI adoption and skepticism: Nathan discusses how his security background shapes his professional adoption of AI tools, emphasizing the need to understand a tool’s capabilities and weigh its benefits against its trade-offs before putting it into production.
- AI productivity and its challenges: The speakers touch on Google’s DORA reports, noting that while AI improves personal coding productivity, its impact on valuable work or features can be negligible or even negative, highlighting the difference between feeling productive and being productive.
- Positive and negative impacts of AI in cybersecurity: They discuss AI’s potential to enhance security tools for code scanning and auto-remediation, such as augmenting traditional fuzzing with large language models. However, they also raise concerns about the resurgence of conventional vulnerabilities in AI-generated code.
- Emerging AI-native risks: The podcast delves into new threats like “slop squatting,” in which attackers publish malicious packages under the non-existent dependency names that LLMs tend to hallucinate, so code that blindly installs a hallucinated dependency can pull in malware (see the first sketch after this list). Prompt injection is highlighted as “the vulnerability of generative AI,” exploiting the model’s inability to differentiate system instructions from user input.
- Addressing AI security vulnerabilities: Nathan advocates architectural changes and a reduced attack surface as the best defense against prompt injection, outlining his “RRT” approach: refrain, restrict, trap (see the second sketch after this list). The need for human oversight and deterministic checks in AI development workflows is also stressed.
- The urgency of security in AI product development: Both speakers express concern over the rush to market AI products without adequately addressing security issues, leading to unacknowledged vulnerabilities.
- The nature of AI mistakes: A unique insight is offered on how AI mistakes differ from human errors: human mistakes follow predictable patterns (fatigue, for example), whereas AI mistakes can be random and occur at any level of complexity, making them harder to predict and mitigate. The speakers also discuss how the “hallucinated data of today” could become the “facts of tomorrow” as AI-generated output taints the web.
- Future of AI advancements: The conversation concludes by suggesting that AI improvements might be plateauing rather than growing exponentially, and that new fundamental innovations are needed to push AI forward beyond current capabilities.
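
To make the “slop squatting” risk concrete, here is a minimal, hypothetical sketch (ours, not code from the episode) of one guardrail: before installing dependencies suggested by an LLM, verify that each package name is actually registered on PyPI. The function names and the sample package list are illustrative assumptions.

```python
import sys
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a registered PyPI project.

    A hallucinated dependency usually 404s here. A slop-squatted one
    will exist, so this is a first filter, not a complete defense.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

def vet_llm_suggested_deps(names: list[str]) -> list[str]:
    """Return the suggested dependencies that are not on PyPI at all."""
    return [n for n in names if not package_exists_on_pypi(n)]

if __name__ == "__main__":
    # Package names copied from an LLM-generated requirements list.
    suggested = sys.argv[1:] or ["requests", "definitely-hallucinated-pkg"]
    unknown = vet_llm_suggested_deps(suggested)
    if unknown:
        print("Refusing to install unknown packages:", ", ".join(unknown))
        sys.exit(1)
    print("All names exist on PyPI; still review and pin them manually.")
```

Existence alone is a weak signal, since slop squatting works precisely by pre-registering names LLMs tend to hallucinate; pinned versions and human review of any new dependency remain necessary.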
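
The “restrict” leg of RRT and the call for deterministic checks can be sketched the same way. The snippet below (again our illustration, with assumed tool choices like pytest and bandit, not anything prescribed in the episode) shows an agent gateway that executes only allow-listed tools and fails closed unless every deterministic gate passes.

```python
import subprocess
import sys

# "Restrict": the agent may only invoke tools on this allow-list,
# shrinking the attack surface a prompt injection can reach.
ALLOWED_TOOLS = {"run_tests", "lint"}

# Deterministic gates: commands whose pass/fail outcome does not depend
# on a model. Illustrative choices; substitute your own CI checks.
GATES = {
    "run_tests": ["pytest", "-q", "--maxfail=1"],
    "lint": ["bandit", "-r", "src", "-q"],
}

def run_tool(name: str) -> bool:
    """Execute one allow-listed tool; refuse anything else outright."""
    if name not in ALLOWED_TOOLS:
        print(f"refused: '{name}' is not an allow-listed tool")
        return False
    return subprocess.run(GATES[name]).returncode == 0

def gate_ai_change(requested_tools: list[str]) -> bool:
    """Fail closed: every requested tool must be allowed and must pass."""
    return all(run_tool(t) for t in requested_tools)

if __name__ == "__main__":
    # e.g. an agent asks to run the tests, lint, and drop the database;
    # the third request is refused and the whole change is rejected.
    ok = gate_ai_change(["run_tests", "lint", "drop_database"])
    # A human reviews the diff only after the deterministic gates pass.
    sys.exit(0 if ok else 1)
```

The “refrain” leg needs no code at all: simply not granting the model a capability is the cheapest way to shrink the attack surface.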
Ultimately, the episode serves as a grounding discussion for product engineers on how to build and integrate AI solutions securely and responsibly, emphasizing that AI tools should be used to solve tasks effectively rather than to chase a path to superintelligence.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.