Cyberside Chats: Cybersecurity Insights from the Experts

Cyberside Chats: Cybersecurity Insights from the Experts
Podcast Description
Stay ahead of the latest cybersecurity trends with Cyberside Chats—your go-to cybersecurity podcast for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity pro or an executive who wants to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you understand and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!
Podcast Insights
Content Themes
The podcast covers a variety of crucial cybersecurity topics including emerging threats from AI tools like DeepSeek, the legal implications of cybercrime like the Silk Road, and recent law enforcement actions against malware like PlugX. Each episode provides actionable solutions and strategies, such as updating incident response plans and strengthening employee training against phishing attacks.

Stay ahead of the latest cybersecurity trends with Cyberside Chats—your go-to cybersecurity podcast for breaking news, emerging threats, and actionable solutions. Whether you’re a cybersecurity pro or an executive who wants to understand how to protect your organization, cybersecurity experts Sherri Davidoff and Matt Durrin will help you understand and proactively prepare for today’s top cybersecurity threats, AI-driven attack and defense strategies, and more!

Can your AI assistant become a silent data leak? In this episode of Cyberside Chats, Sherri Davidoff and Matt Durrin break down EchoLeak, a zero-click exploit in Microsoft 365 Copilot that shows how attackers can manipulate AI systems using nothing more than an email. No clicks. No downloads. Just a cleverly crafted message that turns your AI into an unintentional insider threat.
They also share a real-world discovery from LMG Security’s pen testing team: how prompt injection was used to extract system prompts and override behavior in a live web application. With examples ranging from corporate chatbots to real-world misfires at Samsung and Chevrolet, this episode unpacks what happens when AI is left untested—and why your security strategy must adapt.
Key Takeaways
- Limit and review the data sources your LLM can access—ensure it doesn’t blindly ingest untrusted content like inbound email, shared docs, or web links.
- Audit AI integrations for prompt injection risks—treat language inputs like code and include them in standard threat models.
- Add prompt injection testing to every web app and email flow assessment, even if you’re using trusted APIs or cloud-hosted models.
- Red-team your LLM tools using subtle, natural-sounding prompts—not just obvious attack phrases.
- Monitor and restrict outbound links from AI-generated content, and validate any use of CSP-approved domains like Microsoft Teams.
Resources
#EchoLeak #Cybersecurity #Cyberaware #CISO #Microsoft #Microsoft365 #Copilot #AI #GenAI #AIsecurity #RiskManagement
Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.