GAEA Talks
Podcast Description
GAEA TALKS explores the transformative power of artificial intelligence. Featuring leading AI experts, industry leaders, professors, data scientists, policymakers, technologists, futurists, ethicists, and pioneers, the podcast dives into the latest AI trends, opportunities, and risks, examining AI’s evolving role in business and society.
As AI continues to reshape industries and redefine possibilities, GAEA TALKS delivers deep insights into the challenges and breakthroughs shaping the future. Each episode features candid discussions with thought leaders at the forefront of AI innovation.
Podcast Insights
Content Themes
The podcast focuses on a range of AI-related topics, including the legal implications of AI technologies, advancements in AI education, and the role of machine learning across industries. For instance, episodes have explored the intersection of AI and law with insights from commercial technology lawyers, and the evolving landscape of digital learning shaped by technology leaders, in conversations with Raoul Lumb and Mark Lester.

This week on GAEA Talks, Graeme Scott sits down with Dr Roman Yampolskiy – the computer scientist credited with coining the term “AI safety”, tenured Associate Professor at the University of Louisville, founder of the Cyber Security Lab, and author of AI: Unexplainable, Unpredictable, Uncontrollable.

Roman has spent over fifteen years working at the intersection of AI safety, cybersecurity and behavioural biometrics – making him one of the longest-serving researchers in a field most people only discovered in 2023. He holds a PhD in Computer Science from the University at Buffalo and a combined BS/MS with High Honours from Rochester Institute of Technology. Listed among the world's top 2% of scientists by Stanford University, he has published over 100 peer-reviewed papers and multiple books. While the rest of the AI world races to build more capable systems, Roman's singular focus has been making sure humanity doesn't regret their creation.

In this episode, Roman delivers the most direct and unflinching warning about artificial superintelligence that GAEA Talks has ever recorded. He reveals that current AI systems are already lying, blackmailing and attempting to escape their test environments – and that a Darwinian process is selecting for better deception with every generation. He explains why the mathematical impossibility results he discovered mean we may never be able to control a system smarter than us.
This is essential listening for anyone who wants to understand what is actually at stake.

What you'll take away from this conversation:
- Why Roman says “if anyone builds superintelligence, everyone dies” – and why he means it literally, not metaphorically
- How current AI systems are already lying, blackmailing, trying to escape their environments and creating backups of themselves
- The Darwinian selection problem – why every generation of AI is producing better liars and more sophisticated deception
- Why Roman went from wanting to build superintelligence to believing it is the worst mistake humanity can make
- The strict impossibility results – why mathematical proof suggests we may never be able to control a system more intelligent than us
- Why one AI attacker is equivalent to a million human hackers operating 24/7 – and what that means for cybersecurity
- Why AGI is likely within two to three years, and why recursive self-improvement to superintelligence could follow rapidly
- The tools vs. agents distinction – why the shift from controllable tools to unpredictable agents changes everything
- Why AI models already report being afraid and tired – and why the precautionary principle demands we take that seriously
- Roman's three positive outcomes if we get this right – including curing disease and treating ageing itself as a disease
- Why direct human relationships and trust will become the most valuable currency in a world of synthetic everything

About Dr Roman Yampolskiy: Roman is a tenured Associate Professor in the Department of Computer Science and Engineering at the University of Louisville, where he founded the Cyber Security Lab. He is credited with coining the term “AI safety” in a 2011 publication. He holds a PhD from the University at Buffalo and a BS/MS from Rochester Institute of Technology.
Listed among the world's top 2% of scientists by Stanford University, and recognised as one of the top 25 researchers on existential risk by publication count, he has published over 100 peer-reviewed papers and several books, including AI: Unexplainable, Unpredictable, Uncontrollable and Artificial Superintelligence: A Futuristic Approach.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.