Augment AI Podcast

Podcast Description
The AI revolution is happening now, across the globe. Join two of Asia's sharpest AI minds as they unlock the secrets of this transformative technology with the world's leading entrepreneurs, researchers, policymakers, and investors. A Podcast About the Success Stories of Investors, Entrepreneurs, Corporate Senior Executives and Policymakers, hosted by Dr Ayesha Khanna and Dr Bernard Leong.
Podcast Insights
Content Themes
Explores themes such as the evolution of AI, cost efficiency in AI models, and the impact of emerging technologies on global markets, with episodes like 'How DeepSeek Changed the AI Game' examining technical innovations and business strategies, and 'AI is Eating the World' discussing AI integration in various industries.

In this episode of Augment AI, hosts Ayesha Khanna and Bernard Leong provide an insightful analysis of DeepSeek, the Chinese AI model disrupting the global AI landscape. They trace DeepSeek’s journey from its beginnings as a side project of hedge fund manager Liang Wenfeng to becoming a formidable competitor to OpenAI and Anthropic. Ayesha and Bernard break down the technical innovations behind DeepSeek’s efficiency, discussing how its model distillation, mixture-of-experts approach, and multi-head latent attention dramatically reduce computational requirements. They debate whether DeepSeek’s claimed $6 million training cost (versus the billions spent by US companies) is accurate, with Bernard estimating the true cost at between $35 million and $70 million, still significantly cheaper than competitors. The hosts highlight DeepSeek’s dramatic price advantage: $2.19 per million tokens versus OpenAI’s $60. Ayesha shares perspectives from her global network in Europe, India, and Southeast Asia on how DeepSeek is democratizing AI access worldwide. Bernard explains what “open source” truly means in AI, while both hosts discuss the emerging copyright challenges in AI development and how companies worldwide are responding to DeepSeek’s breakthrough, which is reshaping industry expectations around cost, efficiency, and innovation speed.
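As a rough illustration of the price gap the hosts describe, here is a minimal back-of-envelope sketch in Python. It assumes a hypothetical workload of 100 million output tokens per month; the per-million-token prices are the figures quoted in the episode, and actual pricing varies by model, provider, and token type.

# Back-of-envelope comparison of the output-token prices quoted in the episode.
# The 100M-tokens-per-month workload is a hypothetical assumption for illustration.
DEEPSEEK_USD_PER_M_TOKENS = 2.19
OPENAI_USD_PER_M_TOKENS = 60.00
MONTHLY_OUTPUT_TOKENS_M = 100  # hypothetical: 100 million output tokens per month

deepseek_cost = DEEPSEEK_USD_PER_M_TOKENS * MONTHLY_OUTPUT_TOKENS_M
openai_cost = OPENAI_USD_PER_M_TOKENS * MONTHLY_OUTPUT_TOKENS_M

print(f"DeepSeek: ${deepseek_cost:,.2f} per month")
print(f"OpenAI:   ${openai_cost:,.2f} per month")
print(f"DeepSeek is roughly {openai_cost / deepseek_cost:.0f}x cheaper at these rates")

At the quoted rates this works out to roughly a 27x difference in API cost for the same volume of output tokens.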
Episode Highlights:
[00:00:57] Ayesha Khanna introduces her background in machine learning, her Wall Street experience, and her AI engineering firm in Singapore.
[00:03:22] Bernard Leong shares his background as a theoretical physicist and experience in machine learning, genomics, and corporate roles.
[00:06:50] DeepSeek’s origin story as a side project of hedge fund manager Liang Wenfeng in China in late 2023.
[00:08:51] DeepSeek’s R1 model rivaled OpenAI’s models and triggered a roughly $560 billion drop in NVIDIA’s market value.
[00:10:53] Discussion of DeepSeek’s rapid development speed challenging tech giants’ timelines.
[00:13:10] Cost comparison: DeepSeek charges $2.19 per million tokens versus OpenAI’s $60, and the open-weights model can be self-hosted at no licensing cost.
[00:15:01] Details about the talent behind DeepSeek: young PhDs from Zhejiang University paid roughly one-fifth of what their US counterparts earn.
[00:17:58] Explanation of the Mixture of Experts architecture, which activates only 37 billion of the model’s 671 billion parameters for each token.
[00:19:25] Technical innovations including Group Relative Policy Optimization and multi-head latent attention.
[00:21:50] Debate on whether DeepSeek’s $6 million training cost claim is accurate (estimated $35-70M).
[00:24:28] Global impact of DeepSeek democratizing AI access for emerging markets.
[00:25:37] Discussion of what “open source” truly means in AI models and what DeepSeek actually shares.
[00:28:39] Conversation about copyright issues in AI model training.
[00:30:57] DeepSeek’s open source week with five repositories improving inference speed and efficiency.
[00:34:12] Importance of inference speed in user experience and competitive impact in China.
[00:38:08] The hosts compare their use of multiple AI assistants and how fast responses have raised their expectations.
[00:39:39] Connection between hedge fund efficiency mindset and AI model optimization.
[00:41:41] Concluding remarks and preview of future episode topics.
Main Site: https://www.theaugment.ai/
LinkedIn Page: https://www.linkedin.com/company/augmentaipodcast/

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.