Alchemist Accelerator: Influencer Series Fireside Chat
Podcast Description
The Influencer Series Fireside Chat, hosted by Alchemist Accelerator Founder & CEO Ravi Belani, offers intimate, high-energy conversations with influential leaders. Prominent VCs, startup founders, corporate executives, and academics come together for authentic, unscripted "dinner table" dialogues. After a decade connecting 4,000+ leaders and sparking 15,000+ influential relationships, the Influencer series now invites you to join the conversation.
Podcast Insights
Content Themes
The podcast covers a range of themes centered around corporate venture capital, deep tech funding, and the evolving landscape of AI and software. For example, one episode discusses how corporate venture capital can maintain financial independence while providing strategic value, while another focuses on the challenges deep tech startups face in securing funding beyond Series B. The show emphasizes practical insights into industry trends and strategic investment approaches.

Dr. Shelby Heinecke, Senior AI Researcher at Salesforce, joins Ravi Belani to explain why the future of AI will not belong only to giant models with hundreds of billions of parameters.
Shelby makes the case for small language models: compact systems with only a few billion parameters that can run faster, cost less, protect privacy, and still perform at a very high level when they are trained well on focused tasks.
In this episode, they dig into:
Why small models are a different tool, not a weaker version of large models
How fine-tuned small models can beat much larger models on specific agentic tasks
Where small models shine most: privacy, speed, cost to serve, and on-device use cases
How Salesforce built “Tiny Giant,” a 1B-parameter model that outperforms much larger models on selected tasks
What really matters in training: data quality, workflows, and trajectory-style datasets
How synthetic data, noise, and guardrails help make models more robust in the real world
Why founders should look closely at on-device AI and domain-specific small models
Shelby also shares practical advice for founders who want to build in the small model space, and closes with a simple takeaway: do not underestimate small models.
If you care about AI agents, privacy, edge computing, or future startup opportunities, this conversation will give you a lot to think about.

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.