Shared Everything

Podcast Description
Shared Everything is VAST Data’s editorial and thought leadership platform, spotlighting the technical frontlines of AI infrastructure, datacenters, and cloud architecture. Through in-depth interviews, expert-led discussions, and narrative-driven content, we explore how the most advanced organizations are architecting for the Agentic Age—where AI, data, and compute converge. Whether it's the latest in GPU optimization, multitenancy design, or the future of data orchestration, we dive deep into the systems and strategies shaping tomorrow’s digital landscape.
Podcast Insights
Content Themes
The podcast covers a range of topics, including advances in life sciences systems, AI sovereignty, supercomputing, digital twins, the evolution of data infrastructure, and the energy implications of AI. Specific episodes explore GPU optimization in genomics, the impact of sovereign clouds in Europe, and the future of supercomputing with AI-driven workloads.

Episode Chapters
00:00 – 02:00
Intro and Kartik’s background in physics; how the industry has evolved since the early days of personalized medicine and genomic research.
02:00 – 04:30
Breakdown of three main advances in life sciences: genomics, gene editing (CRISPR), and long-read sequencing technologies like Oxford Nanopore and PacBio.
04:30 – 06:30
Deep technical dive into nanopore sequencing: how it works, why it matters, and why it requires GPU acceleration.
06:30 – 08:30
The computational bottleneck: memory mapping, random I/O, why short-read sequencers are now limiting, and why SSDs are necessary.
08:30 – 10:00
Parallel file systems break under modern life sciences workloads; the shift toward storage architectures that can handle random I/O at scale (see the access-pattern sketch after the chapter list).
10:00 – 12:30
How AlphaFold reshaped structural biology and compute expectations; protein folding as a graph neural network challenge.
12:30 – 15:00
LLMs in pharma, managing clinical trial data, and the rise of mixed, hybrid workloads in research computing.
15:00 – 17:00
Microscopy at scale (cryo-EM, light sheet imaging) and the data tsunami: petabytes per microscope, per year (see the back-of-envelope arithmetic after the chapter list).
17:00 – 19:30
Shifting away from HPC-era assumptions: new workloads, new storage expectations, and lessons from vendors like Oxford Nanopore.
19:30 – 20:36
What’s next: generative AI models trained on molecular sequences and protein structures; a vision of a disease-free future.
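
The random-I/O discussion in the 06:30–10:00 chapters lends itself to a quick illustration. The sketch below is not from the episode; it is a minimal Python microbenchmark (file name, block size, and block count are arbitrary placeholders) that reads the same 4 KiB blocks once sequentially and once in random order, the access pattern that long-read indexes and memory-mapped references tend to generate and that SSD-backed storage handles far better than throughput-tuned parallel file systems on spinning disks.

```python
# Illustrative microbenchmark, not from the episode: the same 4 KiB blocks are
# read once sequentially and once in random order. File name, block size, and
# counts are arbitrary placeholders.
import os
import random
import time

PATH = "scratch.bin"     # throwaway test file (hypothetical name)
BLOCK = 4096             # 4 KiB per read, typical of index/page lookups
BLOCKS = 25_000          # ~100 MB total so the sketch finishes quickly

# Create a throwaway data file to read back.
with open(PATH, "wb") as f:
    f.write(os.urandom(BLOCK * BLOCKS))

def read_blocks(offsets):
    """Read one BLOCK at each offset; return elapsed seconds."""
    start = time.perf_counter()
    with open(PATH, "rb", buffering=0) as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return time.perf_counter() - start

sequential = [i * BLOCK for i in range(BLOCKS)]
shuffled = sequential[:]
random.shuffle(shuffled)

# Note: on a warm OS page cache both runs look fast; the gap is dramatic on a
# cold cache and widest on spinning disks, which is the episode's point about
# why SSDs are necessary for these workloads.
print(f"sequential reads: {read_blocks(sequential):.3f}s")
print(f"random reads:     {read_blocks(shuffled):.3f}s")

os.remove(PATH)
```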
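
The "petabytes per microscope, per year" figure from the 15:00–17:00 chapter is easy to put in ingest-rate terms. The back-of-envelope arithmetic below is our own, not from the episode, and the per-instrument volumes are illustrative placeholders.

```python
# Back-of-envelope arithmetic, not from the episode: turning "petabytes per
# microscope, per year" into an average ingest rate. Volumes are illustrative.
PB = 10**15                        # decimal petabyte, in bytes
SECONDS_PER_YEAR = 365 * 24 * 3600

def sustained_mb_per_s(pb_per_year: float, instruments: int = 1) -> float:
    """Average write rate (MB/s) needed to keep pace with acquisition."""
    return pb_per_year * instruments * PB / SECONDS_PER_YEAR / 10**6

for pb_per_year in (1, 2, 5):
    print(f"{pb_per_year} PB/year per microscope "
          f"≈ {sustained_mb_per_s(pb_per_year):.0f} MB/s sustained")
```

Per instrument the smoothed average looks manageable; multiply it across a facility full of microscopes, and allow for acquisition that arrives far more unevenly than a yearly average suggests, and the aggregate demand is what the chapter calls a data tsunami.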

Disclaimer
This podcast’s information is provided for general reference and was obtained from publicly accessible sources. The Podcast Collaborative neither produces nor verifies the content, accuracy, or suitability of this podcast. Views and opinions belong solely to the podcast creators and guests.
For a complete disclaimer, please see our Full Disclaimer on the archive page. The Podcast Collaborative bears no responsibility for the podcast’s themes, language, or overall content. Listener discretion is advised. Read our Terms of Use and Privacy Policy for more details.