My Robot Teacher
Podcast Description
What happens when AI crashes into the classroom?
When ChatGPT rolled out across the California State University (CSU) system, it sparked a wide range of faculty responses – from panic to full adoption. In My Robot Teacher, CSU professors Taiyo Inoue and Sarah Senk explore how generative AI is disrupting higher education, and what resistance, habituation, and adaptation really look like in real classrooms.
My Robot Teacher is a podcast about AI and higher education, hosted by public university professors, produced by editaudio, and sponsored by the California Education Learning Lab.
Podcast Insights
Content Themes
The podcast focuses on the impact of generative AI in higher education, exploring themes such as resistance to AI, pedagogical adaptation, and the evolving role of assignments. Episode examples include discussions on how large language models like ChatGPT are reshaping trust and teaching, as well as insights from faculty on AI's current implications for public universities.

Science education after ChatGPT: what happens when students can outsource the thinking, and still turn in something that looks right? In this episode of My Robot Teacher, CSU professors Sarah Senk and Taiyo Inoue talk with UC Davis biophysicist Jon Sack about AI literacy, scientific thinking, and how LLMs are reshaping both the classroom and the day-to-day reality of research.
If the most available “mentor” in a student’s life is an LLM optimized to validate, what happens to the virtues science depends on: tolerating disconfirmation, staying curious through failure, and separating confidence from evidence? And if AI can generate the output, what exactly are we teaching – especially when the point is conceptual understanding, not polished answers?
In this conversation, we explore:
What AI literacy should mean in science classrooms (beyond “don’t cheat”)
How to resist the reward of feeling right when LLMs produce fluent, plausible explanations on demand
How to redesign assessment so students can’t simply outsource the thinking
What “good” use looks like: prompting for falsification instead of praise, plus habits of verification and iteration
What AlphaFold and protein design teach us about hypothesis overload, “hallucinations,” and selection under uncertainty
The bigger meta-question: if we’re co-evolving with AI, how do we keep student agency intact?
Ultimately, Jon argues that resilience isn’t a soft skill in science—it’s the method: reality-testing what sounds plausible (including AI-generated ideas) and iterating without outsourcing the thinking.
Sponsored by the California Education Learning Lab.
💬 Drop your perspective in the comments. We may feature listener takes in a future episode.
✅ Subscribe for more “in the wild” classroom experiments and AI literacy for educators.
CHAPTERS
00:00-06:27 – Chapter 1 – Introduction: Claude Code Built My Canvas Course (Winter Break Experiment)
06:28-10:14 – Chapter 2 – Jon Sack’s First ChatGPT Moment (and the “Too-Positive” AI Problem)
10:15-12:44 – Chapter 3 – Resilience is the Core Skill in Science
12:45-16:08 – Chapter 4 – Scientific Method = Falsification: “Kill Your Darlings” and Reality Testing
16:09-19:26 – Chapter 5 – Conceptual Understanding vs. Outsourcing: When the Thinking is the Assignment
19:27-25:25 – Chapter 6 – AI Literacy for Students: Use Every Tool, Track Limits
25:26-28:22 – Chapter 7 – Inside Jon Sack’s Lab: Ion Channels and Stochastic Decisions
28:23-30:36 – Chapter 8 – Stochastic 101: Probability, Sampling, and Why LLMs Vary
30:37-38:55 – Chapter 9 – Are We Stochastic All the Way Down?
38:56-44:07 – Chapter 10 – AlphaFold & Protein Design: Cheap Hypotheses, Hallucinations, Verification
44:08-53:30 – Chapter 11 – Co-evolving with AI: Are Tools Optimizing Around Us, and Are We Changing Around Them?
53:31-1:01:22 – Chapter 12 – Education After ChatGPT: Epistemic Virtues, Judgment, and Student Agency
