Episode Appearances
Making Sense with Sam Harris · Apr 10, 2026
#469 — Escaping an Anti-Human Future
“Referenced for AI safety research and statistics on acceptable risk thresholds for dangerous technology”
Modern Wisdom · Apr 2, 2026
#1079 - Tristan Harris - AI Expert Warns: “This Is The Last Mistake We’ll Ever Make”
“Authored AI textbook; estimated 2000:1 funding gap between AI capability and safety research”
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas · Feb 9, 2026
343 | Tom Griffiths on The Laws of Thought
“AI researcher who contributed to bounded rationality framework for resource-constrained agents”
Making Sense with Sam Harris · Jan 16, 2026
#453 — AI and the New Face of Antisemitism
“AI safety researcher; referenced for work on alignment and utility function approaches to AGI control”
Young and Profiting with Hala Taha (Entrepreneurship, Sales, Marketing) · Dec 26, 2025
Peter Norvig: Transforming AI Into the Ultimate Human Advantage | Artificial Intelligence | AI Vault
“Co-author with Norvig of the foundational AI textbook 'Artificial Intelligence: A Modern Approach', first published in 1995”
Digital Disruption with Geoff Nielson · Dec 8, 2025
Go All In on AI: The Economist’s Kenneth Cukier on AI's Experimentation Era
“Co-author of the canonical AI textbook 'Artificial Intelligence: A Modern Approach', used in undergraduate AI education”
Digital Disruption with Geoff Nielson · Nov 17, 2025
AGI Is Here: AI Legend Peter Norvig on Why it Doesn't Matter Anymore
“Co-authored 'Artificial Intelligence: A Modern Approach' textbook with Norvig; wrote article claiming AGI is already here”
The Last Invention · Nov 13, 2025
EP 7: The Scouts
“AI researcher who attended a 2015 conference; a leading figure in AI safety and governance discussions”
The Skeptics' Guide to the Universe · Oct 25, 2025
The Skeptics Guide #1059 - Oct 25 2025
“UC Berkeley computer science professor and AI safety expert who signed the superintelligence prohibition letter”
Making Sense with Sam Harris · Oct 2, 2025
#435 — The Last Invention
“Cited for reframing AI risk as a galvanizing challenge akin to an alien contact scenario”
The Last Invention · Oct 2, 2025
EP 1: Ready or Not
“Computer scientist; framed the arrival of AGI as equivalent to receiving a communication from an advanced alien civilization”