Eliezer Yudkowsky
Mentioned in 24 analyzed podcast episodes across 15 shows
An influential AI safety researcher and rationalist whose early writings on the intelligence explosion and existential AI risk shaped current thinking among AI lab leaders. He co-founded the Singularity Institute (now the Machine Intelligence Research Institute) and founded LessWrong, a community blog devoted to rational discourse, and he is known for advocating a highly cautious approach to AI development because of concerns about catastrophic risk. Podcasts discuss him as a prominent figure in AI safety discourse who has consistently warned about AGI risks and their potential consequences.
Episode Appearances
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Apr 1, 2026
Success without Dignity? Nathan finds Hope Amidst Chaos, from The Intelligence Horizon Podcast
“Early AI safety thinker; proposed paperclip maximizer scenario; authored 'AGI Ruin: A List of Lethalities' post”
The AI Daily Brief: Artificial Intelligence News and Analysis · Mar 6, 2026
AI Is Officially Political
“AI doomer and safety researcher featured in Bernie Sanders video on AI concerns”
The a16z Show · Mar 5, 2026
Ben Thompson: Anthropic, the Pentagon, and the Limits of Private Power
“AI safety researcher cited for honestly discussing potential military responses to AI threats”
TBPN · Mar 2, 2026
FULL INTERVIEW: Ben Thompson on Why Anthropic is Wrong
“AI safety researcher who wrote about potentially bombing data centers to prevent AI risks”
Galaxy Brain · Feb 27, 2026
What Do the People Building AI Believe?
“Founder of rationalist online subculture; prominent AI doomer who believes superhuman AI will inevitably kill humanity”
Bankless · Feb 20, 2026
ROLLUP: Prediction Market War | Base Leaves Optimism | Tomasz Exits EF | Clarity Act Lives | Harvard Buys ETH
“AI safety researcher whose warnings about AI risks are relevant to autonomous AI agents with crypto access”
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Feb 14, 2026
Approaching the AI Event Horizon? Part 2, w/ Abhi Mahajan, Helen Toner, Jeremie Harris, @8teAPi
“AI safety researcher whose early writings on intelligence explosion influenced current lab leaders”
The Political Scene | The New Yorker · Feb 12, 2026
Can Anthropic Control What It's Building?
“AI safety researcher; advocates for existential risk focus over proximate harm mitigation”
Stuff You Should Know · Feb 10, 2026
How Cognitive Biases Work
“AI researcher and founder of lesswrong.org, platform for overcoming cognitive biases through Bayesian reasoning”
TBPN · Jan 28, 2026
Clawdbot’s name change, Meta’s new pricing plan, Tyler’s 21st birthday | Diet TBPN
Digital Disruption with Geoff Nielson · Jan 12, 2026
AI's Most Dangerous Truth: We've Already Lost Control
“AI safety researcher; co-author of 'If Anyone Builds It, Everyone Dies'; represents doomer perspective on superintelligence”
Moonshots with Peter Diamandis · Jan 9, 2026
The 2026 Timeline: AGI Arrival, Safety Concerns, Robotaxi Fleets & Hyperscaler Timelines | 221
Oxide and Friends · Jan 8, 2026
Predictions 2026!!
“Co-author of AI doomerism book 'If Anyone Builds It, Everyone Dies' that Adam hate-read”
Digital Disruption with Geoff Nielson · Dec 29, 2025
AI Boom or Bust? AI Boomers and Doomers Reveal Their Predictions for Our Future
“AI safety researcher cited as early voice warning about existential AI risk; predicted 99.99% extinction probability”
The Last Invention · Dec 19, 2025
Ezra Klein on the Uncertain Politics of A.I.
“AI safety researcher and 'doomer' advocating for AI development pause due to existential risks”
Digital Disruption with Geoff Nielson · Nov 24, 2025
How AI Will Save Humanity: Creator of The Last Invention Explains
“Leading AI doomer voice; warned about existential risk since 2013; co-author of 'If Anyone Builds It, Everyone Dies'”
Digital Disruption with Geoff Nielson · Nov 17, 2025
AGI Is Here: AI Legend Peter Norvig on Why it Doesn't Matter Anymore
“Advocates for extreme caution on AI risks; represents far end of danger-focused spectrum”
The Last Invention · Nov 13, 2025
EP 7: The Scouts
“AI safety advocate who warned about AGI risks before mainstream acceptance; attended 2015 conference to bridge divide with AI researchers”
The Last Invention · Nov 6, 2025
EP 6: The AI Doomers
“Central figure who shifted from AI accelerationist to doomer; founded Singularity Institute; co-authored 'If Anyone Builds It, Everyone Dies'”
Making Sense with Sam Harris · Oct 2, 2025
#435 — The Last Invention
“Former accelerationist now dedicated to warning about AI existential risks; featured safety advocate”
The Last Invention · Oct 2, 2025
EP 1: Ready or Not
“Former AI accelerationist turned safety advocate warning about existential risks from superintelligent AI systems”
Making Sense with Sam Harris · Sep 16, 2025
#434 — Can We Survive AI?
“Primary guest discussing AI existential risks and alignment problems; co-author of 'If Anyone Builds It, Everyone Dies'”
Search Engine · Sep 5, 2025
How does a rationalist make a baby?
“Original author of the Sequences on LessWrong, foundational texts for rationalist community's approach to thinking clearly”
The TED AI Show · Apr 11, 2025
The magic intelligence in the sky | Good Robot
“Founding father of rationalism; created Less Wrong blog and paperclip maximizer thought experiment; warns of AI apocalypse”