Eliezer Yudkowsky

Mentioned in 24 analyzed podcast episodes across 15 shows

An influential AI safety researcher and rationalist whose early writings on intelligence explosion and existential risk from AI shaped current thinking among AI lab leaders. He founded LessWrong, a platform for rational discourse, and co-founded the Singularity Institute (now the Machine Intelligence Research Institute), and is known for advocating a cautious approach to AI development out of concern over catastrophic risk. Podcasts discuss him as a prominent figure in AI safety discourse who has consistently warned about AGI risks and their potential consequences.

Episode Appearances

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Apr 1, 2026

Success without Dignity? Nathan finds Hope Amidst Chaos, from The Intelligence Horizon Podcast

Early AI safety thinker; proposed paperclip maximizer scenario; authored 'List of Lethalities' post

AI Alignment and Safety · Reinforcement Learning Scaling · AI Interpretability and World Models

The AI Daily Brief: Artificial Intelligence News and Analysis · Mar 6, 2026

AI Is Officially Political

AI doomer and safety researcher featured in Bernie Sanders video on AI concerns

OpenClaw AI Agent Platform Adoption · Anthropic-Pentagon Contract Dispute · AI Supply Chain Risk Designation

The a16z Show · Mar 5, 2026

Ben Thompson: Anthropic, the Pentagon, and the Limits of Private Power

AI safety researcher cited for honestly discussing potential military responses to AI threats

AI Safety and Alignment · Government AI Regulation · Military AI Applications

TBPN · Mar 2, 2026

FULL INTERVIEW: Ben Thompson on Why Anthropic is Wrong

AI safety researcher who wrote about potentially bombing data centers to prevent AI risks

AI Safety and Alignment · Government AI Contracts · Digital Surveillance Laws

Galaxy Brain · Feb 27, 2026

What Do the People Building AI Believe?

Founder of rationalist online subculture; prominent AI doomer who believes superhuman AI will inevitably kill humanity

AI Safety and Existential Risk · Artificial General Intelligence (AGI) Timelines and Definitions · Silicon Valley Political Alignment and Tech Right

Bankless · Feb 20, 2026

ROLLUP: Prediction Market War | Base Leaves Optimism | Tomasz Exits EF | Clarity Act Lives | Harvard Buys ETH

AI safety researcher whose warnings about AI risks are relevant to autonomous AI agents with crypto access

Prediction Market Regulation and CFTC Authority · Base Layer 2 Independence and OP Stack Departure · Ethereum Foundation Leadership Transition

"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis · Feb 14, 2026

Approaching the AI Event Horizon? Part 2, w/ Abhi Mahajan, Helen Toner, Jeremie Harris, @8teAPi

AI safety researcher whose early writings on intelligence explosion influenced current lab leaders

Recursive Self-Improvement · AI for Cancer Treatment · Automated AI Research

The Political Scene | The New Yorker · Feb 12, 2026

Can Anthropic Control What It's Building?

AI safety researcher; advocates for existential risk focus over proximate harm mitigation

AI Safety and Alignment Research · Mechanistic Interpretability in Neural Networks · Large Language Model Capability vs. Safety Trade-offs

Stuff You Should Know · Feb 10, 2026

How Cognitive Biases Work

AI researcher and founder of lesswrong.org, a platform for overcoming cognitive biases through Bayesian reasoning

Heuristics and mental shortcuts in decision-making · System 1 vs System 2 thinking (fast vs deliberate cognition) · Hindsight bias and false memory reconstruction

TBPN · Jan 28, 2026

Clawdbot’s name change, Meta’s new pricing plan, Tyler’s 21st birthday | Diet TBPN

AI Agent Development · Trademark Protection in AI · AI Inference Demand

Digital Disruption with Geoff Nielson · Jan 12, 2026

AI's Most Dangerous Truth: We've Already Lost Control

AI safety researcher; author of 'If Anyone Builds It, Everyone Dies'; represents doomer perspective on superintelligence

AI Existential Risk Assessment · AI Safety Testing and Red Teaming · Agentic vs. Non-Agentic AI Design

Moonshots with Peter Diamandis · Jan 9, 2026

The 2026 Timeline: AGI Arrival, Safety Concerns, Robotaxi Fleets & Hyperscaler Timelines | 221

Artificial General Intelligence (AGI) definition and timeline · AI safety and preparedness challenges · Robotaxi deployment and autonomous driving

Oxide and Friends · Jan 8, 2026

Predictions 2026!!

Author of the AI doomerism book 'If Anyone Builds It, Everyone Dies', which Adam hate-read

LLM-Assisted Code Generation and Coding Agents · AI Safety and Prompt Injection Security · Normalization of Deviance in AI Development

Digital Disruption with Geoff Nielson · Dec 29, 2025

AI Boom or Bust? AI Boomers and Doomers Reveal Their Predictions for Our Future

AI safety researcher cited as early voice warning about existential AI risk; predicted 99.99% extinction probability

Artificial General Intelligence (AGI) definitions and timelines · AI existential risk and safety concerns · Generative AI capabilities and limitations

The Last Invention · Dec 19, 2025

Ezra Klein on the Uncertain Politics of A.I.

AI safety researcher and 'doomer' advocating for AI development pause due to existential risks

AI Existential Risk vs. Near-Term Harms · AI Labor Market Disruption and Job Displacement · Geopolitical Competition and AI Race Dynamics

Digital Disruption with Geoff Nielson · Nov 24, 2025

How AI Will Save Humanity: Creator of The Last Invention Explains

Leading AI doomer voice; warned about existential risk since 2013; co-author of 'If Anyone Builds It, Everyone Dies'

AI Existential Risk and Safety · Artificial General Intelligence (AGI) Development Timeline · AI Regulation and Governance

Digital Disruption with Geoff Nielson · Nov 17, 2025

AGI Is Here: AI Legend Peter Norvig on Why it Doesn't Matter Anymore

Advocates for extreme caution on AI risks; represents far end of danger-focused spectrum

AGI Definition and Terminology · Language Model Scaling and Effectiveness · AI Safety and Responsible Development

The Last Invention · Nov 13, 2025

EP 7: The Scouts

AI safety advocate who warned about AGI risks before mainstream acceptance; attended 2015 conference to bridge divide with AI researchers

Artificial General Intelligence (AGI) development timeline and feasibility · AI safety research and technical alignment problems · Geopolitical competition between US and China in AI development

The Last Invention · Nov 6, 2025

EP 6: The AI Doomers

Central figure who shifted from AI accelerationist to doomer; founded the Singularity Institute; co-authored 'If Anyone Builds It, Everyone Dies'

AI Existential Risk and Extinction Scenarios · AI Alignment Problem and Value Alignment · Superintelligence Development and Control

Making Sense with Sam Harris · Oct 2, 2025

#435 — The Last Invention

Former accelerationist now dedicated to warning about AI existential risks; featured safety advocate

Artificial General Intelligence (AGI) timelines and development trajectories · Artificial Superintelligence (ASI) existential risk and control problems · AI safety research and alignment techniques

The Last Invention · Oct 2, 2025

EP 1: Ready or Not

Former AI accelerationist turned safety advocate warning about existential risks from superintelligent AI systems

Artificial General Intelligence (AGI) timelines and development · Artificial Superintelligence (ASI) existential risk · AI alignment and control mechanisms

Making Sense with Sam Harris · Sep 16, 2025

#434 — Can We Survive AI?

Primary guest discussing AI existential risks and alignment problems; co-author of 'If Anyone Builds It, Everyone Dies'

AI Alignment Problem · Superintelligence Risk · Gradient Descent and Model Training

Search Engine · Sep 5, 2025

How does a rationalist make a baby?

Original author of the Sequences on LessWrong, the foundational texts for the rationalist community's approach to thinking clearly

Rationalism as intellectual movement and community practice · Religious deconstruction and ideological exit · Sex work and online content creation as economic strategy

The TED AI Show · Apr 11, 2025

The magic intelligence in the sky | Good Robot

Founding father of rationalism; created the LessWrong blog and the paperclip maximizer thought experiment; warns of AI apocalypse

AI Existential Risk and Superintelligence · Paperclip Maximizer Thought Experiment · AI Safety and Alignment