Nate Soares

Mentioned in 5 analyzed podcast episodes across 3 shows

AI safety researcher focused on preventing AGI development and addressing the alignment problem. Co-authored "If Anyone Builds It, Everyone Dies" with Eliezer Yudkowsky and participated in the 2015 Puerto Rico AI safety conference as part of the safety-focused research contingent. Discussed in podcast episodes examining AI existential risk and the researchers working to mitigate it.

Episode Appearances

The Last Invention · Dec 19, 2025

Ezra Klein on the Uncertain Politics of A.I.

AI safety researcher dedicated to preventing AGI development

AI Existential Risk vs. Near-Term Harms · AI Labor Market Disruption and Job Displacement · Geopolitical Competition and AI Race Dynamics
Digital Disruption with Geoff Nielson · Nov 24, 2025

How AI Will Save Humanity: Creator of The Last Invention Explains

Co-author of 'If Anyone Builds It, Everyone Dies'; represents the 'doomer' camp now gaining mainstream attention

AI Existential Risk and Safety · Artificial General Intelligence (AGI) Development Timeline · AI Regulation and Governance
The Last Invention · Nov 13, 2025

EP 7: The Scouts

AI safety researcher who attended the 2015 Puerto Rico conference; part of the safety-focused contingent

Artificial General Intelligence (AGI) development timeline and feasibility · AI safety research and technical alignment problems · Geopolitical competition between US and China in AI development
The Last Invention · Nov 6, 2025

EP 6: The AI Doomers

AI safety researcher; co-author with Yudkowsky of 'If Anyone Builds It, Everyone Dies'; interviewed on the alignment problem

AI Existential Risk and Extinction Scenarios · AI Alignment Problem and Value Alignment · Superintelligence Development and Control
Making Sense with Sam Harris · Sep 16, 2025

#434 — Can We Survive AI?

Co-author of 'If Anyone Builds It, Everyone Dies'; guest discussing AI safety research and policy implications; joined MIRI in 2010

AI Alignment Problem · Superintelligence Risk · Gradient Descent and Model Training