Nick Bostrom

Mentioned in 15 analyzed podcast episodes across 10 shows

A philosopher and AI safety researcher best known for authoring *Superintelligence* and popularizing concepts like the paperclip maximizer thought experiment in discussions about existential AI risks. His work has been foundational to early AI safety discourse and has influenced venture capital and technology leaders' thinking about AI development. He remains a frequently cited figure in conversations about the timing and risks of advanced AI systems.

Episode Appearances

Into The Dark · Apr 8, 2026

165: Are We Living in a Simulation?

Formalized the simulation hypothesis in 2001, proposing a statistical argument for the likelihood that reality is simulated

Simulation Hypothesis and Bostrom's Trilemma · Non-Algorithmic Understanding in Physics · AI Consciousness and Emotional Learning

Modern Wisdom · Apr 2, 2026

#1079 - Tristan Harris - AI Expert Warns: “This Is The Last Mistake We’ll Ever Make”

Wrote 'Superintelligence'; developed the paperclip maximizer thought experiment; proposed the Vulnerable World Hypothesis on distributed destructive technology

AI Safety and Alignment · Artificial General Intelligence (AGI) Development · Recursive Self-Improvement in AI Systems

The a16z Show · Mar 19, 2026

AI Just Gave You Superpowers — Now What?

AI safety researcher whose views on superintelligence risks have evolved over time

AI automation economics · Human-AI collaboration models · Verification vs automation costs

AI + a16z · Mar 3, 2026

Jack Altman & Martin Casado on the Future of VC

Author of 'Superintelligence', cited as an influence on early AI safety discourse

Venture capital evolution and specialization · Media strategy for VC firms · AI infrastructure investment opportunities

Moonshots with Peter Diamandis · Feb 19, 2026

Ben Horowitz: xAI Executive Exodus, Apple's AI Crisis, The Pace of AI | #232

Philosopher who published an essay on the optimal timing of a superintelligence pause

Recursive Self-Improvement (RSI) and AI Singularity Timeline · xAI Executive Departures and ITAR Regulations · ByteDance C-Dance 2.0 Video Generation Technology

Last Podcast On The Left · Feb 4, 2026

Side Stories: Pizza Party

AI philosopher funded by Epstein to develop transhumanist ideology and post-human futures

Jeffrey Epstein Files Analysis · Coded Communication in Criminal Networks · Tech Billionaire Involvement in Exploitation

StarTalk Radio · Dec 19, 2025

Cosmic Queries – Living in a Simulation with Nick Bostrom

Professor at University of Oxford's Future of Humanity Institute; originator of simulation argument and author of Superintelligence

Simulation Hypothesis and Simulation Argument · Ancestor Simulations and Procedural Content Generation · Consciousness and Substrate Independence

Digital Disruption with Geoff Nielson · Nov 24, 2025

How AI Will Save Humanity: Creator of The Last Invention Explains

Early AI safety advocate; attended Tegmark's 2015 conference; evolved from a doomer toward a scout position

AI Existential Risk and Safety · Artificial General Intelligence (AGI) Development Timeline · AI Regulation and Governance

The Last Invention · Nov 13, 2025

EP 7: The Scouts

Author of 'Superintelligence' book; AI safety researcher who attended 2015 conference to engage with AI research community

Artificial General Intelligence (AGI) development timeline and feasibility · AI safety research and technical alignment problems · Geopolitical competition between US and China in AI development

The Last Invention · Nov 6, 2025

EP 6: The AI Doomers

Philosopher famous for introducing the concept of superintelligence to the general public through his books; member of the Extropian community

AI Existential Risk and Extinction Scenarios · AI Alignment Problem and Value Alignment · Superintelligence Development and Control

The Last Invention · Oct 16, 2025

EP 4: Speedrun

Author of 'Superintelligence'; the book influenced Sam Altman's thinking on existential AI risk

AI Existential Risk and Superintelligence · OpenAI Founding and Mission Drift · Competitive Dynamics in AI Development

Making Sense with Sam Harris · Oct 2, 2025

#435 — The Last Invention

Author of 'Superintelligence' (2014); featured expert on existential AI risk in the series

Artificial General Intelligence (AGI) timelines and development trajectories · Artificial Superintelligence (ASI) existential risk and control problems · AI safety research and alignment techniques

The Last Invention · Oct 2, 2025

EP 2: The Signal

Author of 'Superintelligence'; discusses early AI researchers' optimistic timelines and lack of safety considerations.

Alan Turing's Contributions to AI Philosophy · WWII Enigma Codebreaking and Machine Intelligence Origins · Dartmouth Summer Program (1956) and AI Field Founding

The Last Invention · Oct 2, 2025

EP 1: Ready or Not

Philosopher who published 'Superintelligence' (2014); foundational work on AI existential risk

Artificial General Intelligence (AGI) timelines and development · Artificial Superintelligence (ASI) existential risk · AI alignment and control mechanisms

The TED AI Show · Apr 11, 2025

The magic intelligence in the sky | Good Robot

Philosopher who helped popularize the paperclip maximizer thought experiment; faced criticism for past controversial statements

AI Existential Risk and Superintelligence · Paperclip Maximizer Thought Experiment · AI Safety and Alignment