Geoffrey Hinton

Mentioned in 29 analyzed podcast episodes across 10 shows

A pioneering AI researcher and Nobel Prize winner who made foundational contributions to deep learning, including the backpropagation algorithm and neural network training techniques that enabled modern AI. After decades advancing the field, he left Google to publicly advocate for addressing AI existential risks and safety concerns. Podcasts discuss him as both a seminal figure in AI's development and an important voice warning about the technology's potential dangers.

Episode Appearances

Making Sense with Sam Harris · Apr 10, 2026

#469 — Escaping an Anti-Human Future

Cited as example of AI pioneer who became deeply concerned about AI risks after recent breakthroughs

AI Safety and Alignment · Artificial General Intelligence (AGI) · AI Arms Race Dynamics

Latent Space: The AI Engineer Podcast · Apr 3, 2026

Marc Andreessen introspects on The Death of the Browser, Pi + OpenClaw, and Why "This Time Is Different"

Pioneer in neural networks who lived to see breakthroughs; example of researcher vindication

AI Scaling Laws and Moore's Law Analogy · LLM Reasoning Breakthroughs (O1, OpenClaw) · Agent Architecture and Unix Philosophy

Breaking Points with Krystal and Saagar · Feb 24, 2026

2/24/26: Ro Khanna Sounds Off On DNC, Markets Crash, AI Exec Loses Control Of Bot, UFO Files

Nobel Prize winner who quit Google to speak publicly about AI existential dangers

DNC Transparency and Gaza Policy Impact · Iran War Powers Resolution and Congressional Authorization · Epstein Files Suppression and DOJ Redaction

This Day in AI Podcast · Feb 20, 2026

Gemini 3.1 Pro, Claude Sonnet 4.6 & The OpenClaw Hire That Killed the Chatbot Era - EP99.35

Used as example in AI model testing for creating a 'doom center' monitoring application

AI Model Benchmarking · Agentic AI Workflows · AI Model Pricing Strategy

Latent Space: The AI Engineer Podcast · Feb 12, 2026

Owning the AI Pareto Frontier — Jeff Dean

AI pioneer mentioned as co-inventor of the distillation technique in 2014

Pareto Frontier Optimization · Model Distillation Techniques · TPU Hardware Design

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas · Feb 9, 2026

343 | Tom Griffiths on The Laws of Thought

AI researcher who developed backpropagation algorithm enabling training of multilayer neural networks

Bayesian Inference and Probabilistic Reasoning · Formal Logic and Mathematical Foundations of Thought · Neural Networks and Deep Learning Architecture

Possible · Feb 4, 2026

CryptoPunks creators: from art experiment to cultural movement

AI researcher at University of Toronto; cited as example of persistence in AI despite 1990s skepticism

CryptoPunks creation and evolution · NFT immutability and smart contract design · Digital identity and profile pictures

Huberman Lab · Feb 2, 2026

How Dopamine & Serotonin Shape Decisions, Motivation & Learning | Dr. Read Montague

AI pioneer who persisted with neural networks when they were considered ineffective

Dopamine as learning algorithm · Temporal difference reinforcement learning · Serotonin-dopamine opposition

Machine Learning Street Talk (MLST) · Jan 25, 2026

VAEs Are Energy-Based Models? [Dr. Jeff Beck]

Credited with developing negative sampling and contrastive learning methods for representation learning

Energy-Based Models and Bayesian Inference · Variational Autoencoders (VAEs) · Joint Embedding Prediction Architectures (JEPA)

This Day in AI Podcast · Jan 23, 2026

The AI Productivity Paradox: Why Doing More Feels Like Burnout: EP99.31

AI cognitive overload and productivity paradox · Context management and knowledge graphs · Enterprise AI implementation strategies

This Day in AI Podcast · Jan 19, 2026

2026 Existential Crisis, Claude Code Hype & Is SaaS Dead? EP99.30-WIZARDS

Claude Code capabilities and limitations · Agentic AI workflows vs collaborative AI · Enterprise AI cost management

Making Sense with Sam Harris · Jan 16, 2026

#453 — AI and the New Face of Antisemitism

AI pioneer; stated that current deep learning approaches are not the path to AGI but did not elaborate on causal limitations

AGI and Causal Reasoning Limitations · AI Alignment and Existential Risk · LLM Scaling and Computational Limits

3 Takeaways™ · Jan 13, 2026

A Smarter, More Hopeful Future of Work - If We Get Artificial Intelligence Right (#284)

Neural-network pioneer and Nobel Prize winner cited for predicting radiologist displacement within five years, a prediction that has not materialized

AI and job displacement · Future of work and employment · Automation and wage inequality

Digital Disruption with Geoff Nielson · Jan 12, 2026

AI's Most Dangerous Truth: We've Already Lost Control

Co-creator of backpropagation algorithm; left Google to warn about AI dangers; noted CEO perspective on intelligence

AI Existential Risk Assessment · AI Safety Testing and Red Teaming · Agentic vs. Non-Agentic AI Design

Tech Won't Save Us · Jan 8, 2026

We All Suffer from OpenAI’s Pursuit of Scale w/ Karen Hao [Replay]

Deep learning pioneer whose belief in computational intelligence influenced scaling strategy; now advocates for AI existential risk concerns

OpenAI's Strategic Evolution and Mission Drift · Scaling Paradigm in Generative AI Development · Labor Exploitation in Content Moderation and Data Preparation

Moonshots with Peter Diamandis · Jan 6, 2026

Elon Musk on AGI Timeline, US vs China, Job Markets, Clean Energy & Humanoid Robots | 220

Artificial General Intelligence Timeline · Universal High Income Implementation · Space-Based Data Centers

The New Yorker Radio Hour · Dec 26, 2025

The Company Behind the A.I. Boom

Known as the 'Godfather of AI'; quit Google to warn humanity about AI risks; represents the pessimistic counterpoint to Huang's optimism

Nvidia's market dominance and competitive moat in AI hardware · Jensen Huang's leadership philosophy and technical vision · Parallel computing architecture and neural network synergy

Your Undivided Attention · Dec 18, 2025

America and China Are Racing to Different AI Futures

Nobel Prize winner in physics; participated in International Dialogues on AI Safety in Shanghai

US-China AI Competition and Race Dynamics · Artificial General Intelligence (AGI) Development Philosophy · AI Safety and Existential Risk Management

Radiolab · Dec 12, 2025

The Alien in the Room

Collaborator with Sejnowski on early machine learning research that established foundation for modern deep learning

Neural Network Architecture and Learning Mechanisms · Machine Learning vs. Rule-Based Programming · Large Language Models and Transformer Architecture

Digital Disruption with Geoff Nielson · Dec 8, 2025

Go All In on AI: The Economist’s Kenneth Cukier on AI's Experimentation Era

One of three 'Godfathers of AI'; pioneered the backpropagation algorithm that enabled training of deep neural networks; foundational to modern AI development.

AI Hype vs. Reality Assessment · Machine Learning and Deep Learning Fundamentals · Data-Driven Decision Making in Organizations

Digital Disruption with Geoff Nielson · Nov 24, 2025

How AI Will Save Humanity: Creator of The Last Invention Explains

Lifelong AI believer since 1972; now publicly warns of existential risk; exemplifies shift from acceleration to caution among pioneers

AI Existential Risk and Safety · Artificial General Intelligence (AGI) Development Timeline · AI Regulation and Governance

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas · Nov 24, 2025

336 | Anil Ananthaswamy on the Mathematics of Neural Nets and AI

Persisted in neural network research during AI winter; contributed to backpropagation algorithm development

Perceptron Convergence Proof · Backpropagation Algorithm · Gradient Descent Optimization

The Last Invention · Nov 20, 2025

EP 8: The Accelerationists

Effective Accelerationism · AI Safety vs AI Progress · Existential Risk Portfolio Management

Digital Disruption with Geoff Nielson · Nov 17, 2025

AGI Is Here: AI Legend Peter Norvig on Why it Doesn't Matter Anymore

Expresses concern that AI development is dangerous; represents more pessimistic view than Norvig

AGI Definition and Terminology · Language Model Scaling and Effectiveness · AI Safety and Responsible Development

The Last Invention · Oct 16, 2025

EP 4: Speedrun

AI pioneer; connectionist researcher; received Turing Award with Bengio and LeCun; expressed concerns post-ChatGPT

AI Existential Risk and Superintelligence · OpenAI Founding and Mission Drift · Competitive Dynamics in AI Development

The Last Invention · Oct 9, 2025

EP 3: Playing the Wrong Game

Pioneer of neural networks and backpropagation; rejected by AI community for decades before 2012 ImageNet validation

Deep Blue vs. Garry Kasparov chess match (1997) · Symbolic AI and expert systems approach · Neural networks and connectionist AI

Making Sense with Sam Harris · Oct 2, 2025

#435 — The Last Invention

Quit Google to publicly warn about AI existential risks; called 'Godfather of AI' for his foundational work

Artificial General Intelligence (AGI) timelines and development trajectories · Artificial Superintelligence (ASI) existential risk and control problems · AI safety research and alignment techniques

The Last Invention · Oct 2, 2025

EP 1: Ready or Not

Nobel Prize-winning AI researcher who quit Google to publicly warn about existential risks from AI development

Artificial General Intelligence (AGI) timelines and development · Artificial Superintelligence (ASI) existential risk · AI alignment and control mechanisms

Big Technology Podcast · Mar 19, 2025

Why Can't AI Make Its Own Discoveries? — With Yann LeCun

Co-developer with LeCun of foundational deep learning ideas

Large Language Model Limitations · Scientific Discovery and AI · AI Reasoning and Planning