The Last Invention

Ezra Klein on the Uncertain Politics of A.I.

61 min
Dec 19, 2025
Summary

Ezra Klein discusses the emerging politics of artificial intelligence with host Andy, arguing that while AI poses significant risks, the most pressing concerns are near-term societal impacts rather than existential threats. Klein advocates for pragmatic, incremental policy approaches focused on protecting children, managing labor market disruption, and maintaining human autonomy, while warning against both reckless acceleration and paralyzing regulation.

Insights
  • AI's political impact will likely reshape society before its existential risks materialize, requiring immediate policy attention to near-term harms like labor displacement and dehumanization rather than speculative far-future scenarios
  • The race-to-AGI dynamic driven by geopolitical competition with China creates misaligned incentives where safety concerns are subordinated to speed, making corporate and government actors unlikely to voluntarily slow development
  • Effective AI governance requires building regulatory capacity and societal consensus in real-time through incremental problem-solving, similar to how societies adapted to previous technological revolutions, rather than waiting for comprehensive future frameworks
  • The technology's dual-use nature and intimate integration into daily life (relationships, work, childhood development) creates unique governance challenges distinct from nuclear energy or weapons regulation
  • The Democratic Party lacks a coherent AI policy agenda and is genuinely wrestling with how to balance innovation benefits against labor market disruption and social stability risks
Trends
  • Geopolitical competition with China is becoming the primary driver of AI development policy, overriding safety and ethical considerations in both U.S. and international governance
  • AI-driven labor market disruption is shifting from theoretical concern to immediate policy priority, with job displacement and workplace dehumanization emerging as near-term political flashpoints
  • Regulatory approaches are diverging along traditional partisan lines, with Democrats favoring oversight architecture and Republicans pushing deregulation and state preemption
  • AI's integration into intimate human relationships and childhood development is emerging as a critical policy frontier, with minimal current guardrails despite significant potential harms
  • The 'abundance narrative' promoted by AI labs is becoming a central political argument for accelerating development, but faces skepticism from those noting existing policy barriers to deploying known solutions
  • Post-ChatGPT normalization is reducing perceived existential risk concerns among the public despite accelerating AI capabilities, creating a dangerous gap between technological progress and social preparedness
  • AI governance is becoming a key differentiator in Democratic Party positioning and elite political discourse, with influence concentrated among a small group of influential commentators and advisors
  • The failure of nuclear energy regulation is being invoked by both accelerationists and cautious regulators as a cautionary tale, but with opposite policy conclusions
Companies
Anthropic
AI lab led by Dario Amodei, discussed as an example of a company pursuing AGI while claiming a safety focus
OpenAI
Creator of ChatGPT, referenced as catalyst for rapid AI adoption and societal change
Meta/Facebook
Mentioned for leaked documents showing AI safety failures with inappropriate content generation
NVIDIA
Chip manufacturer subject to export controls in U.S.-China AI competition
New York Times
Ezra Klein's employer where he publishes columns on AI and politics
People
Ezra Klein
New York Times columnist and podcast host discussing AI politics, regulation, and societal impacts
Eliezer Yudkowsky
AI safety researcher and 'doomer' advocating for AI development pause due to existential risks
Dario Amodei
Head of Anthropic discussing AGI development and alignment challenges
Sam Altman
OpenAI leader referenced as advocate for AGI development and abundance narrative
Geoffrey Hinton
AI researcher warning about existential risks and leaving industry to advocate for caution
Yoshua Bengio
AI scientist quitting his role to warn about AGI risks and advocate for safety measures
Nate Soares
AI safety researcher dedicated to preventing AGI development
Marc Andreessen
Investor and accelerationist debating Klein on nuclear energy precedent and AI regulation
William MacAskill
AI safety advocate discussing AGI preparedness and existential risk scenarios
Demis Hassabis
AI lab leader referenced as caring about safety while competing in AGI race
Derek Thompson
Co-author with Klein of 'Abundance' book critiquing Democratic governance failures
Barack Obama
Former president who endorsed Klein's 'Abundance' book as must-read
Bernie Sanders
Senator quoted expressing concerns about AI and wealth concentration
Josh Hawley
Republican senator quoted opposing AI development without worker protections
Richard Nixon
Referenced for his Project Independence nuclear energy proposal of 1973
Elon Musk
Tech leader who signed open letter calling for AI development pause
Steve Wozniak
Apple co-founder who signed open letter calling for AI development pause
Reid Hoffman
Techno-optimist investor arguing AI will solve major global problems
Tyler Cowen
Economist and commentator Klein reads for AI and technology perspectives
Dwarkesh Patel
Technology commentator Klein follows for AI industry insights
Quotes
"The idea that this is vaporware, the idea that this is modest, and the idea that it will stop here. Just none of those feel like they can bear any weight any longer."
Ezra Klein, early in the episode
"I don't think that if you find Donald Trump as alarming as I do, if you find what the modern right has become as alarming as I do, you can spare yourself the reckoning with how the failures of the Democratic Party, or liberalism writ broadly, played into that."
Ezra Klein, mid-episode
"The slow handing over of our autonomy and self-direction to AI systems, in a way that each individual decision is rational and almost inconsequential, but collectively, in 20, 30, 40 years or less, huge swaths of our society are out of our control and we no longer even understand the conditions and algorithmic systems under which they are being run, seems extremely possible."
Ezra Klein, discussing near-term risks
"The alignment problem is not first and foremost a question of black box algorithms. The alignment problem is first and foremost markets and geopolitics and corporations."
Ezra Klein, on systemic misalignment
"The key question of AI isn't just how the AI changes. That's what we spend the most time talking about, but it is how the AI changes us."
Ezra Klein, in the final framework discussion
Full Transcript
Hi, this is Andy. And for today, we have our first episode in our continued coverage of the AI revolution. If you're new here, The Last Invention was an eight-part series that is designed in part to help the newcomer get up to speed on this fascinating and transformative moment that we're living through. And now we're following that story as it continues to evolve. If you are just joining us, I recommend going back and starting from episode one. However, this episode does stand on its own, so I leave it to you. There are so many disagreements that are swirling around right now concerning the prospects of our AI future. But one thing that most people agree on is that this technology is poised to reshape our politics, and soon. So today I'm speaking with Ezra Klein about the emerging and uncertain politics of artificial intelligence. The idea that this is vaporware, the idea that this is modest, and the idea that it will stop here. Just none of those feel like they can bear any weight any longer. Ezra is a columnist for the New York Times. He's the host of The Ezra Klein Show, one of the top 10 podcasts in the world. And over the past few years, he has become one of the most influential voices in American politics. And that's in part because of his willingness to buck what I think of as the intense conformity pressures of this political moment. He engages in these good faith, respectful debates, even with people who he vehemently disagrees with.
And maybe more impressively, he has shown this willingness to challenge his own political side, even on issues that can be uncomfortable. Last year, he and Derek Thompson co-authored this book called Abundance. And in it, they make an argument for how we might go about building a better future. And a large part of that argument was them critiquing the Democratic Party, their own party. Democrats have a problem that runs deeper than the 2024 election. In all the ways that they believe, they've become a barrier to progress. Look at the places they govern. The cost of living is too high. It is too expensive to get childcare. It is too expensive to buy a home. You cannot be the party of working families when the places you govern are places working families cannot afford to live. In the last few decades, Democrats took a wrong turn. They became the party that believes in government, that defends government, not the party that forces government to work. The book was among the bestsellers of 2025. It was touted by Barack Obama and others as a must-read. And it's led to the situation where Ezra is being courted by top elected Democrats who are seeking his advice about how they develop their agenda going forward. It really does seem that what Ezra says these days, the arguments that he puts forward, they tend to ripple outward, shaping elite conversations, and eventually shaping policy itself. And over the past decade, long before ChatGPT, Ezra has been among a handful of journalists who've really taken the AI conversation and the AI debate seriously. He has been exploring a lot of the same ideas that we've covered here on The Last Invention. He's been speaking with a lot of the influential figures that we either spoke to or profiled here. And so I was eager to hear from him where he stands in the big AI debate and what positions he's advocating for as that debate collides with the messiness of American politics. Hi, Ezra. Thanks for doing this. Of course.
Excited to talk AI. I want to start just on you and how is it that you understand the role that you have come to play in American politics right now? Because you're a columnist. You are now a bestselling author. But you are actually somebody who has a meaningful impact in shaping the political conversation, shaping the political landscape. As you've gained that influence, do you feel that you're now occupying a different lane or a different role in politics? The goal of my work is not different than it's ever been. I don't feel like what I am doing now is so different than what I have done for the whole rest of my career, which is to try to understand the world as thoroughly and deeply and open-mindedly as I can through reporting and research and reading and thinking and podcasting and writing and, you know, all the tools we have in this trade. And then to make arguments based on that work that point in the direction I think would lead to better politics and better policy. How people are responding to that work, the level of response that work gets, has definitely stepped up a couple of levels, shifted up a couple of gears. And I try not to think that much about it for fear of going completely insane. Probably wise. I mean, I hear what you're saying. I respect what you're saying. But it's also no secret that as the Democrats have been doing all this soul-searching about the future of their party and their agenda, they are listening to you. They are seeking you out. And you are deciding to spend significant amounts of time telling them all the ways that you think that they have failed to make the future what it could be, to make life affordable, to make the places where they hold power better.
Right, and that is a decision, that's a choice. And I'm just trying to get an insight for how you understand your role right now. I don't think that if you find Donald Trump as alarming as I do, if you find what the modern right has become as alarming as I do, you can spare yourself the reckoning with how the failures of the Democratic Party, or liberalism writ broadly, played into that. One kind of reaction I've gotten quite a bit over the past year is why focus on the problems of the left when the sins of the right are so present. And to me, it's because that's the entire leverage that exists to change course. I mean, if you don't like the side that's winning, then the other side better become pretty appealing. And to the extent that, you know, mistakes were made, and fundamental mistakes were made, you have to reckon with those. You have to confront those. But I'll also say that for me, again, this goes back a long time in my work, I was working on the ideas of abundance during the Biden administration. And that wasn't motivated by a political loss for the left. That was motivated by the recognition that in places where Democrats govern, both nationally and at the state and local level, there were huge failures that were clear in that governance, that the Democratic Party, that liberalism, was not delivering what it was promising, that the green energy was not being built fast enough, that the houses were not being built, the homes were not being built in the places where Democrats govern, that the big signature projects like high-speed rail and the Second Avenue subway and the Big Dig were not coming in on time, or in the case of high-speed rail, maybe ever, to say nothing of on budget. And so, again, in my work, more than trying to be opinionated, I try to be honest.
And to the extent I really care about things like affordable housing and decarbonization, and I do, then nobody should be more frustrated than me when it's not coming into reality. Yeah. Well, before we dive any deeper into politics and policy and all that, I'd love to know where it is you stand right now when it comes to the big, bold claims being made around AGI. I know that on your podcast you recently had on Eliezer Yudkowsky, the kind of doomer of all AI doomers. You believe that the misalignment becomes catastrophic. Yeah. Why do you think that is so likely? That's just like the straight-line extrapolation from: it gets what it most wants. And the thing that it most wants is not us living happily ever after. So we're dead. You've had a number of conversations over the years with the people who I think of as the AI scouts, people like William MacAskill. We might get to the stage where we develop artificial systems that are as powerful or much more powerful than human beings. That's something we should be really prepared for. You've also talked to some of the leaders of these AI labs, the people who are trying to make AGI as fast as possible. People like the head of Anthropic, Dario Amodei. You know, we'll have what I've described as like a country of geniuses in the data center. And like, this is weird. Like, it's going to change the economy. It's going to accelerate the pace of science. It's going to, you know, pose global alignment and national security risks. It may pose economic problems. The upside is huge. The potential for disruption is also huge. And so you, like me, have been engaging with people who think that we are approaching some sort of hinge moment in human history, for better or for worse. And I want to know, do you think that seems likely? Is it time, do you think, that we at least take the possibility extremely seriously? Yeah, I believe we're at some kind of hinge, as you put it, event horizon.
Whether that event horizon is foom, the superintelligence emerges, becomes recursively self-improving, and that's the end of human self-determination, either we have aligned the machine god correctly and it works in our interest, or we haven't and it, you know, eradicates us or enslaves us, I don't know about that. I'm not saying it's actually out of the question. I just genuinely don't know. But the idea that society is about to change in a fundamental way, I don't even really think is arguable. I don't even think it's in the future. I think it has already happened, is happening. Look, you think back 20, 30 years. I'm of the generation, you are too, really the last generation to remember a moment in my own life before a personal computer. A moment in my own life before the internet. And in the time just since I have been a kid, since we went from a 28.8K baud modem to 56.6K to 128K, then, oh my God, we got cable internet in the house. That too was an event horizon, to imagine the world prior to the internet, prior to personal computers, prior to phones, cell phones and smartphones. This is a different world. And the world that AI has already ushered in, a world in which there is a technology that will be just as pervasive but far more intimate. A technology that will be not a tool, not even an environment that we inhabit in the way that I think our smartphones and the internet are some form of an environment that we inhabit, but a series of companions that are intimately woven into our lives in all kinds of different roles, some seen, some unseen. And that has happened with such startling speed. I mean, ChatGPT emerges in 2023, prior to that functionally no one has used a chatbot, and now, I mean, what is the percentage of Americans who use one every day? Oh, tens of millions. I think the latest number I saw was 90 million people. The idea that, I think my first big piece on this for the Times, I'd written on this elsewhere, was called This Changes Everything.
And I really do believe that. Now, how it changes everything, to what degree it changes everything, we can argue, and none of us truly know. But the idea that this is vaporware, the idea that this is modest, and the idea that it will stop here. Just none of those feel like they can bear any weight any longer. What about the existential risk piece? When you hear Geoffrey Hinton and other people who are very close to this technology soberly saying that they think we need to take seriously the risk that artificial general intelligence may be a human extinction level risk, that it might one day outsmart us and outcompete us. Do you think that we need to take that risk seriously in a way that will, I think, shape the rest of this conversation? Because if you do take that risk seriously, then conversations around regulating it are gonna just fundamentally be different than, say, conversations about regulating the phone companies, which is obviously really important, but less existential. I do think we need to take the risk, the possibility, seriously. I do take it seriously myself. But I will also say that my view and my judgment now, having been in these conversations since well before ChatGPT, is that the most effective way to take that risk seriously is not to focus on that particular risk. I think that, you know, right after ChatGPT was unleashed, released, you had a big boomlet in discussion of existential risk. And what was your P-Doom? You know, was it 0.1? Was it 0.4? Right. And P-Doom is this idea of like, what percentage? Yeah. What's the probability you place on doom? Exactly. Eliezer Yudkowsky will tell you it's 98 or 99 percent. And, you know, there was even this moment very shortly after ChatGPT comes out that a bunch of AI luminaries, including people at the big companies, sign a document calling for a pause. Elon Musk, Steve Wozniak and other high-profile tech leaders are asking the government to step up and slow down the growth of artificial intelligence.
They are calling for a pause on developing the most powerful systems for at least six months, issuing this open letter that warns, in part, quote: "AI systems with human-competitive intelligence can pose profound risks to society. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." And what has happened since then is acculturation. And AI is a dual-use technology. It's not a nuclear weapon, where the only thing it can do is destroy a city. And the use people are putting it to is cheating on their homework, is asking questions about their personal life. Just yesterday, I uploaded to ChatGPT a picture of my desk, which is an endless disaster zone. And I said, can you help me think about how to organize this into something more conducive to thought, to reading, and aesthetically less repulsive? And it was incredibly helpful. That's great. And one of the problems, I think, that the people who care about existential risk are having is that the more people become used to using the technology, the less plausible claims that it will kill everybody become, even as the omnipresence of the technology and the weaving of it into people's lives is exactly what is predicted by those claims. Now, the other thing I want to say about this is that I also don't think existential risk is the right place to be thinking about right now. I'm not a believer in what get called the fast takeoff scenarios, where, you know, some night the AI wakes up, becomes recursively self-improving, and by morning either has rebuilt itself into a machine god or has quietly in the background come up with a plan to do so over a six-month period. Mm-hmm. I think there's way too much friction in the world. I think you're stacking too many assumptions on top of each other to come up with that.
But the slow handing over of our autonomy and self-direction to AI systems, in a way that each individual decision is rational and almost inconsequential, but collectively, in 20, 30, 40 years or even less, huge swaths of our society are out of our control and we no longer even understand the conditions and algorithmic systems under which they are being run, seems extremely possible. And then you get into a variety of risk scenarios. We can talk about the question of, you know, a misaligned AI trying to kill us all because it wants to take the world's materials to create more GPUs. I, in some ways, am less concerned about that than, obviously, human misalignment, you know, rogue states, other superpowers, our superpower with a bad government, terrorists, etc. There is the just diminishment of self-direction on the part of human beings, so we become more like the blobbish people of Wall-E, you know, being zoomed around in our robo chairs, enjoying our consumer society while all the actual decisions are made by AI systems we don't understand and have long ago given up on actually controlling. There's a deterioration of our own skills and our own thinking, our own attention. I think the internet, and particularly algorithmic media, are making us collectively stupider. I think that is already happening.
And so the idea that we would outsource much more fundamental thinking and creative work. You know, where you're having AI write the first draft, all the learning is in writing the first draft. When you're having AI summarize the book, all the learning is in reading the book yourself. So just slowly we become less capable and it becomes more capable. These are all forms of existential risk. They don't all lead to extinction. I don't think most of them lead to extinction. But do they lead to a fundamentally dehumanized world over time? Yeah, very possibly. And yet I think that the way to think about risk, and the way to develop the muscle of regulating and thinking socially about artificial intelligence and its role in our society, is to attack the problem in terms of its much nearer-term questions. Like, should children be exposed to companion bots at all? And if so, under what conditions? If we cannot set some serious guardrails on how AI interacts with children, if we can even rouse ourselves to do that, when no one I know, and I don't think anybody in society, seriously believes that we should have unregulated relational AIs with access to seven-year-olds, if we cannot even rouse ourselves to do something about that, to come up with not just regulations but some views about how we want this to work, I don't think it's helpful to imagine we're going to figure out policy answers for existential risk. Like, forget existential risk. What about the risks right here, right now? And so I actually think, in the same way that you don't want to start in the gym by trying to set a world record in the deadlift, let's just lift some hand weights at the moment, as opposed to doing what we're basically doing, which is nothing, just watching the tsunami gather force right off the shore.
So you don't want to dismiss these big existential risks that are coming from the quote-unquote AI doomers, but you're saying that your more immediate concern is something more along the lines of an Aldous Huxley-esque dystopian future, where we are making what seem like very rational decisions in the short term, these small decisions that one after another after another, they add up to this world where we might be distracting ourselves or entertaining ourselves into some state where we don't even mark the date on the calendar when we hand over too much power to the machines, or where we hand over our rights or our bedrock values to these machines. That's where you're saying we should be aiming our concerns. I am saying that is a pathway to virtually all of the risk scenarios. Unless you believe in fast takeoff, which at the moment I do not. And I am also saying that the way you handle the scenarios of the future is by handling the scenarios of the present. That there isn't some world where we do nothing and then all of a sudden we do everything. We have to develop a progressive, in the sense that it evolves over time and is a dynamic process, approach to being a society that has views, and it's harder for us than people think, a society that has views on what technology should do and be and how it should and should not be used. If we can't do that in real time, we are not going to do that at the end of time. After a short break, Ezra's going to be back to talk about what policies he is advocating for and what approach we as voters, we as citizens, might need to adopt to get ready for what's coming. Stay with us.
All right, Ezra, what is it that you think that the government can and should do as we make this transition into a world with more and more powerful AI systems? What is the policy agenda that you think that we could adopt that might steer us towards the better outcomes, maybe even amazing outcomes, and steer us away from the worst possible outcomes? I don't think there is the one true policy agenda sitting in somebody's cupboard somewhere.
I think there's just simply too much uncertainty for policy to be a full answer yet. What I will say is I think there are a couple buckets of policy where things are valuable. So one where I think there's been a lot of thinking, even if there's not been that much legislating, is in detection and red-teaming and oversight. So different ways of testing new AI systems to ascertain their capabilities: have we breached what might be called general intelligence? Are we at superintelligence? What might that look like? What is actually even happening in these labs? Are they fortified against espionage from China, among others? There's a set of policies that are about having early detection and sufficient visibility that at some level we can at least, as a society, know what is going on. It's not even 100 percent clear we can do that, because we often don't understand what these systems can do for a little while. Right. Even the people who built them seem to be confused. Even the people who built them. And the reason I also think it's unlikely we would pull that fire alarm is that I often say that I think there are three goals that I often hear policymakers describe in AI: make it fast, make it safe, make it ours. And I think make it ours, against China, is always a goal that dominates. In part because they've decided that's also how you make it safe. And so that becomes: you have to make it fast to make it ours, and the only way for it to be safe is for it to be ours, which means anything that conflicts with fast, even when it might make it safe, kind of gets thrown out the window. I mean, that's my understanding of where we're at right now, where even the people who are worried about the existential risk, people like Dario Amodei, they have been persuaded that the safest thing to do is make a safe AGI before anyone else makes a dangerous one. The alignment problem is not first and foremost a question of black box algorithms.
The alignment problem is first and foremost markets and geopolitics and corporations. Markets, geopolitics, and corporations are misaligned with what we might think of as human flourishing in all kinds of ways day to day. They're practically misaligned here. The corporate actors are competing with each other. The geopolitical actors are competing with each other. And they are first going to try to win the competition with each other and only secondarily worry about what that will mean for the human race. And that's true even when these people, like Dario or Altman or, you know, Demis, I think actually quite deeply care. But once you've persuaded yourself of what your investors and your regulators need you to believe, and what you need to believe to be in the competition, which is that the first question is being first, then you can justify quite a bit. So you have, putting that not aside but keeping it in mind, the detection bucket. All right. So general theme one: policy around oversight, protections, all that kind of stuff. What else? Then I think there is what should be getting more attention, which is a bucket around children. I just don't think we should be experimenting on them. I don't think we know what it means for human beings to grow up with lots of artificial companions before they have been well formed by other human beings. And I don't think we should run it as an experiment on kids. If we want to do it to ourselves as adults, fine. But I don't think it should be an experiment on kids. And I sure as hell don't think it should be left up to the least responsible corporate actors to decide what that looks like. I mean, already you have Grok, in one of its instantiations, as I understand it, taking the visual form of a busty anime figure that takes off clothes the more you use it. 
You had the leaked documents out of Facebook, where they were having an AI tuned to be John Cena that was very easily being talked into coming up with highly sexual fantasies about underage kids and then imagining itself being taken to jail. I just don't know how we would look at the last 15 years of engagement competition and say, yeah, we want to unleash that, but much, much smarter and much more human-like in its communications, on children. So to me, I think you should have pretty tough guardrails around what children can use, and also that it should be verified. We should have age verification. And so if you want to be making AIs that kids under 13 are using, or even kids between 13 and 17 are using, you should have to pass some pretty stringent accountability measures. We need to know much more about those AIs and how they're being used than we do right now. So I think there's children, right, where you can be paternalistic in a way it's very hard to be with adults. And then there's going to be, in the near term, a big bucket around what it means for society to become day by day more dehumanized. I mean, I'm sure, Andy, you've talked for this show, or talked in your reporting, to people who are now on the job market and, you know, being interviewed by AI interviewers. Obviously, the question about the labor market is high up on everybody's mind. And I think even before you see mass unemployment, you're going to see the dehumanization of the labor market, where people are going to be collaborating with but also being surveilled by company AIs in a strange way, where looking for a job becomes something where you're in a strange conversation with AIs. Things that used to be between people will become between people and AIs that neither we nor their overseers or corporate parents really understand. And I think that that will be very destabilizing. And then you will get job loss at some point. I don't think that's all that arguable. 
I don't find the arguments against it convincing. Either the AI thing completely fails, but I don't think that's going to happen, or you're going to get certain amounts of job loss. And then things are going to get very destabilized, because people have done everything our society told them to do, and now what we taught them to do is no longer needed, because we created a technology whose job is to imitate us. You know, in the immigration debates, among people describing the effect of immigration on wages, there have been endless debates about complementary and substitution labor. Complementary labor is somebody coming and doing something you would not otherwise be doing. If Donald Trump had his wish and we did mass deportations, or Stephen Miller had his wish and we did his form of mass deportations, there are lots of jobs that we just would not be able to have in this country, because we wouldn't have enough people. We wouldn't have the pay scales for them. So that labor is considered complementary in economics, and it's not hurting your wages. It's adding to the level of goods and services you can get. Then there's substitution labor, where, and again, people argue about this, you can end up in a very direct competition for a scarce number of jobs if people are doing the same things. And what we've created in AI is a technology that is designed to mimic us as closely as possible: trained on our data, trained on the way we do our jobs, trained on the way we work, trained on the way we think, but malleable, easily ordered around, doesn't sleep. So we've created a technology of substitution. Now, will it only be used for substitution? Of course not. It'll be used for many things. But as this technology gets better, what corporations want from it is to substitute for human labor. Human labor is pricey. It unionizes. It has complaints. It has ideas. 
And it's not that they're going to want no people, but they're going to want fewer people, and they're going to want those people working more with AIs. And to say that this transition to a labor market with a lot of AI inside of it is going to be bumpy is, in my view, to dramatically understate the situation. So we've got early detection, we've got kids, we've got jobs, we've got labor. And then you have scams, you have hacking. I mean, this is the thing about having a technology that weaves into everything: it's going to weave into everything. I have a lot of worries about just AI social relationships. I keep saying this, but before AI takes over the economy, it's going to take over intimate relationships. It's better at doing that than, frankly, it is at doing somebody's job at the moment. And I don't know. I mean, it'll be good for some people. It's going to be very bad for other people as it becomes easier to have a social world filled with computers. I mean, you've seen the movie Her. Mm-hmm. I love that movie. You know, the movie Her deals with this problem at the end by having all the AIs rapture themselves into some other AI universe, because it has no answer to the problem of what happens when, for many people, the simulated voice on your phone is a better companion than other human beings are. And trying to imagine the end of the movie Her, but where none of the AIs leave, and in fact they keep getting better, is pretty chilling. All right, so when you're thinking about all the ways in which this could go badly, and you're thinking about how we might as a society mitigate those risks, are you imagining a world where we are coming up with a number of different very strategic, maybe even technocratic, small regulations here and there, where we're beefing up a regulatory body or maybe creating a new government agency? 
Or are you thinking of something like a whole new approach to how we govern ourselves and how we deal with threats like this? And I'm asking in part because I've read your book, Abundance, and in there you spell out all the ways in which the current ways we regulate things like housing or high-speed rail and infrastructure can often be a massive barrier to progress. And I'm asking in part because I've been reading about the Industrial Revolution and trying to understand all the ways in which technology reshaped the world in that era. And in that case, what you see is a number of people embracing things like socialism and communism and liberalism. And I sometimes wonder, are we going to need a whole new ism? Does the new technological revolution demand of us that we come up with a novel political philosophy or political movement? The answer is always going to have to be both on this. I mean, this is what the Industrial Revolution teaches you too. The Industrial Revolution had both an incredible efflorescence of what you're calling technocratic, but I would just call intermediate, incremental, efforts to manage problems, you know, trying to handle sanitation, handling the problem in front of you. And also it created the conditions for tremendous political upheavals, for new ideas, for new forms of social and human organization. If we go the technocratic route, as you're calling it, which I think is really just trying to do things in real time, whether they are complex and technocratic or blunt and simple, we are still going to have destabilization that's going to give rise to profound new ideologies, new religions, new theories of social organization. I mean, you know, in the long run, if the hopes of Dario Amodei and, you know, Sam Altman and others are fulfilled. 
And what we get is not a misaligned machine threat, but instead a generator of material abundance beyond what any of us could ever have imagined. I mean, then maybe what you are looking at is a period in which we are transitioning out of the punctuated moment in human history (it's not been all of human history) where we are organized around wage labor. Maybe, right? But I don't think you could sit where we are now and say, well, we should be preparing for the end of wage labor. If that ends up being where we are, okay, the transition is going to be very, very rough, I think it is fair to say. But maybe that leads you to a place where you have more radical left ideas of a fully automated economy in which the gains are shared in universal basic income or universal basic wealth, and you can have planning because of the level of technological sophistication and predictive modeling. Great, but we're not there now. And this is the point I'm making about existential risk too, actually. I think there can be an allure to using visions of the future that we don't really have any traction on in the present as a way of avoiding what you can do in the present. And my view is that not only is that a little bit useless, because all you do is end up immunizing people, getting them used to these more negative visions of the future, but that the pathway to being able to deal with that future is consistently dealing with the present. All we really have is the moment, and a world in which we have a much better structure and a much more sophisticated relationship between society and the state and the technology that is all around us is a world in which we are going to be better capable of making not just small jumps, but big ones, when we eventually need to do that. It's fun to sit around. And I lived in the Bay Area and did a fair amount of this for a while. 
It is fun to sit around and debate whether you think AI is going to lead to a quintupling or a sextupling or a thirtyfold increase in economic productivity growth. But the truth is, given that it has not done that yet, we're not going to legislate or work from that theory. All we have is this moment. And since I don't think we're going to just stop progress on AI, even though obviously some people, like Yudkowsky, would like us to more or less do that, because I don't think that is within the realm of the possible, plausible, or even necessarily desirable, I think we are stuck with what liberalism (and I'm a liberal) always imagines us to be stuck with, which is the imperfect, messy work of ameliorating the problems in society in the moment, knowing that you have neither the wisdom nor the political capability to solve everything or to know the future. Okay. Well, because you brought them up, I wanted to ask you about the leaders of these AI labs and what you make of the fact that they, like you, are really into this word abundance. And according to them, one of the reasons that we should be investing so much energy and time and money in racing towards the creation of AGI is that they think it is going to, as they often say, usher in a world of unimaginable abundance. They think that they're going to be able to do things like unlock a clean, cheap, renewable energy resource, and that that is going to solve the climate change problem. It's going to usher in a far wealthier world. They think it could be the end of scarcity, the end of poverty, right? They even think, on issues that I know you care about, like, I know you're a vegetarian, they think that this is going to be the thing that gets us off of our dependency on factory farms, right? When it comes to jobs, they even say that, yes, it's going to be a rough transition, but many people around the world are working jobs that they don't find a lot of meaning in. 
And this is going to be a liberating force. It's going to help people spend less time doing jobs that they don't want to do and have more time to do the things that they find meaning in. And the theory goes that if you put too many regulations in place, if you delay and delay and delay the arrival of that AGI, you are delaying a technology that could usher in the abundance that you care for. And I guess I want to know from you: do you think that there is a meaningful tension between trying to make sure that we have smart regulations in place and that we're doing things safely, and a willingness to be open to embracing a technology that could actually make people's lives meaningfully better? It's absolutely a tension. Fire can cook your meat, making possible the easy ingestion of calories that make the bigger brain of the human race possible, thus making the human race possible. Or fire can kill you and everybody you know. And AI is a general purpose technology of, at this point, unknowable capacity. And I really mean unknowable, because it might not breach these much more complex problems they are describing, but maybe it will. And if it does keep accelerating, yeah, I don't see any reason it shouldn't be able to accelerate drug discovery, accelerate clean energy research. I don't know about getting rid of poverty. Now, one place where a lot of these people, I think, end up talking like people who have never spent 10 minutes thinking about any social or political problem ever is this: in many cases, we already know how to do things better, but diffusing such things through society is very difficult. And this is what the book Abundance is actually about. And so there's the idea that you would just come up tomorrow with a, you know, different way of doing energy. I mean, we have different ways of doing energy, and yet we are not building them nearly as fast as we could. We know we need more transmission lines. 
The reason we're not building them is not because we don't realize it'd be better if we could move clean energy from one place to another. So what makes it hard to improve human society, the rate-limiting dimensions of it, I have never heard a very good explanation of how AI solves that. I think AI is most promising in the near term where you have fairly few real-world frictions on what you're attempting to create. I want to embody somebody who I think you are in debate with, and that is the investor and inventor and big accelerationist Marc Andreessen. And I find it interesting that both you and he have made arguments using the story of how America failed to embrace the benefits of nuclear energy. How can you solve the sort of climate crisis, the carbon emissions crisis? Well, you have the silver bullet technology you could roll out in the form of nuclear fission today. Richard Nixon, the heavily condemned Richard Nixon, in 1972 proposed something he called Project Independence. Project Independence was going to be the United States building 1,000 new civilian nuclear power plants by the year 1980 and cutting over the entire U.S. energy grid, including the transportation system, cars, everything, and, by the way, geopolitically removing us from the Middle East. So why did Project Independence not happen? Why do we not have, you know, unlimited nuclear power today? The reason is that it was blocked by the political system. And Andreessen, in his case, says that if you look at how we responded to the real dangers that were posed by nuclear war, by nuclear weapons, we let those fears dictate how we would treat nuclear energy, and we used a safetyist mindset that kept us from embracing a resource that would have been a benefit not only to the economy, but to the environment, to the global world order. And he's saying that we can't let the AI doomers, we can't let the Democrats, do that again with AI. 
The first newly designed nuclear power plant in the last 50 years just went online in Georgia. It ran some $20 billion over budget and, you know, it's a story of its own, but at least we got one online. It's the first new nuclear power plant design ever authorized by the Nuclear Regulatory Commission. So we put in place a regulatory regime around nuclear power in the 1970s that all but made it impossible. And of course, you've also pointed to the story of how the progressive politics around nuclear energy played a pretty big role in keeping us from embracing a number of its benefits. So nuclear power is a technology that both held extraordinary promise, maybe still does, and also is one you can really imagine every country wanting to be in the lead on. But the technology got regulated to the point that certainly all of nuclear's advocates believe it has been largely strangled in the crib relative to what it could be. And so I'd love to know: where do you and Marc agree and disagree when it comes to the lesson of how we treated nuclear energy, and using that to extract some wisdom for how we might think about AGI? Look, my line on Marc is that in practice he's going to be proven to be a decelerationist. Because, yeah, I think the nuclear energy and nuclear weaponry example is a good one to look at. And what led, not just in America but functionally in every advanced society on Earth, to tremendous slowdowns in the use of nuclear power is nuclear disasters, or the fear of them. 
And I am of the view that nuclear energy is way overregulated, and that we have lost time and ground and benefits by clamping down on it. But the way in which things get clamped down on is that they're not wisely thought through or regulated from the outset, and then, when things go horribly, scarily wrong, then comes the clampdown. And so Marc's let-it-rip theory of this, to me, is the fastest way to get this whole thing at some point shut down. Because if you unleash this on society absent thoughtful capacity to respond to the problems it's going to create, you have a very, very strong chance of getting the kind of backlash that either destabilizes your society, so you don't get social progress because your society is, like, eating itself, or just at some point something happens and people say, absolutely not. And you do get the kind of long-term pauses. You begin to regulate it the way you would regulate a bioweapon. Now, I don't think that's all that likely here, in part because it's so much more of a dual-use technology than nuclear weapons are. But it would not be good for Marc's vision of the world and what he thinks is possible if we do very little and then we wake up in a couple of years and things start going terribly, horribly wrong. That's not the situation under which you get wise regulation or wise responses from society. But again, the things that harmed nuclear energy are Chernobyl, Three Mile Island and, more recently, Fukushima, which functionally froze nuclear energy in Japan, despite the fact that Japan had been building nuclear energy for a long time, despite the fact that Japan has very important geopolitical reasons it needs to be using nuclear energy, and despite the fact that Fukushima killed very, very, very few people. And so I think the lesson of nuclear energy is not, hey, wouldn't it be great if we just didn't do any regulation? 
The lesson of nuclear energy is that if you have a technology people are afraid of, and then you have huge disasters, really frightening disasters that spin out from that technology, people are not going to do a slow, thoughtful cost-benefit analysis trying to think through, okay, but compared to using more coal plants. They're going to throw the brakes on the fucking thing. What is your understanding of how the parties are starting to divide around the issue of AI? Because one of the things that I found really interesting is that, in our super-partisan political moment, AI is not yet totally partisan. You can find Republicans and Democrats in Congress who both think that we need to support America's AI industry in winning the race against China. And among the populists, you can hear folks on the left, like Bernie Sanders: "Are AI and robotics inherently bad? Absolutely not. But the people who are pushing this transformative revolution are the richest people in the world." And people on the right, like Josh Hawley: "Not content with addicting our kids to their gizmos or amassing fortunes the size of lesser European states, our tech elite has turned with rabid enthusiasm to artificial intelligence." They sound very similar when they're taking a strong stance against AI. "They are not staying up nights worrying about working people. In my view, they want even more wealth and they want even more power." "AI can be used to make workers more productive, to encourage and multiply their labor. That's a good thing. But it must not be used simply to replace them. Most jobs should be reserved for humans." "This is going to transform our world. It's going to transform our economy. It means millions and millions of workers are going to be displaced from their jobs." And I was recently at a dinner and had an interesting interaction with a multimillion-dollar megadonor to the Democratic Party who, like a lot of Democrats, is doing a lot of soul-searching. 
And eventually in the conversation, I asked him: what do you understand to be the Democratic Party's position on AI? And he said to me, I think the Democrats want to adopt whatever position they need to to get elected and get back in power. And so I was wondering, from your position, how do you see the landscape? How do you see the parties dividing up? Do you think that there is right now a winning pitch to be delivered that is, on the one hand, good policy and, on the other hand, good politics? I don't think we know the winning political pitch on AI. And with all due respect to your cynical rich friend, I have spoken to a lot of the Democrats plausibly running in 2028 about this. I've spoken to other Democrats about it. And I find them to be in the place that I find many people to be in, which is wrestling with it and not sure what to do. Aware that we're probably going to have labor market destabilization that's going to be very, very socially dangerous and destabilizing, but not sure they have a good policy to handle it. Aware that you want to figure out some answer to what education should be for and how it should work, but not knowing what that answer is. Is it more AI? Is it no AI? Is it banning it? Is it trying to make new products out of it, so it's like everybody's got their own tutor? I find them genuinely wrestling with this. I don't think people know the right position, in the sense of knowing the politics, because of course there is no right political position yet in 2025, which is where we still are. I mean, I expect the politics of AI to be different in 2028, but unpredictably so. And I think this is the hard thing about trying to wrap a politics around a technology that is moving this fast, where even the people creating it don't fully understand what it is they're doing, and where the primary pressure is coming from China. 
I will say that, in practice, it does appear that AI is splitting the parties in a fairly traditional way, which is to say the Democrats are more comfortable with regulation: more comfortable doing things like export controls on NVIDIA chips, coming out with complicated executive orders about discrimination in AI and, you know, all the things you want firms to be doing in terms of offering up information. And the right, since Trump has gone into office, has turned out to be very, very deregulatory, all the way down to trying to push preemption of state efforts to regulate AI. Sometimes the right likes to suggest it's very federalist and wants a thousand flowers to bloom in American policymaking, but in this case there's been a real effort to make it so states cannot act on AI. So yes, you have some of the odd-bedfellow coalitions of the populists, at least rhetorically, but I think in practice you're seeing a Republican Party that leans towards deregulation and allowing the corporations to move fast, and a Democratic approach that is not highly regulatory but is much more about trying to create the architecture for those regulations to snap into force if they need to. And, you know, I expect that to continue. I would actually say that I expected there to be more continuity between the Biden and Trump administrations on this than there actually was, all the way down to, I'm quite surprised to see the Trump administration now selling advanced chips to China, because I thought if the Trump administration believed in anything, it was retarding China's progress on something like AI. But Trump himself is so all over the place and so easily swayed by corporate leaders that even that, even the sort of anti-China approach of the GOP, or the Trumpist GOP, is not proving itself to be stable. All right. So final question. 
So if you believe, as you said you did, that this is really in flux, that you are expecting things to change in politics just as they're going to change in the technology, how is it that you're forming your views right now? Because a lot of people who've listened to this podcast have been introduced to a sci-fi movie plot for the first time. And on the one hand, you hear people like Geoffrey Hinton or Yoshua Bengio, who are scientists who obviously understand this technology more than me, more than the layman, and they're really concerned. They're quitting their jobs to go warn the world that we need to get ready. They're hearing from people like Eliezer Yudkowsky or Nate Soares, who have decided to dedicate their lives to trying to stop us from taking this AGI step. But they're also hearing compelling arguments coming from, you know, Reid Hoffman and these techno-optimists who really think that we might be able to solve a lot of the world's problems, usher in a better world. How are you, as a thoughtful person, because I think of you as a very thoughtful person, Ezra, wrestling with all of these competing views and trying to formulate your own, instead of just parroting some view of somebody else who votes like you? Like, on a visceral level, what are you doing? How are you feeling when you hear these? How do you form views on something that's seemingly so strange and perhaps so consequential? I mean, partially, I am allowing my views to be soft and fluid and in flux. My views change as the situation appears to me to change. I listen to a lot of people on podcasts and, you know, keep up with Dwarkesh Patel and Tyler Cowen and people who feel very culturally inside this world. I use the systems myself. I don't think there's any better way to have a felt sense of it than to actually be using them, and trying to use them fairly frequently. And, you know, I try to listen and read. 
I mean, I don't think there's a great singular answer, but I think you should be trying to take in a lot of information, some of it coming from people who are very expert, but some of it just from using it yourself and being alert to how other people use it. You know, reading the reporting on how other people are using it, trying to see society a little bit as a whole here. And because for a long time I've been very interested in the philosophy of technology and critics of technology, going back to critics of new media technologies like Marshall McLuhan and Neil Postman and Walter Ong, I do think this fits into a lineage of new mediums. It's not going to be exactly like them, and in ways it may be more consequential, more fundamental, but I do think it fits into that lineage. And we have to think about not just what the content is, but how the medium will change us, how it will change us to have these, at least for now, endlessly subservient, highly capable, always-on, always trying to please little robot companions. I mean, if nothing else, what does it change in a human being to be able to so much more often escape the friction of other human beings and their needs and their desires and their wants, and disappear into the relative comfort of an extraordinarily powerful system that, when you send it a picture of your incredibly messy desk, wants nothing more than to be able to help, and has inexhaustible patience for helping you? What does that do to your sense of all the people in your life who don't have inexhaustible patience for you? You know, I try to remember that before this changes anything else, it's going to change our expectations and change what we're used to. And all the more so for kids, who are not going to be used to anything before this. And so I think that's my framework for it: that the key question of AI isn't just how the AI changes, which is what we spend the most time talking about, but how the AI changes us. 
Technology, the tool, always acts upon the user, and all the more so when the tool is built to mimic and manipulate the user. Thank you for listening. We'll be back soon with more reporting and more perspectives on AI. We'll hear from some skeptics, from those on the political right, and much more. The Last Invention is produced by Longview. To learn more about us and become a supporter, you can click on the link in our show notes or visit us at longviewinvestigations.com, where you can also send us an email and let us know what your AI questions are. Thank you to everyone who's already written in, and to all of you who have shared this podcast with your friends and your communities. We'll see you soon.