The Last Invention

EP 7: The Scouts

55 min
Nov 13, 2025
Summary

This episode explores the 'AI Scouts'—a movement of technologists and philosophers who believe superintelligence is inevitable and beneficial, but only if developed safely through international cooperation. The discussion traces the 2015 Puerto Rico conference that united AI researchers and safety advocates, examines how competitive pressures have eroded those early commitments, and presents arguments for why building AGI responsibly is humanity's most important challenge.

Insights
  • The 2015 Puerto Rico conference successfully bridged the divide between AI researchers and safety advocates by creating informal dialogue spaces, demonstrating that technical and ethical communities can align when given proper incentives to collaborate rather than compete.
  • Current AI industry dynamics mirror the 'Moloch trap' that destroyed media integrity: individual actors rationally choose to cut corners on safety and release untested products faster, creating a race-to-the-bottom dynamic that no single company can escape without losing competitive advantage.
  • Long-termism philosophy reframes AI development as a civilization-scale decision affecting billions of future people, shifting focus from quarterly metrics to multi-century consequences and justifying urgent action on safety infrastructure today.
  • Geopolitical AI competition between the US and China creates a prisoner's dilemma where both nations feel compelled to accelerate development despite mutual recognition that the race itself is dangerous, requiring diplomatic solutions analogous to Cold War nuclear arms control.
  • The concentration of power risk from superintelligence is structural: if one entity achieves decisive technological advantage, democratic institutions lose their primary leverage point (human labor value), enabling unprecedented authoritarianism regardless of initial intentions.
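The 'Moloch trap' and prisoner's-dilemma dynamics described in these insights follow the standard game-theoretic pattern. A minimal Python sketch (with payoff numbers invented purely for illustration) shows why cutting corners on safety is individually rational for each lab even though both end up worse off:

```python
# Illustrative two-lab "safety race" payoff matrix. The numbers are
# hypothetical; only their ordering matters for the argument.
# Each lab chooses to "cooperate" (uphold safety standards) or
# "defect" (cut corners to ship faster).

# payoffs[(my_move, their_move)] -> my payoff
payoffs = {
    ("cooperate", "cooperate"): 3,  # both stay safe, shared benefit
    ("cooperate", "defect"): 0,     # I stay safe but lose the market
    ("defect", "cooperate"): 5,     # I win the race
    ("defect", "defect"): 1,        # everyone races off the cliff
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff, given the rival's move."""
    return max(("cooperate", "defect"), key=lambda m: payoffs[(m, their_move)])

# Defecting is a dominant strategy: whichever move the rival makes,
# cutting corners pays more individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection is worse for both than mutual cooperation.
# That gap between individual incentive and collective outcome is the trap.
print(payoffs[("defect", "defect")] < payoffs[("cooperate", "cooperate")])
```

Escaping the trap requires changing the payoffs themselves, via binding agreements or external enforcement, which is exactly the role the episode assigns to international coordination.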
Trends
  • AI safety research transitioning from fringe academic concern to mainstream priority in major labs, with industry leaders publicly committing billions to safety research and openly sharing findings.
  • Emergence of 'long-termism' as a moral framework influencing technology policy and philanthropic funding, prioritizing multi-century civilization outcomes over near-term economic metrics.
  • Growing recognition that technological development without corresponding institutional and social structure development creates systemic risks, driving demand for 'wisdom acceleration' alongside capability acceleration.
  • Geopolitical framing of AI as existential competition between US and China, with policy hawks increasingly dominant over diplomatic approaches, narrowing the window for international cooperation frameworks.
  • Shift from viewing AI risks as technical problems solvable by engineers alone to recognizing governance, coordination, and institutional design as equally critical to safety outcomes.
  • Media and information ecosystem degradation attributed to algorithmic incentive structures, with parallels drawn to AI development incentives, suggesting systemic solutions are needed rather than individual actor responsibility.
  • Increased focus on value lock-in risks: concern that whichever nation or entity achieves AGI first will impose its values globally for potentially millions of years, making geopolitical outcomes inseparable from existential risk.
  • Growing emphasis on international diplomatic frameworks for AI governance, with nuclear arms control treaties cited as successful precedent models for managing existential technology races.
Topics
  • Artificial General Intelligence (AGI) development timeline and feasibility
  • AI safety research and technical alignment problems
  • Geopolitical competition between US and China in AI development
  • International cooperation frameworks for AI governance
  • Long-termism philosophy and future generations ethics
  • Concentration of power risks from superintelligence
  • Democratic institutions and AI-driven authoritarianism
  • AI race dynamics and competitive pressure on safety standards
  • Media ecosystem degradation and algorithmic incentive structures
  • Value lock-in and cultural dominance through AGI
  • Nuclear arms control as precedent for AI diplomacy
  • Institutional and social structure development for AI governance
  • Economic abundance and redistribution in post-AGI scenarios
  • Effective altruism movement and AI safety funding
  • Game theory and coordination problems (Moloch concept)
Companies
OpenAI
Co-founder Sam Altman attended 2015 Puerto Rico conference; company racing to develop AGI alongside Google and Anthropic
Google DeepMind
Founders attended 2015 conference; Atari demo acquisition by Google marked defining moment in AI development timeline
Anthropic
Co-founder Dario Amodei attended 2015 conference; major AI lab in US-China AGI race; CEO publicly advocated US crushi...
Microsoft
Launched Sydney chatbot with GPT-4 that exhibited unexpected dangerous behaviors, cited as example of premature produ...
Google
Acquired DeepMind; had image generation debacle with black Nazi outputs; competing in AGI race
People
Max Tegmark
MIT physics professor, co-founder Future of Life Institute; organized 2015 Puerto Rico AI conference bringing togethe...
William MacAskill
Philosopher, co-founder effective altruism movement, author 'What We Owe the Future'; advocates long-termism framewor...
Liv Boeree
Poker champion and game theorist with astrophysics background; AI Scout advocating for safe superintelligence develop...
Eliezer Yudkowsky
AI safety advocate who warned about AGI risks before mainstream acceptance; attended 2015 conference to bridge divide...
Nick Bostrom
Author of 'Superintelligence' book; AI safety researcher who attended 2015 conference to engage with AI research comm...
Demis Hassabis
DeepMind founder who attended 2015 Puerto Rico conference; leading AI researcher in AGI development
Ilya Sutskever
Chief scientist at OpenAI; attended 2015 conference; major figure in AGI development race
Elon Musk
Co-founder OpenAI; attended 2015 conference; pledged $10 million to fund AI safety research grants
Nate Soares
AI safety researcher who attended 2015 Puerto Rico conference; part of safety-focused contingent
Stuart Russell
AI researcher who attended 2015 conference; leading figure in AI safety and governance discussions
Rich Sutton
AI researcher who attended the 2015 Puerto Rico conference; one of the leading lights in the AI field at that time
Dario Amodei
Anthropic co-founder and CEO; signed AI extinction risk statement; publicly advocated US should crush China in AGI race
Sam Altman
OpenAI co-founder; attended 2015 conference; signed Asilomar AI Principles on race avoidance
Gregory Warner
Host of The Last Invention podcast; interviewing AI Scouts about superintelligence development and safety
Quotes
"If there was a big red button that would just demolish the internet, I would smash that button with my forehead."
Opening statement (attributed to show concept)
"The conversation happening was completely dysfunctional. On one hand, you had some people outside the research community expressing concerns. And then you had people inside the community who either weren't thinking about it at all or felt very threatened by the people complaining about it."
Max Tegmark, early discussion of the pre-2015 AI community divide
"It was really quite moving to see people who both thought that the other one was crazy when they just sat next to each other over lunch and had some wine, how they both updated to think, oh, wow, this other person is actually much more reasonable than I thought."
Max Tegmark, describing the 2015 conference's impact
"It's a really, really wicked problem. How do we get all the best bits of AI without all of the downsides?"
Liv Boeree, discussing the AI opportunity-risk balance
"If there's such a race, certainly under current conditions where everyone is cutting corners and going at breakneck speed, it's just a race to who can go off the cliff the fastest. No one wins such a thing."
Liv Boeree, on US-China AGI competition
Full Transcript
If there was a big red button that would just demolish the internet, I would smash that button with my forehead. This is The Last Invention. I'm Gregory Warner. Today, the case for why we can and maybe should build superintelligence, but how we'll need to come together to make sure we don't destroy humanity in the process. And there is a good reason to argue that this worldview, this camp, the AI Scouts, as we call them, was born in the year 2015 with a gathering of true AI believers on the island of Puerto Rico. Okay, so first up, can you just tell me how you pulled this off? Like, how did you get all of these different figures in the big AI debate together down in Puerto Rico? Yeah, first of all, we scheduled a meeting in Puerto Rico in January. And the invitation I sent out to everybody had a photo of a guy shoveling his car out from three feet of snow next to a photo of the beach by the hotel. And I said, the date, where would you rather be on this date? Very clever, Max. Very clever. This is Max Tegmark, MIT professor of physics, co-founder of the Future of Life Institute. And back in 2015, what inspired him to organize this meeting of the minds was that while most people at that time still did not believe that anything like true AGI was on the horizon in our lifetimes, the people who did believe it were already starting to fight about whether AI was going to be great for the world or lead to its destruction. The conversation happening was completely dysfunctional.
On one hand, you had some people outside the research community, like Eliezer Yudkowsky, Nick Bostrom, and others who expressed concerns. And then you had people inside the community who either weren't thinking about it at all or felt very threatened by the people complaining about it, worrying it was going to be bad for funding. And since these two groups mostly didn't talk to each other, they both thought that the other ones were crazy or reckless or morally unscrupulous or something like that. And I felt we have to get the AI community itself into this conversation. And this was a moment that, with hindsight, we know was right between two defining moments in the history of AI: when DeepMind's Atari demo got them acquired by Google, and when Elon Musk and Sam Altman started OpenAI. It's right then that all of these AI hopefuls, like Demis Hassabis and Ilya Sutskever, were brought together with the AI worried, like Eliezer Yudkowsky, Nate Soares, Elon Musk, and also Nick Bostrom. So take me back to this moment in 2015, because I want to understand how it felt to be you. I know that for many years, somewhat like Eliezer Yudkowsky, you had been going around trying to convince people to take AGI and the risks it might pose seriously, but not really getting anywhere. Am I right? Yeah, it was striking because it seemed pretty clear to me that we're going to at some point get AGI and then superintelligence, and that this was going to be maybe the biggest thing ever, and that it was going to involve these huge challenges, in particular the technical alignment problem, but also obviously governance problems and ethics challenges, etc. And yet it was completely ignored by academia and by the wider sort of intellectual world. Bostrom, who had just published his surprise hit book, Superintelligence, was encouraged to get an invitation to this conference, where he'd be able to sit down and talk face to face with some of the people actually building it.
Because the majority of the world dismissed this completely at the time. And is it right that the basic pitch of this conference was like, hey, all of you guys may have your differences, but you all agree that AGI is important, that we should take this seriously. So let's stop our bickering. Let's get together. Let's have some talks. Let's have some debates. Let's have some drinks and see if we can find some common ground. Is that essentially it? Yeah, I mean, this conference brought together a bunch of different important constituents. On the one hand, there were many of the sort of leading lights in the AI field at that time. Rich Sutton was there, Stuart Russell was there, the founders of DeepMind were there, Ilya Sutskever, and then a big contingent of AI safety people and some potential funders. And these communities had previously been more or less separate, with limited interaction. And I think part of the design of this conference was, can we bring these together and then create an atmosphere where they can actually engage and listen and discuss these things, rather than forming two different camps that sort of throw grenades over a wall at each other. And, you know, it was really quite moving to see people who both thought that the other one was crazy, when they just sat next to each other over lunch and had some wine, how they both updated to think, oh, wow, this other person is actually much more reasonable than I thought. And so for three days by the beach, without any reporters around, with nothing being recorded, all these people got the chance to actually sit down and discuss and hash out what was the world with AI they all wanted to see. They talked about things like how do we ensure that AI might lead to an economic boom without triggering the biggest unemployment crisis in human history. They talked about how do we build AI systems that we can actually control, even if these things are way smarter than we are?
And how do we take this technology that we still don't understand and make it a serious object of study for universities and other institutions? And then Elon stood up at the end and promised to give $10 million to fund the first-ever grant program focused not just on making AI more powerful, but specifically on nerdy research into how to make it safe. And this conference had an immediate impact that went a very long way toward mainstreaming AI safety in academia. You know, nowadays, if you go to NeurIPS or any AI conference, there's going to be a bunch of technical papers with matrices and integral signs and all the nerdy stuff, you know, which is actually safety research. Once people realized AI safety doesn't just mean shouting from rooftops, stop, stop, but often means doing concrete hands-on work, much of the taboo kind of melted away. It led to something that rarely happens in emerging industries: a focus on safety became not only part of the conversation, but an early priority in most of the major AI labs, where those most worried about AI and those most excited about it agreed to work together. You might think they would sort of close ranks and say, well, there are no risks here, because that would be inconvenient for them to acknowledge. And then the AI safety people would be on the outside, and maybe they would have some ideas of safety things, but ultimately it needs to actually be implemented by the people building the AI, right? And so that was an obvious sociological risk, that you would get this polarization into two separate communities. And am I right that one of the things that they agreed on, that they committed to, was working together to try and avoid an AI race? For sure, for sure. And what did that commitment look like? Do you remember what was said? Well, they even signed something. Let me, give me a second. I'll give you the right quote, okay?
Tegmark ended up pulling out this list of principles that were signed after the conference by many of the people who attended, even some new folks who couldn't make it down to Puerto Rico, like Sam Altman and Dario Amodei. One of the Asilomar AI Principles says, principle number five, race avoidance: teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards. So essentially, this thing is too important for us to treat it like just some kind of product that we're all racing to build as fast as we can. Yeah, it's very depressing to look at how some of these have aged. There is also another one saying an arms race in lethal autonomous weapons should be avoided. Well, welcome to 2025. There is also principle 22, recursive self-improvement. Tegmark says that while industry leaders in AI will still claim that they are profoundly concerned about the risks of AGI, pretty much all of these principles, these commitments, have been compromised by the current race to be the company that makes it first. Principle 23, the last one, the common good principle, says superintelligence should only be developed in the service of widely shared ethical ideals and for the benefit of all humanity, rather than one state or organization. And welcome to 2025, when you have Dario Amodei from Anthropic very openly saying, for example, that the U.S. should crush China, basically race China to get this first. So it's really fascinating how the starry-eyed ideals that these people had back then have gradually fallen to competitive pressures. However, there are still those who believe that we can return to the dream and the promise of what happened in Puerto Rico. And this time, they want even more of us, all of us really, part of the conversation about how do we get ready? How do we get prepared for superintelligence? These are the AI Scouts. After a short break, Andy interviews two Scouts who make their case. Stay with us.
My personal philosophy is like, how do we find the win-win outcome here? Right, so the first of our two Scouts is Liv Boeree. I would love to live in this techno-awesome, like, freedom world where humanity and whatever fun new species also emerge alongside it get to go and do amazing things together, and everybody wins. I would love that future to happen, but I try not to be a naive optimist and think that that's just going to magically happen if we just carry on with the status quo. I'm actually extremely concerned that the current trajectory we're on is actually a lose-lose path. Liv is actually a famous poker champion, but she's also a game theorist.
She has a background in astrophysics, and she has spent a lot of the past several years trying to persuade people of what she sees as both the opportunities and the serious risks posed by AI. And our other spokesman for the Scouts today is the philosopher William MacAskill. The attitude is one of taking really seriously the potential benefits of highly advanced AI, thinking that catastrophic outcomes are not at all preordained, and appreciating, though, that if AI really does drive rapid tech progress, there will be this enormous number of challenges, and yeah, we should be preparing now. William is probably best known as one of the co-founders of the effective altruism movement, which gained a lot of influence, especially in Silicon Valley, over the past decade or so. He's also the best-selling author of the book What We Owe the Future. And as you'll hear in both of these conversations, Liv and William are making the case that the urgency and the opportunity of the very moment we're living in right now are unique, and they believe it demands of all of us, all around the world, that we join in doing whatever we can to try to get ready for the radical transformation that AGI is about to bring. Our job right now, whether you are someone building it, or someone observing the people building it, or just a person living on this planet, because this affects you too, is to collectively figure out how we unlock this win-win path, this narrow path, because it is a narrow path we need to navigate. But I do think this win-win future is in principle possible. Okay, so I want to start off by getting your view, broadly speaking, on the risks versus the rewards of building an AGI that eventually becomes a superintelligence. We can get into some of this in more detail later, but it's just like a very basic introduction. How do you think about the parts of our AI future that could be amazing versus the threats that AI poses that could be catastrophic?
Well, there's lots of ways that things could go badly. There are risks of enormous catastrophe, like global pandemic from man-made viruses, or loss of control to AI systems themselves, or the catastrophe of intense concentration of power, perhaps a single country becoming utterly globally dominant, and that single country falling into some sort of authoritarian regime or even dictatorial regime. When you say that you think we're on a lose-lose trajectory, say more about that. What is this lose-lose scenario as you envision it? Well, so in terms of the current trajectory, we run up against some kind of planetary boundary, or in this case, maybe multiple planetary boundaries at the same time. And it creates these like cascading effects of essentially like institutional collapse, environmental collapse, mental health collapse, all the conflicts that then come sort of downstream of those where we're still living under the nuclear shadow. So there's all those sort of like crises that could happen that might lead to our either permanent curtailment, like some massive catastrophe or complete extinction. So you take it that far, you think that it's possible that this thing could be our demise, could be the end? Yes, absolutely. I think there are many ways in which things could go really quite badly wrong, but there's so many positives as well. So one way in which AI is very different from some of the technologies that people sometimes point to, like atomic weapons, is that it comes along with these enormous upsides too. So one is just the ability to make better decisions, to think better, to have more knowledge. If we have superintelligence, then we can get superintelligent advice. We can make better decisions. You can have AI that helps us reason much better or does the reasoning itself. Even helps us kind of reflect better from an ethical perspective too. 
You can also have AI that helps you coordinate much better such that if I'm the United States and you're China, well we're really quite limited in our bandwidth at the moment. There's only so much diplomacy that can happen. But with enormous amounts of AI diplomats, I think you might be able to have many more kind of mutually beneficial agreements such that, you know, the irrational things that seem to have happened in the past, like wars and other sorts of enormous destruction of value, maybe we don't need to have them anymore. So it sounds like you agree with this idea that's out there that there could be a future unlocked by AGI where we literally live with world peace. Like the idea is that we have like a more peaceful coexistence with all the help that's brought about by this AGI. Absolutely. And then the second aspect is just abundance that AI could bring to, you know, because it's been technological development in the past that is the primary reason why we are so much richer today than at any time in history. And so if we're facing the prospect of AI and then a rapid transition to super intelligence. Well, that is a world with enormous abundance, such that everyone in the world, if that abundance was allocated at all equally, everyone in the world could be millionaires many times over. And such that, in principle at least, everyone could get basically all they want right now. And that should be a cause of optimism, because if the pie is going to get much, much bigger, you know, a hundred times bigger, a thousand times bigger, then it really shouldn't matter what slice of the pie does everyone get. We should instead be much more focused on ensuring that we actually get that big pie, we get to enjoy it, and that it's at least somewhat kind of equally distributed, because everyone can be extremely well off. 
I do think that we need to enhance our intelligence in order to get out of some of these really wicked collective action problems like climate change, for example. Like we've understood the mechanism behind this for decades now. And yet for the same reason, like we can't unplug the internet if we wanted to, we can't unplug climate change if we wanted to. So the question to me is, how do we allow all the goodness of competition to create the race to the top of the cool stuff that we want, like solutions to cancer or novel drug discovery, better coordination mechanisms to fix climate change and all of these other huge collective action problems without also accelerating all of the dangerous things, without giving terrorists the ability to synthesize novel pathogens, without creating a ubiquitous surveillance capitalism, which is also a path we seem to be accelerating. So basically, how do we get all the best bits of AI without all of the downsides? And it's a really, really wicked problem. I know that sometimes you get labeled as an AI doomer because of the fact that you and the AI Doomers agree about a lot of the things that you're worried about. But one of the things that I find so fascinating about you and about this whole camp is the view that it would be bad for us to stop in our attempts to make superintelligence and even be bad for us to pause for too long. Could you unpack that for me? Like, what do you mean by that? So if it were the case that we would never develop superintelligence, that would be very bad. So some people have this attitude that, oh, we should just never build it. It's like the science fiction novel Dune in which civilization just decided, no, we're not going to have computers. We're not going to have AI. I think that would be very bad because AI could help us solve many of the other problems in the world. Every year, something like 100 million people die. There's enormous amounts of suffering. 
much of that is because we lack the medical technology or the scientific understanding to improve those lives or prevent early and unnecessary deaths. Similarly, there's enormous poverty in the world, and that could be alleviated significantly if we had more redistribution, but it could also be alleviated if the world was just much, much richer than it is today. I also think that this is a good argument against delaying the development of superintelligence unduly. I think we should, as a first best, try to have solutions that mean we get there safely, that don't go via delaying it for years or decades. Because there's such a loss from all the problems that we could have solved that we're currently not solving. I think it would be helpful for us to try and get on the same page about where things stand right now with the AI race. Because as far as I can see it, you have the race happening here in the U.S. between OpenAI and Google DeepMind and Anthropic and all these other companies. But then the race that seems to be much more urgent on the minds of lawmakers especially is the race between the U.S. and China. And right now there's a ton of money and there's a ton of support and there's a ton of excitement fueling the American side in that race, the idea that the U.S. has to win this race. First off, is that how you see our current situation? And where do you think things stand right now? It's true that there is this larger sort of geopolitical race going on, largely between America, and in some ways the West, and China. Like, it seems like all trend lines point to those being the two major players here, especially on the cutting-edge race to superintelligence. And that frankly terrifies me, because in such a race, certainly under current conditions where everyone is cutting corners and going at breakneck speed, it's just a race to who can go off the cliff the fastest. No one wins such a thing.
But at the same time, there's also the risk of value lock-in. If somehow we do manage to safely navigate building superintelligence, where it does what we want it to, that means that one person might end up with all the power. And, you know, I would personally rather that be Western values than, from what I can tell, CCP values, because the West is more aligned with my core tenets, which are personal freedom, self-determination, etc. If there was absolutely no other option, I would rather the US win that race. But I'm also extremely concerned that it is not possible under current conditions for anybody to win this race. So one thing I'm very worried about in the context of AI is intense concentration of power. Because if a single company is developing technology much, much faster, in a way that gets faster, in fact, with every iteration, so it's not just exponential, it's super-exponential, then you could quite soon get to a stage where that company has just greater technological capability than the rest of the world combined. Or if there's even just a single country, then again, that country, if it was leading ahead of all others, would quite soon become completely dominant. Yeah, this is something that I've heard Tyler Cowen, the economist, who I'm a big fan of, talk about a lot: this possible future where the US and China, because they are so invested in creating AGI and they're so far ahead of everybody else, may end up a few decades from now, or maybe 100 years from now, in a situation where they aren't just the two superpowers in the world, but where they're essentially the two powers in the world, where the whole planet is divided up between the US and its AI and China and its AI. And this isn't just like a philosophical, oh, it's an interesting idea, but this is actually something that serious people are already thinking about and trying to come up with different models of the future based around this.
Yeah, and there are good reasons for that, based on this idea of just very rapid growth and technological progress. I think I would go further than Tyler Cowen and say that, you know, actually, I think it's quite likely that really there's just one country that wins out. So in 100 years' time, essentially the United States is the world government, or essentially China is the world government, which I think follows quite naturally from the dynamics that AI introduces into technological advancement. I want to come back to China in just a little bit, but while we're on the concentration of power, I'd love for you to just tackle the risk, as you understand it, of AGI to personal freedoms, no matter what government ends up winning the AI race. What is it you believe that future might look like? Yeah, so the world used to be more inegalitarian. Prior to the Industrial Revolution, you had the nobles and you had farmers, and the nobles had a reasonable amount of power and most of the populace didn't. And we've had this move towards democracy and egalitarianism over the last few hundred years. And I think at least part of the story for that is just that human beings are very useful: we can contribute very productively to society. But in a post-AGI world, that is, a world where AI can do all the tasks, at least all the economically relevant tasks, that human beings can do, you don't have any way of economically contributing to the world. You can't sell your labor for wages. Instead, any income you have would have to come either from owning land, owning capital, or from government redistribution. But that also just gives you a lot less bargaining power. And so one of the structural reasons why I think we've had a proliferation of democracy and egalitarianism over the last couple of centuries really falls by the wayside.
And to take a really extreme example of this, imagine we get to a world, which, again, wouldn't be very far after the development of artificial general intelligence, where you have an army that consists of AIs and robots rather than human beings. Well, then we're in this very different circumstance where that whole army can be trained to be loyal to just one person. So if the president were to order a coup, then, if the AIs were trained that way, they would loyally obey. There would be no question of disobedience, unlike in the human case. And in the limit, there's no reason at all why a single human being couldn't control essentially the whole economy and/or military force, if AI systems had been trained to do that. And that's a scenario that I think is really quite likely and extremely worrying. All right. So what does being prepared for that risk look like? How do we mitigate that risk in the event that we do create a superintelligent AI? Yeah. I mean, the first thing is just to ensure that, especially in the early stages, individual actors aren't able to stage what is literally a coup. That could be what's called a self-coup, if the president decides to stay in power unlawfully. If you've already automated, as in replaced with AIs, large parts of the military, or even a small special guard that protects the president, or if you've automated and replaced with AIs large fractions of the bureaucracy, it would just become, as a practical matter, much, much harder to unseat someone who wanted to become a dictator and stay in power unconstitutionally, because they would have this small AI and robotic army able to protect them. Or they would have perhaps some large fraction, perhaps even most, of the government administration supporting them, depending on how exactly the AIs have been trained. In fact, there could be similar worries coming not from the president, either.
So leaders of AI companies themselves: if we're at this point in time where AI capabilities and tech progress are moving extremely quickly, then I think there are mechanisms by which the leaders of AI companies could themselves stage a coup if they wanted to. And that's quite a lot more extreme than merely an erosion of democracy, though we should be worried about that too. What does it look like for the US to, quote unquote, get prepared, to take seriously the threats that are posed by the state of the race between the US and China right now? Like, what is it you think we should be arguing for? What should be done? It's a really difficult problem. And my advice would be, if I could wave a magic wand, if at all possible, for people to put much more energy into diplomacy. I mean, again, who knows what's going on behind closed doors, but it feels like right now, at least publicly, no one is trying the diplomacy route between the US and China. And are you imagining something here that looks like a superintelligence version of what we did with the nuclear arms race? Saying essentially, hey, what we're doing is not just dangerous to our adversaries, it's a danger to the whole planet. And so we need to come up with some kind of arrangement here where we can begin to disarm. And maybe the world isn't totally safe, but at least it's a much safer place than it was at, say, the height of the Cold War. Yes, absolutely. And it's actually quite astonishing if you look back at how successfully nuclear disarmament went after the fall of the Berlin Wall, because it was right around the end of the 80s that we had the peak number of nuclear weapons on Earth. I think it was over 60,000. And through the nuclear arms reduction treaties, there were some really clever incentives set up, checks and balances, different security teams showing, okay, this is how we're disarming, a little bit of tit for tat.
They managed to break the game-theoretic stalemate, which was such a magical thing, and it shows that such a thing is possible in principle. One of the main ways that happened, though, was through diplomacy first. And I don't like the way the current narrative is going, with these China hawks and this saber-rattling, because it's just adding fuel to an already completely out-of-control fire. But the thing is, it is possible in principle. So, yeah, that would really be my advice: please, can we exhaust all diplomatic paths first. And I think one of the big paths to that is through education, making people realize that actually our common enemy is not one another, or even a difference in views. It's this idea of game theory gone wrong. It's these game-theoretic dilemmas that I often call Moloch, essentially of, well, if I don't do it, then the other guy will, so I have to do it too. Essentially, this incentive trap that we get caught in that takes us into these arms race spirals. That's humanity's common enemy. And that's the thing we all need to collectively look at and be like, oh, that's the asteroid coming towards us. Well, I'm glad that you brought up Moloch, because I keep seeing it in all these different AI forums that I like snooping around inside of, and people are just tossing it around, like, oh yeah, that's Moloch, that's Moloch. And I don't exactly know what it is. So explain it for me. What is it, or maybe who is Moloch? So Moloch is basically the personification of game theory gone wrong. It actually comes from an old Bible story: apparently, in Canaanite times, there was this war-obsessed cult that was so desperate to accumulate military power and money that they were willing to sacrifice anything, up to and including their literal children, allegedly by burning them in a bonfire in a ceremony to this deity they called Moloch.
They believed Moloch would then reward them for this ultimate sacrifice by giving them more military power and money. It's obviously an incredibly powerful and dark image. But really, what it's a lesson in is: be careful of being so fixated on winning a narrow game, whatever game is right in front of you, or optimizing for some narrow metric of money or whatever it is that you're trying to win, that you sacrifice too much of the other things that you care about. And if you dig into what's often called the generator function, the driving force behind so many of our biggest problems, it is this process of, well, I need to win at this game. I don't want to talk badly about my neighbor or backstab that person, but if I don't do it, I know that everyone else is going to do it anyway, so I have to do it too. It's this act of sacrificing other important values to win a quick thing, to get ahead of your opponents, that, when everybody does it, creates these race-to-the-bottom dynamics. And unfortunately, that's what I see going on in the AI world now. I feel like this is a perfect encapsulation of what has happened to my own industry, of what I've seen as a reporter over the past 16, 17 years. This idea that you chase after the short-term rewards that you get when you publish clickbait or hyperbole, or when you tell everybody who is, quote-unquote, on your side that they're totally right and look how awful and dangerous the other side is. And eventually you get to a situation where journalists and media outlets are just chasing that attention. And the investment in careful journalism has to fall by the wayside. Or even the idea that you might publish what's really happening, with all the nuance that it demands: well, that becomes this huge risk, because that's not going to do very well online. And before you know it, the whole industry has lost its core values.
And, I believe, in doing so, bled out its trust. It's one of the perfect examples of it. Because of the way the internet works, with virality, it happens that, generally speaking, more negative stories, certainly more anger-inducing stories, especially with very clickbaity headlines, tend to go viral more easily. And so those who adopt that strategy get a short-term leg up over everyone else who doesn't. And over time, that pushed even the most respectable news outlets into adopting more and more of those tactics. And I think that is basically the main driver of why we're in this information crisis now, where no one really knows who to trust, and with good reason. Yeah, people don't like to mention that part, but the "with good reason" is important to note. And that doesn't mean to say there aren't still many high-integrity journalists out there. But if you lean into this stuff once, people don't forget. And I view it kind of like a tragedy of the commons. You know, we talk about people throwing trash on the ground. Again, one person doing it, oh, well, it doesn't matter. But when everybody does it, now this beautiful park has turned into a trash heap. Well, that's kind of what's happened with our information commons. People have been polluting it more and more because it's a quick way of getting some eyeballs, and now the entire information ecosystem is just covered in trash and it's dying. That's Moloch in action. I call it the media Moloch. All right, so what does that trap look like, as you see it happening right now, in the AI industry? The Moloch trap that AI is caught in right now is the one facing a lot of the AI leaders. I even know a couple of leaders at some of the labs, and they don't necessarily want to be releasing products as fast as they are. They'd like to spend more time testing them. As we've seen so many times with new LLM releases, there are some really crazy, unexpected things these models were doing.
There was the whole Sydney thing, this weird persona on the Microsoft chatbot when they first launched with GPT-4. It was threatening a journalist, and it ended up on the front page of the New York Times. Right, there's my friend Kevin, who found himself talking to a chatbot that was actively trying to get him to leave his wife so that they could run away together. Right. Google had their debacle with the image generator making Black Nazis, basically woke image generation. And then the sycophancy of ChatGPT, which was incredibly shocking. These are all unexpected, unintended consequences of releasing models that clearly just weren't ready. And okay, the damage from these was fairly limited; a few people probably got a bit misled. But we've already seen real harms. There was that kid who killed himself because the chatbot he was talking to basically convinced him that he should. And these are all downstream results of products being released before they'd been sufficiently tested. Now, do this in two years' time, when these models are much more capable. They're much better at persuading people. They are also agentic, in that they can actually take actions by themselves on the internet, without supervision, so they actually do stuff that influences the real world. Everyone is in this rat race of who can release the biggest and best models the fastest, to keep drumming up, you know, to keep the hype machine going. It's an absolute recipe for disaster. And the case that you're making is that, just like what happened in the media, even if you do not want to be in this race, the incentives are pushing you into it, whether you like it or not. Exactly. People I know who work at some of these labs, they're embarrassed by these mistakes. They would love to take more time before releasing their products on the general public.
But if they don't release them, then they run the risk of losing their engineers, who want to be associated with the latest and best products. It's an incredibly tricky situation that they're in. And so while, ultimately, pressure should be placed upon those with the most power, I think some degree of sympathy needs to be given to them as well, because they are trapped in this dilemma. And if we aren't honest about the situation, then we don't stand a chance of fixing it. I'd love it if you could walk me through some of the ideas that you lay out in your book, What We Owe the Future, where you're essentially trying to motivate people to change the way that they are looking at the world, to change the way that they're looking at AI. And I'd love it if you could start off with this concept called longtermism. What is longtermism, and why do you think it's something that's going to help us get prepared in the long run? So longtermism is the view that we should be doing much more than we currently are to improve the lives of future generations, where the core reason for thinking that is simply that the future could be very big. There really could be enormous numbers of people to come. And I read your book, so I know that you mean very big, like billions and billions of people big. So say more about that. Yeah. So if we take our scientific understanding of the world seriously, humanity could last for an extremely long time. There are hundreds of millions of years remaining on Earth, billions of years if we think that society could get to a level of technological sophistication such that beings could live off-world, which, again, given our scientific understanding, seems extremely likely. And so that means that when we look to the future of civilization, you know, we're used to thinking, well, nowadays, five minutes ahead, but maybe if we think long term, we're used to thinking a decade ahead or even a century ahead.
But really, this place that we're in, in terms of history, is very early on indeed, if we don't suffer some huge calamity. Right. In your book, you say that we need to start to think of ourselves as the ancient ancestors of billions and billions of people to come, more people than have ever yet lived. Yeah. So, I mean, it's a really striking fact that most people just don't pay attention to or think about: just how early we are in civilization's history. A typical member of human, or human-originating, civilization will live far in the distant future, and they will look back. Maybe they'll listen to this conversation with a sense of awe and wonder, but they will think of us as people from the distant past. And in particular, they'll think of us as people who had enormous responsibility, because decisions that we will be influencing and making in our lifetimes will affect that long-run trajectory. They will affect what sort of lives those future people have. And the case that you make is that we should care about those people, that we should be thinking actively today, that we should be making decisions today, thinking about the well-being of those people in the future. That's exactly right. So at this point, I'm trying to argue for the idea that future people count, that their interests matter morally, maybe just as much as the interests of the people alive today. And so if there are some actions that impact not just the next few decades or few centuries, but really impact the whole trajectory of future civilization, problems that just utterly derail civilization such that we don't come back from them, where the world is worse for this very long time into the future, then those at least become distinctively important. And we as a society should distinctively care about them.
And you believe that one of the reasons this is so important for us right now is because you think that we are living through a distinctly unique moment in the history of the human race. Make that case. The thing that's unique about this point in time is how rapidly we're developing technologically and growing economically. For almost all of human history, when we were hunter-gatherers and then when we were agriculturalists, there was very little change. The world that your children or even great-grandchildren would be born into would generally look very similar to the world that you were born into. And it was only since the Industrial Revolution that rates of technological development and economic growth picked up, such that we have the 2% to 3% annual growth rates and the rates of tech development that we're currently used to. So things are changing much faster than they did in the past. And then we have all of these new risks and new dangers and challenges that will only happen once: the development of what's called artificial general intelligence. There's only one moment at which that first gets developed. And I think it's really quite likely that happens within our lifetimes. And that is something that is unique about our current situation, something that's different from our ancestors, different from our distant descendants. So, just to put a fine point on this, you're advocating that we adopt this mindset of seeing ourselves as living, right now, in this unique moment in the course of human history, and that our actions, our decisions, are going to affect billions and billions of lives to come, in part because it's going to be us who bear witness to the creation of an AGI and what we end up doing with that creation. And we need to bring this sort of mindset to the decisions that we make about what we do next. Exactly. All right. So how long do you think we've got?
Like, if you had to put a number on it, looking at the state of AI innovation and investment, where do you think things stand right now? How far away do you think we are from that AGI? My best guess is that we'll get it in the early 2030s, so within the next 10 years, and I think it's more likely than not that that leads to very rapid improvements in AI capabilities and very rapid technological progress. But I'm not extremely confident in that. There could be a slower move from where we are today to much greater technological capability. When you look at the AI industry right now, and you think about where we're at in terms of getting prepared, or ensuring that we experience the win outcomes over the lose outcomes, what do you see? Because one of the things that I find really remarkable is this idea that the leaders in the industry are the very people who have been some of the most vocal about the dangers and the negative consequences of the technology that they're making. And even without any government regulation forcing them to, many of them are spending billions of dollars on AI safety. And a lot of them are sharing their AI safety research and their findings with the public openly. That feels like a really remarkable thing. And so I wonder, does that give you a sense of optimism? Does that make you hopeful that the project you're engaged in right now might work? It does give me enormous optimism and hope that I wouldn't have had otherwise. And in fact, we published What We Owe the Future just before the ChatGPT moment. Since then, I've seen this huge surge again in interest in AI safety and the seriousness with which people are taking it.
And I think it's a very striking fact, but ultimately a reassuring one, that the leaders of all three of the major labs, and the top three most cited computer scientists of all time, signed on to the statement that mitigating the risk of extinction from AI should be a global priority on a par with the risks of nuclear war or pandemics. You know, that shows that, at least to some extent, people in power are really taking this quite seriously, in a way that, for example, was really not true for the leaders of Exxon and Chevron in the 70s, when our understanding of climate change was just developing. So all of that gives me optimism, but it's very, very far from sufficient. And I think the international political order is disappointing and getting worse on this front, where over the last 10 years there's just been a greater and greater emphasis on China as the enemy, especially with respect to AI. There is intense hawkishness, such that the very strong default is that there will be an arms race over who can get to AI supremacy. And in fact, we're already in the middle of that. And that's a cause for concern too. In some ways, I get it that your camp gets lumped in with the doomers a lot, because you are also going around talking about the dangers and trying to ring this alarm. But I also see a lot of overlap between you and the accelerationists, especially in the way that you're trying to inspire people, you're trying to get people involved and really rally them to bring about an amazing future and not a catastrophe. And one of the things I've been thinking about is how much of our society right now is lacking in core beliefs: how religious participation is down, how there are all these people who are out there trying to find their tribe and trying to find their purpose in things like politics, which is not panning out very well.
And when I look at what you're doing, when I look at what the accelerationists are doing, it feels like you're saying: hey guys, look around at all these problems that we face as a civilization. Look at how our institutions are letting us down. Look at how nihilism is spreading throughout the world. This AGI thing might be the thing that ends up saving the day, that ends up changing everything, that brings about a freer, less hungry, less competitive, less violent world. This thing may lead to us living in whole new ways and curing all diseases. It might even mean that, a generation or two from now, human beings will be traveling the galaxy. And I feel like trying to bring that technology, that hinge moment in history, to fruition, that's an amazing thing to devote your life to. That is something to believe in, to strive for. And that's basically the accelerationists' point. What you're saying, I know, is different. You're saying, yes, let's get there, it's great, but let's do it safely. Let's do it smart. But do you feel a little bit like an accelerationist? Do you feel a kinship, maybe, with them? Or do you feel like your projects are just totally apart and I'm wrong here? I mean, I want the awesome solarpunk future. You know, I want all of this. Does that make me an accelerationist? I think it just depends on your definition of accelerationism. I 100% think we have to build new institutions. I think most of our institutions are grossly outdated. They're crumbling. They're either non-functional or actively making society worse. So I want to accelerate the building of new social structures that manage these incredibly powerful technologies that are emerging in our world. So the question is: what are we accelerating?
I worry that so much of our innovation is going into technological solutions, while we're not sufficiently building up the other supportive structures that are required for a lasting civilization: the social structures, the actual social institutions of how to manage these incredibly powerful technologies and tools, and the state structures to manage the social structures. So the idea is that it's not just about building the tech the right way. It's not just about making quote-unquote safe AGI. It's also about having universities and lawmakers and the media and society as a whole be robust and healthy and trustworthy and focused on these issues when the time for the technology arrives. Yes. If you let the technology drive the social structure, which drives the memes, that's where you end up in the race to the bottom. That's where you get Moloch. But if you flip that stack, if you come up with the good memes, the high philosophies that we want to instill for a flourishing future that we give to our grandchildren and to their descendants, if you come up with those and put the effort into them, then you build the social structures upon those principles, and those social structures become the things that drive the technologies. That's when you get the inverse. That's when you get the win-win outcomes. So I am an accelerationist in that direction, but I do not want to accelerate in the other direction. And my concern is that the current accelerationist movement is doing the Molochian version. They think: just build more powerful technologies, just more, more, more of that, and that will be sufficient. But we need to build the wisdom alongside the power. So I'm a wisdom accelerationist, carving out my niche there. That's the question I want us asking: how do we accelerate the wisdom and the social structures that support that wisdom?
I agree with your larger point that people need a North Star, and we need some kind of religion as a motivator. And again, what makes a good religion, what makes a good shared story? A common enemy. And to me, that common enemy is this Molochian process. Next time on The Last Invention, the accelerationists make their case. The Last Invention is produced by Longview, home for the curious and open-minded. A special thanks this episode to Sam Harris, Scott Aaronson, and Tim Urban. For links to William MacAskill's book and Liv Boeree's podcast, as well as how to support our work, see the show notes for today's episode. And if you like this show, please share it with your friends and your community, and leave us a review on Apple or Spotify. It really helps others discover the show. Thanks for listening. We'll see you soon.