Idle money lies in your current account picking crumbs out of its belly button wondering, should I eat them? But when you start investing with Monzo, your money's always busy. You turn on regular investments, it invests your spare change and tops up your stocks and shares ISA. It even helps you make sense of risk and return. Monzo, the bank that gets your money moving. You could get back less than you invest. Monzo current account required. UK residents 18 plus. T&Cs apply. Hello, Matt here. Before we get into this week's episode, I wanted to pop in real quick to let you all know about another podcast from our team here at Longview called Reflector. On Reflector, we mix together historical backstories with on-the-ground reporting to tell context-obsessed stories about the beliefs that are shaping the world. To find it, just search for Reflector on whatever app you are using to listen to this right now. This is The Last Invention. I'm Gregory Warner, and our story begins with a conspiracy theory. So Greg, last spring, I got this tip via the encrypted messaging app Signal. This is reporter Andy Mills. From a former tech executive. He was making some pretty wild claims. And I wanted to talk to him on the phone, but he thought his phone was being tapped. The next time I was out in California, I went to meet with him. I'm really kind of contending with who I am in this moment. Up until a few months ago, I was an executive in Silicon Valley. And yet here I am sitting in a living room with you guys talking about what I think is one of the most important things that needs to be discussed in the whole world, right? Which is the nature in which power is decided in our society. And he told me the story that a faction of people within Silicon Valley had a plot to take over the United States government. 
And that the Department of Government Efficiency, DOGE, under Elon Musk, was really phase one of this plan, which was to fire human workers in the government and replace them with artificial intelligence. And that over time, the plan was to replace all of the government and have artificial intelligence make all the important decisions in America. I have seen both the nature of the threat from inside the belly of the beast, as it were, in Silicon Valley, and seen the nature of what's at stake. Now this guy, his name is Mike Brock, and he had formerly been an executive in Silicon Valley. He'd worked alongside some big-name guys like Jack Dorsey, but he'd recently started a Substack. And he told me that after he published some of these accusations, he had become convinced that people were after him. I have reason to believe that I've been followed by private investigators. For that and other reasons, I traveled with private security when I went to D.C. and New York City last week. He told me that he had just come back from Washington, D.C., where he had met with a number of lawmakers, including Maxine Waters, and debriefed them about this threat to American democracy. We are in a democratic crisis. This is a coup. This is a slow-motion soft coup. And so this faction, who is in this faction? What is it? Like the Masons or something? Or is it like a secret cult? Well, he named several names, people who are recognizable figures in Silicon Valley. He claimed that this quote-unquote conspiracy went all the way up to JD Vance, the vice president. And he called the people who were behind this coup the accelerationists. The accelerationists. It was a wild story. Yeah. But you know, some conspiracies turn out to be true. And it was also an interesting story. So I started making some phone calls. I started looking into it. And some of his claims I could not confirm. Maxine Waters, for example, did not respond to my request for an interview. Other claims started to somewhat fall apart. 
And of course, eventually DOGE itself somewhat fell apart. Elon Musk ended up leaving the Trump administration. And for a while, it felt like, you know, it was one of those tips that just doesn't go anywhere. But in the course of all these conversations I was having with people close to artificial intelligence, I realized that there was an aspect of his story that wasn't just true, but in some ways, it didn't go quite far enough. Because there is indeed a faction of people in Silicon Valley who don't just want to replace government bureaucrats, but want to replace pretty much everyone who has a job with artificial intelligence. And they don't just think that the AI that they're making is going to upend American democracy, they think it is going to upend the entire world order. The world as you know it is over. It's not about to be over. It's over. I believe it's going to change the world more than anything in the history of mankind, more than electricity. But here's the thing. They're not doing this in secret. This group of people includes some of the biggest names in technology, Bill Gates, Sam Altman, Mark Zuckerberg, most of the leaders in the field of artificial intelligence. AI is going to be better than almost all humans at almost all things. A kid born today will never be smarter than an AI. It's the first technology that has no limit. So wait, so you get a tip about like a slow-motion coup against the government and then you realize, no, no, this is not just about the government. This is pretty much every human institution. Well, yes and no. Many of these accelerationists think that this AI that they're building is going to lead to the end of what we have come to think of as jobs, the end of what we traditionally thought of as schools. Some would even say this could usher in the end of the nation state, but they do not see this as some sort of shadowy conspiracy. They think this may end up literally being the best thing to ever happen to humanity. 
I've always believed that it's going to be the most important invention that humanity will ever make. Imagine that everybody will now in the future have access to the very best doctor in the world, the very best educator. The world will be richer and we can work less and have more. This really will be a world of abundance. They predict that their AI systems are going to be the thing that helps us to solve the most pressing problems that humanity faces. Energy breakthroughs, medical breakthroughs. Maybe we can cure all disease with the help of AI. They think it's going to be this hinge moment in human history, where soon we will be living to maybe be 200 years old, where maybe we'll be visiting other planets, where we will look back in history and think, oh my god, how did people live before this technology? It should be an era of maximum human flourishing where we travel to the stars and colonize the galaxy. I think a world of abundance really is a reality. I don't think it's utopian, given what I've seen that technology is capable of. So these are a lot of bold promises, and they come from the people who are selling this technology. Why do they think that the AI that they are building is going to be so transformative? Well, the reason that they're making such grandiose statements and these bold predictions about the near future, it comes down to what it is they think that they're making when they say they're making AI. This is something that I recently called up my old colleague, Kevin Roose, to talk about. Kevin, how is it that you describe what it is that the AI companies are making? Am I right to say that they're essentially building a super mind, like a digital super brain? Yes, that is correct. He's a very well-sourced tech reporter and a columnist at The New York Times. Also co-host of the podcast Hard Fork. And he says that the first thing to know is that this is a far more ambitious project than just building something like chatbots. 
Essentially, many of these people believe that the human brain is just a kind of biological computer, that there is nothing special or supernatural about human intelligence, that we are just a bunch of neurons firing and learning patterns in the data that we encounter. And that if you could just build a computer that sort of simulated that, you could essentially create a new kind of intelligent being. Right, I've heard some people say that we should think of it less like a piece of software or a piece of hardware and more like a new intelligent species. Yes, it wouldn't be a computer program exactly. It wouldn't be a human exactly. It would be this sort of digital super mind that could do anything a human could and more. The goal, the benchmark that the AI industry is working towards right now, is something that they call AGI, artificial general intelligence. The general is the key part, because a general intelligence isn't just really good at one or two or 20 or 100 things, but like a very smart person can learn new things, can be trained in how to do almost anything. I guess this is where people get worried about jobs getting replaced, because suddenly you have a worker, like a lawyer or a secretary, and you can tell the AI to learn everything about that job. Exactly. I mean, that is what they're making, and that's why there's a lot of concern about what this could do to the economy. I mean, a true AGI could learn how to do any human job, factory worker, CEO, doctor, and as ambitious as that sounds, it has been like the stated, on-paper goal of the AI industry for a very long time. But when I was talking to Kevin Roose, he was saying that even just a decade ago, the idea that we would actually see it within our lifetimes, that was something that even in Silicon Valley was seen as like a pie-in-the-sky dream. People would get laughed at inside the biggest technology companies for even talking about AGI. 
It seemed like trying to plan for building a hotel chain on Mars or something. It was like that far off in people's imagination. And now if you say you don't think AGI is going to arrive until 2040, you are seen as like a hyper-conservative, basically a Luddite, in Silicon Valley. I know that you are regularly talking to people at OpenAI and Anthropic and DeepMind and all these companies. What is their timeline at this point? When do they think they might hit this benchmark of AGI? I think the overwhelming majority view among the people who are closest to this technology, both on the record and off the record, is that it would be surprising to them if it took more than about three years for AI systems to become better than humans at at least almost all cognitive tasks. Some people say physical tasks, robotics, that's going to take longer. But the majority view of the people that I talk to is that something like AGI will arrive in the next two or three years, or certainly within the next five. I mean, holy shit. Holy shit. That is really soon. This is why there has been such insane amounts of money invested in artificial intelligence in recent years. This is why the AI race has been heating up. Right. This is to accelerate the path to AGI. But this has also really brought more attention to this other group of people in technology, people who I personally have been following for over a decade at this point, who have dedicated themselves to trying everything they can to stop these accelerationists. The basic description I would give to the current scenario is: if anyone builds it, everyone dies. Many of these people, like Eliezer Yudkowsky, are former accelerationists who used to be thrilled about the AI revolution and who for years now have been trying to warn the world about what's coming. I am worried about the AI that is smart enough. I am worried about the AI that builds the AI that is smart enough to kill everyone. There's also the philosopher Nick Bostrom. 
He published a book back in 2014 called Superintelligence. Now, a superintelligence would be extremely powerful. We would have a future that would be shaped by the preferences of this AI. Not long after, Elon Musk started going around sounding this alarm. I have exposure to the most cutting-edge AI and I think people should be really concerned about it. He went to MIT. I mean, with artificial intelligence, we are summoning the demon. Told them that creating an AI would be summoning a demon. AI is a fundamental risk to the existence of human civilization. Musk went as far as to have a personal meeting with President Barack Obama, trying to get him to regulate the AI industry and take the existential risk of AI seriously. But he, like most of these guys at the time, they just didn't really get anywhere. However, in recent years, that has started to change. The man dubbed the godfather of artificial intelligence has left his position at Google. And now he wants to warn the world about the dangers of the very product that he was instrumental in creating. Over the past few years, there have been several high-profile AI researchers, in some cases very decorated AI researchers. This morning, as companies race to integrate artificial intelligence into our everyday lives, one man behind that technology has resigned from Google after more than a decade. Who have been quitting their high-paying jobs, going out to the press, and telling them that this thing that they helped to create poses an existential risk to all of us. It really is an existential threat. Some people say this is just science fiction. And until fairly recently, I believed it was a long way off. One of the biggest voices out there doing this has been this guy, Geoffrey Hinton. He's like a really big deal in the industry, and it meant a lot for him to quit his job, especially because he's a Nobel Prize winner for his work in AI. 
The risk I've been warning about the most, because most people think it's just science fiction, but I want to explain to people it's not science fiction, it's very real, is the risk that we'll develop an AI that's much smarter than us, and it will just take over. And it's interesting, when he's talking to journalists trying to sound this alarm, they're often saying, yes, we know that AI poses a risk if it leads to fake news, or like what if someone like Vladimir Putin gets a hold of AI? It's inevitably, if it's out there, going to fall into the hands of people who maybe don't have the same values, the same motivations. And he's telling them, no, no, no, this isn't just about it falling into the wrong hands. This is a threat from the technology itself. What I'm talking about is the existential threat of this kind of digital intelligence taking over from biological intelligence. And for that threat, all of us are in the same boat, the Chinese, the Americans, the Russians, we're all in the same boat. We do not want digital intelligence to take over from biological intelligence. Okay, so what exactly is he worried about when he says it's an existential threat? Well, the simplest way to understand it is that Hinton and people like him, they think that one of the first jobs that's going to get taken after the industry hits their benchmark of AGI will be the job of AI researcher. And then the AGI will be working 24/7 on building another AI that's even more intelligent and more powerful. So you're saying AI would invent a better AI. And then that AI would invent an even better AI. That is one way of saying it. Yes, exactly. AGI now becomes the AI inventor, and each AI is more intelligent than the AI before it, all the way up until you get from AGI, artificial general intelligence, to ASI, artificial super intelligence. The way I define it is: this is a system that is single-handedly more intelligent, more competent at all tasks, than all of humanity put together. 
I've now spoken to a number of different people who are trying to stop the AI industry from taking this step, people like Connor Leahy. He's both an activist and a computer scientist. So it can do anything the entirety of humanity working together could do. So for example, you and me are generally intelligent humans, but we couldn't build semiconductors by ourselves. But humanity put together can build a whole semiconductor supply chain. A superintelligence could do that by itself. So it's kind of like this. If AGI is as smart as Einstein, or way smarter than Einstein, I guess. An Einstein that doesn't sleep, that doesn't take bathroom breaks, right? And that never rests, and has memory for everything. Exactly. ASI, that is smarter than a civilization. A civilization of Einsteins. That's how the theory goes, right? Like you have the ability now to do in hours or minutes things that take a whole country, or maybe even the whole world, a century to do. And some people believe that if we were to create and release a technology like that, there'd be no coming back. Humans would no longer be the most intelligent species on earth. And we wouldn't be able to control this thing. By default, these systems will be more powerful than us, more capable of gaining resources, power, control, etc. And unless they have a very good reason for keeping humans around, I expect that by default they will simply not do so. And the future will belong to the machines, not to us. And they think that we have one shot, essentially. One shot. Like one shot meaning we can't update the app once we release it. Once this cat is out of the bag, once this genie is out of the bottle, whatever that was. Once this program is out of the lab, that's it, right? Basically, unless it is 100% aligned with what humans value, unless it is somehow placed under our control, they believe it will eventually lead to our demise. I guess I'm scared to ask this, but like, how would this look? Like a global disaster? 
Or are we talking about it getting control of CRISPR and releasing a global pandemic? Yes, there are those fears, for sure. I want to get more into all the different scenarios that they foresee in a future episode. But I think the simplest one to grasp is just this idea that a superior intelligence is rarely, if ever, controlled by an inferior intelligence. And we don't need to imagine a future where these ASI systems hate us, or they break bad or something. The way that they'll often describe it is that these ASI systems, as they get further and further out from human-level intelligence, after they evolve beyond us, they might just not think that we're very interesting. I mean, in some ways, hatred would be flattering. Like if they saw us as the enemy and we were in some battle between humanity and the AI, which we've seen in so many movies, but what you're describing is just, like, indifference. Right. I mean, one of the ways that people will describe it is that, like, if you're going to build a new house, of all the concerns you might have in the construction of that house, you're not going to be concerned about the ants that live on that land that you've purchased. And they think that one day the ASIs may come to see us the way that we currently see ants. You know, it's not like we hate ants. Some people really love ants, but humanity as a whole has interests. And if ants get in the way of our interests, then we'll fairly happily kind of destroy them. This is something I was talking to William MacAskill about. He is a philosopher and also the co-founder of this movement called the effective altruists. And the thought here is, if you think of the AI we're developing as like this new species, that species, as its capabilities keep increasing, so the argument goes, will just be more competitive than the human species. And so we should expect it to end up with all the power. 
That doesn't immediately lead to human extinction, but at least it means that our survival might be as contingent on the goodwill of those AIs as the survival of ants is on the goodwill of human beings. We'll be back right after this break. The world moves fast. Your workday, even faster, pitching products, drafting reports, analyzing data. Microsoft 365 Copilot is your AI assistant for work, built into Word, Excel, PowerPoint, and other Microsoft 365 apps you use, helping you quickly write, analyze, create, and summarize. So you can cut through clutter and clear a path to your best work. Learn more at Microsoft.com slash M365 Copilot. The Last Invention is sponsored by Cozy Earth. We all know how obvious it is when you don't sleep well. Everything feels harder the next day. Your energy is off, your focus, even your mood. Good sleep really does shape everything that comes after. That's the idea behind Cozy Earth's comforters. They're designed with careful attention to detail, using naturally breathable, temperature-regulating materials that help you settle into deeper rest. The construction creates this soft, cloud-like feel without being heavy or trapping heat, so you stay cool and comfortable all throughout the night. It's thoughtful design around something we all depend on, a great night's sleep. Try one for yourself, risk free. Cozy Earth offers a 100-night sleep trial so you can see how it feels in your own home. Their comforters are built to last and come with a 10-year warranty. Head to CozyEarth.com and use the code Invention for up to 20% off. And if you get a post-purchase survey, be sure to mention you heard about Cozy Earth right here on The Last Invention. Experience the craft behind the comforter and make every day feel a little more intentional. Deepfake porn didn't come out of nowhere. It was allowed to spread while governments dragged their feet and tech companies shrugged. I'm staring at myself in this video that I know I haven't made. 
This is what it looks like to feel violated. This season on Understood. If you follow the trail, who does it lead to? These images, they were like hunting me, and the biggest platform was Mr. Deepfakes. Understood: Deepfake Porn Empire. Available now on CBC Listen or wherever you get your podcasts. If the future is closer than we think, and if one day soon there is at least a reasonable probability that superintelligent machines will treat us like we treat bugs, then what do the folks worried about this say that we should do? Well, there's essentially two different approaches to the perceived threat. Some people who are worried about this, they simply say that we need to stop the AI industry from going any further, and we need to stop them right now. We should not build ASI. Just don't do it. We're not ready for it and it shouldn't be done. Further than that, I am not just trying to convince people to not do it out of the goodness of their heart. I think it should be illegal. It should be globally illegal for people and private corporations to attempt even to build systems that could kill everybody. What would that mean, to make it illegal? Like how do you enforce that? Yeah. So, what, are you going to outlaw algebra? Right, you don't need uranium and a secret centrifuge, you can just build it with code. Right, but you do need data centers, and you could put in laws and restrictions that stop these AI companies from building any more data centers, and a number of other laws. There are some people, though, who go even further and say that nuclear-armed states, like the US, should be willing to threaten to attack these data centers if these AI companies like OpenAI are on the verge of releasing an AGI to the world. Wait, so even bombing data centers that are in Virginia or in Massachusetts, I mean, like they see it as that great of a threat? They believe that on the current path we're on, there is only one outcome, and that outcome is the end of humanity. 
If we build it, then we die. Exactly. And this is why many people have come to call this faction the AI doomers. The accelerationists like to call us Doomer. That was a kind of pejorative coined by them. And very successfully, I must say. I disavow the Doomer label because I don't see myself that way. Some of them have embraced the name Doomer. Others of them just don't like the name Doomer; they often will call themselves the realists. But in my reporting, everyone calls themselves the realists, so I didn't think that would work. I consider it to be realistic, to be calibrated. And one of the reasons that they balk at the name is that they feel like it makes them come off as a bunch of anti-technology Luddites. When in fact, many of them work in technology, many of them love technology, people like Connor Leahy, I mean, they even like AI as it is right now. I mean, he uses ChatGPT. He just tells me that from everything that he sees, where it's headed, where it's going, we have no choice but to stop them. If it turns out tomorrow there's new evidence that actually all these problems I'm worried about are less of a problem than I think they are, I'd be the most happy person in the world. Like this would be ideal. All right, so one approach is we stop AI in its tracks. It's illegal to proceed down this road we're on. But that seems challenging to do, given how much is already invested in AI, and frankly, how much potential value there is in the progress of this technology. So what's the alternative? Well, there's another group of people who are pretty much equally worried about the potentially catastrophic effects of making an AGI and it leading to an ASI, but they agree with you that we probably can't stop it. And some of them would go as far as to say, we probably shouldn't stop it, because there really are a lot of potential benefits in AGI. 
So what they're advocating for is that our entire society, essentially our entire civilization, needs to get together and try in every way possible to get prepared for what's coming. How do we find the win-win outcome here? One of the advocates for this approach that I talked to is Liv Boeree. She is a professional poker player and also a game theorist. Our job now, right now, whether you're someone building it or someone who is observing people build it or just a person living on this planet, because this affects you too, is to collectively figure out how we unlock this narrow path, because it is a narrow path we need to navigate. We should be really focusing a lot right now on trying to understand as concretely as possible what are all the obstacles we need to face along the way and what can we be doing now to ensure that that transition goes well. This faction, which includes figures like William MacAskill, what they want to see is the thinking institutions of the world, the universities, research labs, the media, joined together to try and solve all of the issues that we're gonna face over the next few years as AGI approaches. So you mean not just leave this up to the tech companies? Exactly. They wanna see politicians brainstorming ways to help their constituents in the event that the bottom falls out of the job market, right? Right, or prepare communities to have no jobs, I guess. Some of them go that far, right? Like universal basic income. And they also wanna see governments around the world, especially in the US, start to regulate this industry. What are the concrete steps we could take in the next year to get ready? So we'd like regulations that say, when a big company produces a new, very powerful thing, they run tests on it and they tell us what the tests were. Geoffrey Hinton, after he quit Google, he converted to this approach, and he was talking to me about the kinds of regulations that he wants to see. And we'd like things like whistleblower protection. 
So if someone in one of these big companies discovers the company is about to release something awful, which hasn't been tested properly, they get whistleblower protections. Those are to deal, though, with more short-term threats. Okay, but what about the long-term threats? What about this idea that AGI poses this existential threat? What is it that we could do to prevent that? Okay, so I can tell you what we should do about AI taking over. There's one good piece of news about this, which is that no government wants that. So governments will be able to collaborate on how to deal with that. So you're saying that China doesn't want AGI to take over their power and authority. The US doesn't want some technology to take over their power and authority. And so you see a world where the two of them can work together to make sure that we keep it under control. Yes. In fact, China doesn't want an AGI to take over the US government, because they know it will pretty soon spread to China. So we could have a system where there are research institutes in different countries that are focused on, how are we gonna make it so that it doesn't want to take over from people? It will be able to if it wants to. So we have to make it not want to. And the techniques you need for making it not want to take over are different from the techniques you need for making it more intelligent. So even though the countries won't share how to make it more intelligent, they will want to share research on how do you make it not want to take over. And over time I've come to call the people who are a part of this approach the scouts, like the boy scouts. Be prepared. Like the boy scouts, yes. Exactly. And it turned out, after I ran this name by William MacAskill, so what if I called your camp the scouts? So a little fun fact about myself is I was a boy scout for 15 years. He actually was a boy scout. And so I thought, okay, the scouts. Maybe that's why I've got this approach. 
But the key thing about the scouts' approach, if it's going to work, is they believe that we cannot wait, that we have to start getting prepared, and we have to start right now. This is something that I was talking about with Sam Harris. The reasons to be excited and to want to go, go, go are all too obvious, except for the fact that we're running all of these other risks, and we haven't figured out how to mitigate them. Sam is a philosopher, he's an author, he hosts the podcast Making Sense, and he's probably the most impassioned scout that I know personally. There's every reason to think that we have something like a tightrope walk to perform successfully now, like in this generation, right, not 100 years from now. And we're edging out onto the tightrope in a style of movement that is not careful. If you knew you had to walk a tightrope and you got one chance to do it, and you've never done this before, like what is the attitude of that first step and that second step, right? We're like racing out there in the most chaotic way. Exactly, yeah, and just like we're off balance already. We're looking over our shoulder, fighting with the last asshole we met online, and we're leaping out there. Right, and you've been on this for a long time. In 2016, I remember, you did this big TED talk. I watched it at the time, it had millions of views, and you were essentially saying the same thing. You were trying to get people to realize that we have a tightrope to walk, and we have to walk it right now. Well, I wanted to help sound the alarm about the inevitability of this collision, whatever the timeframe. We know we're very bad predictors as to how quickly certain breakthroughs can happen. So Stuart Russell's point, which I also cite in that talk, which I think is a quite brilliant changing of frame, he says, okay, let's just admit it is probably 50 years out. Let's just change the concepts here. 
Imagine we received a communication from elsewhere in the galaxy, from an alien civilization that was obviously much more advanced than we are, because they're talking to us now, and the communication reads thus: people of Earth, we will arrive on your lowly planet in 50 years, get ready. Just think of how galvanizing that moment would be. That is what we're building, that collision, and that new relationship. Coming up on The Last Invention. All the world better die! Something bad over the country! Why is all the worry about the technology going badly wrong? And why are people not worried enough about it not happening? The accelerationists respond to these concerns. Existential risk for humanity is a portfolio. We have nuclear war, we have pandemics, we have asteroids, we have climate change, we have a whole stack of things that could actually, in fact, have this existential risk. So you're saying that it's going to decrease our overall existential risk, even as it itself may pose, to some degree, an existential risk? Yes. Researchers tell us what they saw that changed their minds. I was a person selling AI as a great thing for decades. I convinced my own government to invest hundreds of millions of dollars in AI. All my self-worth was based on the idea that it would be positive for society. And I was wrong. I was wrong. And we go back to where the technology fueling this debate began. Basically, this is the holy grail of the last 75 years of computer science. It is the genesis, the, like, philosopher's stone of the field of computer science. The Last Invention is produced by Longview, home for the curious and open-minded. To learn more about us and our work, go to Longview Investigations.com. Special thanks this episode to Tim Irvin. Thanks for listening. We'll see you soon.