The Last Invention

EP 4: Speedrun

62 min
Oct 16, 2025
Summary

This episode traces how AI safety advocates—including Elon Musk, Sam Altman, and Demis Hassabis—shifted from warning about existential AI risks to racing to build advanced AI systems faster than competitors. The narrative follows OpenAI's founding, internal conflicts, and the eventual release of ChatGPT, revealing how competitive pressure and scaling ambitions overrode safety-first intentions.

Insights
  • AI safety concerns paradoxically accelerated rather than slowed AI development; advocates believed building AI first was the only way to ensure it remained safe
  • Competitive dynamics and winner-take-most market beliefs drove OpenAI to release ChatGPT, running on GPT-3.5, before it was ready, prioritizing speed-to-market over its internal quality and safety benchmarks
  • The tension between scaling AI for safety research and the risks of scaling itself remains unresolved; Dario Amodei's departure suggests internal disagreement on this tradeoff
  • Data acquisition practices (scraping the internet without explicit consent) became normalized as companies raced to train larger models
  • Lobbying efforts by AI leaders (Musk meeting Obama, addressing the governors) failed to produce meaningful oversight, leaving competitive pressure as the dominant force
Trends
  • AI safety researchers increasingly adopt 'move fast and break things' mentality to stay competitive
  • Nonprofit-to-for-profit conversions in AI companies signal shift from mission-driven to venture-backed growth models
  • Scaling laws becoming dominant AI development strategy, replacing algorithmic innovation as primary path to capability gains
  • Competitive AI races between well-funded labs (OpenAI, DeepMind, Anthropic) driving acceleration over caution
  • Data scraping and copyright concerns emerging as legal and ethical flashpoints in LLM training
  • Founder conflicts over control and direction becoming critical inflection points in AI company trajectories
  • Public demonstrations (AlphaGo, ChatGPT) used as competitive signaling and market validation tools
  • Effective Altruism and AI safety frameworks shaping early lab decisions but ultimately losing influence within commercial AI development
Topics
  • AI Existential Risk and Superintelligence
  • OpenAI Founding and Mission Drift
  • Competitive Dynamics in AI Development
  • AI Safety Research vs. Capability Scaling
  • Scaling Laws and Compute Requirements
  • AlphaGo and Game-Playing AI Breakthroughs
  • ChatGPT Development and Market Release Strategy
  • Regulatory Approaches to AI Governance
  • Data Acquisition and Copyright in LLM Training
  • Nonprofit to For-Profit Conversion in Tech
  • Founder Conflicts and Leadership Disputes
  • DeepMind vs. OpenAI Competition
  • Mechanistic Interpretability and AI Safety
  • Anthropic's Founding and Safety Focus
  • Elon Musk's AI Involvement and Exit
Companies
OpenAI
Central subject; founded as nonprofit to build safe AGI, converted to for-profit, released ChatGPT
DeepMind
Competitor that developed AlphaGo; Demis Hassabis-led lab acquired by Google, triggered OpenAI's founding
Google
Acquired DeepMind; perceived as monopolistic threat that motivated OpenAI's creation as counterweight
Microsoft
Major investor in OpenAI; provided supercomputer infrastructure enabling massive scaling to 10,000 GPUs
Tesla
Elon Musk's company; stock dropped 9% after Musk's Joe Rogan podcast appearance discussing AI risks
Anthropic
Founded by Dario Amodei and safety-focused researchers who left OpenAI over safety prioritization concerns
Stripe
Greg Brockman recruited from Stripe to join OpenAI as co-founder
Y Combinator
Sam Altman was president; positioned him as influential Silicon Valley figure before OpenAI
Baidu
Dario Amodei worked on a scaling-laws project at the Chinese internet conglomerate before OpenAI
SpaceX
Elon Musk's company; he called making humanity interplanetary 'the most important project in the world' in his 2012 exchange with Hassabis
People
Elon Musk
Co-founder of OpenAI; warned governors about AI risks, invested in DeepMind, quit OpenAI over control disputes
Sam Altman
OpenAI president/CEO; Y Combinator leader; convinced Musk to fund OpenAI; led ChatGPT release strategy
Demis Hassabis
DeepMind founder; developed AlphaGo; meeting with Musk in 2012 became Silicon Valley turning point
Dario Amodei
OpenAI safety researcher; advocated for scaling to 10,000 GPUs; left to found Anthropic over safety concerns
Ilya Sutskever
OpenAI co-founder; recruited from Google; pushed for game-playing AI; supported aggressive scaling
Greg Brockman
OpenAI co-founder; recruited from Stripe; part of core leadership team
Peter Thiel
Arranged 2012 meeting between Musk and Hassabis that became pivotal moment in AI history
Geoffrey Hinton
AI pioneer; connectionist researcher; received Turing Award with Bengio and LeCun; expressed concerns post-ChatGPT
Lee Sedol
Go grandmaster; defeated by AlphaGo in 2016 match; resignation marked AI's creative capability breakthrough
Barack Obama
U.S. President; met with Musk to discuss AI risks; understood importance but took no regulatory action
Nick Bostrom
Author of 'Superintelligence'; book influenced Sam Altman's thinking on AI existential risks
Keach Hagey
Wall Street Journal reporter; wrote biography of Sam Altman; interviewed for podcast series
Quotes
"AI is a fundamental risk to the existence of human civilization. AI is a rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we are reactive in AI regulation, it's too late."
Elon Musk, 2017 National Governors Association meeting
"The AI race was started by the people who weren't about it. It was started by the exact people, Sam Altman, Elon Musk, Demis Asabis, Daru Amadeh, who said, at least nominally, that they were the most concerned and they wanted to prevent this to happen."
Andy (podcast guest)
"The goal of OpenAI is to make the future good and avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So are we? So it is a bad idea to create a structure where you could become a dictator if you choose to."
Ilya Sutskever, email to Elon Musk
"I tried for years. This seems like a scene in a movie. Nobody listens. Nobody listens. Nobody listens. Nobody listens."
Elon Musk, Joe Rogan Experience appearance
"In order to be able to study the safety challenges of very powerful AI systems, you have to have very powerful AI systems to use as your testing grounds. You can't sort of learn about safety on a Formula One car by practicing on like a Jalopy of a 10-year-old Honda Civic."
Dario Amodei
Full Transcript
Hello, Matt here. Before we get into this week's episode, I wanted to pop in real quick to let you all know about another podcast from our team here at Longview called Reflector. On Reflector, we mix together historical backstories with on-the-ground reporting to tell context-obsessed stories about the beliefs that are shaping the world. To find it, just search for Reflector on whatever app you are using to listen to this right now.

The progressive development of man is vitally dependent on invention. It is the most important product of his creative brain. His ultimate purpose is the complete mastery of mind over the material world, the harnessing of the forces of nature to human needs. This is the difficult task of the inventor, who is often misunderstood and unrewarded. Does anybody know who wrote that passage? This is The Last Invention. I'm Gregory Warner. So now for the main event.

There are some, like Tesla, Edison, the Wright Brothers, Ford, Jobs. And then there's the 2017 meeting of the National Governors Association. All these governors from red states and blue came together in a room to find out what they could do to prepare for the future, from a man who seemed to be ushering in the future. I'm really thrilled to introduce a man who's arguably the personification of technological innovation. Please join me in welcoming Elon Musk. The governors are eager to ask him about his plans for Tesla, about electric car infrastructure, about how to get ready for autonomous vehicles and even SpaceX flights. They want to know, what does Elon see as the next big tech on the horizon? What would you want things to look like in five to ten years with autonomous vehicles, electric vehicles? Well, I think things are going to grow exponentially. So there's a big difference between five and ten years. But no one seems prepared for where Elon wants to take this conversation. I have exposure to the most cutting-edge AI, and I think people should be really concerned about it. I keep sounding the alarm bell, but until people see robots going down the street, they don't know how to react. He tells them the best thing that lawmakers can do to prepare for the future is make sure that humanity has a future. AI is a fundamental risk to the existence of human civilization. AI is a rare case where I think we need to be proactive in regulation instead of reactive, because I think by the time we are reactive in AI regulation, it's too late. Because what's going to happen is robots will be able to do everything better than us. I mean, all of us. Yeah. I'm not sure exactly what to do about this. And at first it seems like the governors may be thinking he's pulling their leg, but he just keeps going. And when I say everything, the robots will be able to do everything. There's nothing. Let's move back to you rolling out the Model 3 this year, right? How many quarters will that take? After a moment of uncomfortable silence, the lawmakers eagerly move on. They never return to the subject. Now, it would not be long after this very public warning that Elon Musk himself was accelerating to build that very technology he seemed so alarmed about. And he wasn't alone. Today: how some of the very people most concerned about artificial superintelligence came to decide, one after another, that the best way to protect the world from this technology was for them to build it first and build it fast. The AI race was started by the people who were worried about it.
It was started by the exact people, Sam Altman, Elon Musk, Demis Hassabis, Dario Amodei, who said, at least nominally, that they were the most concerned and they wanted to prevent this from happening. They are the exact people who actually brought us into the situation we are in now. And they're still doing it.

So Andy, walk me through how we get from Elon Musk warning about AI to him trying to build AI, like going from doomer to accelerationist. All right. So this all started with a meeting between Elon Musk and Demis Hassabis. The Demis Hassabis of DeepMind, the child prodigy. Right. The gamer of gamers, the child genius. Back in 2012, Peter Thiel set up this meeting between these two men. And in the years since, that meeting has become like a Silicon Valley folk tale. Like, I heard about it from dozens of people who I spoke to for the series. And some of them were saying that if AI becomes even half as powerful as people think that it's going to, the future will look back at this meeting and see it as some kind of turning point. It was the Demis meeting with Elon where it all sort of broke. Peter Thiel himself recently told a version of the story to my old colleague, Ross Douthat. The rough conversation was, you know, Demis tells Elon, I'm working on the most important project in the world. I'm building a superhuman AI. And Elon responds to Demis, well, I'm working on the most important project in the world. I am making us an interplanetary species. As the story goes, Musk says, I'm sending us to Mars so that if anything terrible happens here on planet Earth, you know, nuclear war, some kind of civilization-ending pandemic, we've got this escape valve. We can actually travel to other planets. Our species can survive. And then Demis said, you know, my AI will be able to follow you to Mars. And then Elon sort of went quiet. And this is a huge trigger for Musk, where he is like, who is this guy? What is he trying to do? Why is he telling me that he is ultimately doing something that might kill us all? Karen Hao, the author of Empire of AI, she says that Musk quickly decides that he wants to keep his eye on Demis. And so he invests in DeepMind to keep tabs on the company. So his reaction is to be concerned and maybe a little freaked out, and then to say, here's some money, so I know exactly what you're up to. Yeah, exactly. So Musk becomes an early investor in DeepMind. And a few years later, after the impressive Atari demo, the AI that mastered Space Invaders. Right. Yes, without any training. Yes. Google quickly jumps in wanting to acquire DeepMind. Elon actually tries to get in the way of that and buy DeepMind himself. And it doesn't work. And when Google acquires DeepMind, that's the moment where suddenly Elon starts going out in public and really sounding the alarm about what he believes are the existential dangers of AI. I don't think most people understand just how quickly machine intelligence is advancing. Mark my words, AI is far more dangerous than nukes. I think that's the single biggest existential crisis that we face. Keach Hagey from the Wall Street Journal, she says that this is what inspired Elon to start going out and trying to lobby lawmakers and President Barack Obama. Elon even has a meeting with Obama. And it was kind of interesting, because there was a sense that yes, Obama understood the risks. And yes, he also understood how important AI was going to be for the economic development of the country. It promises to create a vastly more productive and efficient economy.
He gave an interview to Wired around this time, you know, saying, if properly harnessed, it can generate enormous prosperity for people, opportunity for people, can cure diseases that we haven't seen before. But it could increase inequality. It can suppress wages. And so we're going to have to develop new social constructs in order to embrace it fully. And yet, Elon left that meeting with a sense that Obama wasn't really going to do anything about the existential risk piece. Elon, he doesn't just strike out with President Obama. He's also talking to Vanity Fair. He's talking at colleges. He's speaking at conferences. Like, remember, this is the Tony Stark-era Elon Musk we're talking about here. Right. Like, this is the time when Elon Musk had pretty broad market appeal to lots of audiences, especially on the subject of technology. This is Elon at his peak celebrity, pre the controversies that would follow. But even for him, he feels like no one's taking him seriously. And so he starts to host these dinners where he would invite other tech leaders, sometimes other billionaires, and they would get together and try to brainstorm a way that they could stop Demis Hassabis and Google from making some sort of civilization-ending AI. All right. So first he tries to buy DeepMind, fails at that. Then he goes to the leader of the free world, tries to warn him, that doesn't work. Goes to the press, gives some speeches. Then it's themed dinners, where basically the theme is: how do we save the world from superintelligence? That's pretty much the story, yes. And one of those dinner guests was none other than Sam Altman. It is my belief that in the next few decades, someone will build a software system that is smarter and more capable than humans in every way. And that very quickly it will go from being a little bit more capable than humans to something that is like a million or a billion times more capable than humans. Who was Sam Altman at this time? Sam Altman was the president of Y Combinator, which basically meant he was like the king of Silicon Valley. Keach Hagey actually wrote a biography of Sam Altman called The Optimist, which is really good. I recommend people check it out. And she says that by 2015, Altman was already almost this mythical figure in Silicon Valley. He had helped to turn companies like DoorDash, Instacart, Airbnb into household names. What's Sam Altman's superpower? How would you sum up what he's so good at? Sam Altman is a once-in-a-generation fundraising talent. He's an incredible storyteller. He can convince people that he can see the future. He can sort of summon companies into being just by persuasion. He is also kind of a fixer, with lots of relationships all around Silicon Valley, people who owe him favors and could sort of make anything happen in Silicon Valley that anyone wanted. It turns out that since Altman was young, he had always been enamored with this idea of making a true AI, a thinking machine. But by the time he's having this meeting with Elon Musk, he had read the book Superintelligence by Nick Bostrom, and he had come to believe that if AI was made irresponsibly, it could possibly lead to the end of the human race. And he even began blogging about this idea that if AI happens, it could be the most consequential thing that ever happened to humanity, but it could also be dangerous.
So when he goes to this meeting with Musk, Altman starts talking to him about this idea that he is also very, very worried about AI potentially going wrong and becoming an existential threat to humanity. He pitches him on the idea that if you want to stop a dangerous AI, if you want to stop Demis, if you want to stop Google, then what we need to do is make a safe AI before they make a dangerous one. What do you think about the idea of us creating a lab that counters Google? Why don't we make a lab that would create the same technology, the same AGI technology, as a counterweight to Google, except it would be nonprofit, it would be open source, and it would be for the benefit of humanity. And Elon says, great, let's do that. And basically convinces Elon to bankroll this thing. And by the end of the year, they have created OpenAI. We started a group called OpenAI. It is a nonprofit. The goal is to build general, super AI for the benefit of humanity. OpenAI is structured as a 501(c)(3) nonprofit to help spread out AI technology so it doesn't get concentrated in the hands of a few. What is going to be your sort of biggest differentiator then? Like, OpenAI versus the megacorps. I hope that our biggest differentiator is, number one, we do the best research in the world. And number two, we care the most about how it gets deployed. Okay, so OpenAI, the company behind ChatGPT, came out of this plan to stop Google by beating Google at their own game. But do it in a totally different way than Google was doing it, because it was going to be a nonprofit. Right, the idea is to create almost like an anti-Google. They called it a nonprofit research lab. They didn't even call it a tech company. And at the core of that lab is this mission that not only are they going to make the super mind, the AGI, but that they are going to ensure that this thing is good for the entire planet. Right. Okay, so when they start OpenAI, what's it like at first? So the first thing that's quite interesting is, in order to pull off what they wanted to do, they needed to recruit talent. So they needed to break up Google's monopoly on AI research talent. And they used their nonprofit ethos and this mission-driven idea to very effectively poach a bunch of researchers from Google and then also bring a bunch of new PhD grads into the founding team. And remember, even with Google purchasing DeepMind, most people in technology still don't really buy into the idea that AGI is coming anytime soon. And so the people that primarily ended up joining OpenAI were self-selected, so-called AGI believers, people that were there for the crazy quest to try and recreate human intelligence. And they were of one of two camps. There were the people who were AGI believers but doomers, who were really focused on the AI safety orientation of: we're ultimately trying to create this thing in order to prevent existential risk. And there were the accelerationists, who were like, we believe in this thing because we think it's going to bring us to utopia. They were both there together in this one lab working on this project. They were both there together in that one lab. And at the time, they philosophically did not seem that different, because compared to the rest of the field, which just did not think that this idea of creating AGI was really something that held water, the doomers and the accelerationists are just two sides of the same coin. They both believe in AGI. There were only so many of them.
So they were all kind of banded together based on that shared belief and excitement and fear around doing this journey together. Essentially, they were sending out this signal to the world of technology. And in response, they end up actually bringing together a really fascinating mix of people. They were able to poach Ilya Sutskever, one of the guys behind the ImageNet win with Hinton, from his job at Google. They get Greg Brockman to join them from Stripe. They eventually bring in this guy, Dario Amodei, who had also worked at Google. And these were people leaving big-paying, very stable jobs in the world of technology to come work at this new research lab, because, as they said, they truly believed in this mission and how important it was. Okay. So they walk away from these big paychecks, these stable jobs, and they build what? How do they begin to make an AGI? Well, at first, Ilya is very excited about the idea that they should make an AI that's going to go head to head in some kind of game against Demis. Famously, Elon Musk is a gamer. And so he's pushing them down that path. But there's also just this kind of looseness, right? They are a research lab. So there's this sense of, let a thousand flowers bloom. Like, what path might lead to AGI? We don't know. Let's try this one. But after months and months of this, without any real meaningful progress, suddenly Demis Hassabis strikes again.

Now, the wait is almost over. In less than one hour from now, man will face off against machine in an epic game of Go. The competitors are grandmaster Lee Sedol of Korea, and he's taking on the artificial intelligence supercomputer called AlphaGo. Now, the first game of the five. In 2016, Demis and the team at DeepMind, they thundered back onto the public stage again, this time to play the game Go. The game of Go is the holy grail of artificial intelligence. For many years, people have looked at this game and they've thought, wow, this is just too hard. Everything we've ever tried in AI, it just falls over when you try the game of Go. And so that's why it feels like a real litmus test of progress. If we can crack Go, we know we've done something special. Are you familiar with Go? I was obsessed with Go as a kid. I remember, I didn't really get into chess, although I got into it briefly, but Go, the stones were very beautiful, the pieces are just these black and white stones. Mm-hmm. What I think is cool about it is that it is an ancient Chinese game. And I was looking it up, and it appears that we don't even know how old it is. Like, there's records of people playing Go 2,000 years ago. It's older than chess, right? Way older than chess. And what's crazy about it too is that you can be playing for a while and not even know who's winning. It's that complex. Mm-hmm. Jasmine Sun, who was one of the tech writers that I spoke to about this, she told me, you could play Go every day of your life and you would never play the same game twice. The game of Go is an ancient Chinese game that is known for having more possible board positions than the number of atoms in the universe. You can't even calculate the positions. It's unfathomable. It's literally unfathomable, right? Like, there's no way through some sort of brute-force search that you can just search every possible move and compare them all against each other, right? Like, you can't do that. You can't memorize the strategy.
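That unfathomable number survives a quick back-of-the-envelope check. Here is a minimal Python sketch of the arithmetic, assuming the standard rough estimate of 10^80 atoms in the observable universe:

```python
# Each of the 361 points on a 19x19 Go board is empty, black, or white,
# giving 3**361 colorings -- an upper bound on positions (only ~1% are legal).
board_points = 19 * 19              # 361 intersections
upper_bound = 3 ** board_points     # all colorings, legal or not
atoms_in_universe = 10 ** 80        # standard rough estimate (assumption)

digits = len(str(upper_bound)) - 1
print(f"3^361 is about 10^{digits}")     # about 10^172
print(upper_bound > atoms_in_universe)   # True, by ~92 orders of magnitude
```

Even discounting illegal colorings (the count of strictly legal positions is known to be about 2 x 10^170), the comparison is not remotely close, which is why brute-force search was never an option.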
This was a game that pretty much no expert system could have hoped to truly master, which is exactly why Demis Hassabis wanted to create an AI that could. So even if you took all the computers in the world and ran them for a million years, that wouldn't be enough compute power to calculate all the possible variations. But another way that it's really different from chess is that Go players, because there is no way for them to really calculate their way to victory, the ones who become masters of this game are often described as having some sort of deep instinct. Or often they use the word intuition. Champion Go players are known for being really intuitive, I suppose, for having some sort of deep feel for strategy on the board that you cannot learn by memorizing any sort of rulebook. If you ask a great Go player why they played a particular move, sometimes they'll just tell you it felt right. So one way you can think of it is that Go is a much more intuitive game, whereas chess is a much more logic-based game. A lot of game players call Go a very, quote unquote, human game. So Demis and DeepMind, they create this AI system called AlphaGo. And the way that they train it sounds like sci-fi. And it actually plays into one of the big fears that the doomers have about how AGI might one day turn into ASI and, you know, replace us all. And it starts off like this. So at first they just load it up with a whole bunch of data of human beings playing Go, so that it can find its own patterns and see that this is working for this person and that is working for that person. But again, Go has more possible board positions than the number of atoms in the universe. There are so many moves and positions that no one has ever thought of yet. And so then what they do is they make an identical copy of the AI system, so that the AI can play against itself millions and millions of times, each time learning new strategies and gathering more data, and learning new strategies and gathering more data. Self-play is what they call it. This is the thing that really makes the system not just quite good but superhuman at playing Go, because it's able to put in so many reps, like an infinite amount of practice that no ordinary player ever could. Interesting. So that makes me think of that idea from Malcolm Gladwell, the 10,000 hours thing, like it takes 10,000 hours of practice to become an expert in something. But this thing can basically log the equivalent of 10,000 hours of practice in like a month or something. It was even crazier than that, because AlphaGo can play itself so quickly that in the span of a week it could play more games than a human could in centuries. Wow. Like hundreds of years' worth of non-stop playing in a week.
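To make the self-play idea concrete, here is a toy sketch in Python: two copies of the same value table play the much simpler game of Nim against each other, and every finished game's outcome updates the shared table. This is a cartoon of the concept under stated simplifications (no neural network, no tree search), not DeepMind's actual AlphaGo training code:

```python
import random
from collections import defaultdict

# Toy self-play: the game is Nim (take 1-3 stones from a pile; whoever
# takes the last stone wins). A shared value table plays both sides, and
# each finished game nudges the table toward moves that led to wins.

values = defaultdict(float)   # (pile_size, move) -> estimated chance of winning
EPS, LR = 0.1, 0.05           # exploration rate, learning rate

def pick_move(pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < EPS:
        return random.choice(moves)                     # occasionally explore
    return max(moves, key=lambda m: values[(pile, m)])  # otherwise exploit

def self_play(pile=10):
    history = {0: [], 1: []}     # the (state, move) pairs each copy played
    player = 0
    while pile > 0:
        move = pick_move(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            winner = player      # took the last stone
        player = 1 - player
    for p in (0, 1):             # credit every move with the final outcome
        outcome = 1.0 if p == winner else 0.0
        for state_move in history[p]:
            values[state_move] += LR * (outcome - values[state_move])

for _ in range(50_000):          # "centuries of practice" in a few seconds
    self_play()

# The table tends to rediscover Nim's known strategy: leave the opponent
# a multiple of 4 stones. From a pile of 6, that means taking 2.
print(max((1, 2, 3), key=lambda m: values[(6, m)]))
```

The point isn't the game; it's that nothing in the loop needs a human opponent, which is why such a system can rack up centuries' worth of games in days.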
Hello and welcome to the DeepMind Challenge, Game 1, Round 1, live from the Four Seasons here in Seoul, Korea. In March of 2016, they set up this showdown against the player Lee Sedol, who was often described as the greatest player of his generation. It garners all this attention, more than 100 million viewers. All these journalists are there. Everyone in Korea is watching. The game is huge there. Everyone here is very excited for the match of the century. Journalists from Asia and around the world, about 350 members of the press, are here to see if artificial intelligence can really beat human intelligence. Now, I should say, just to level-set here, that pretty much everyone thinks that Lee is going to win. Even Google believes that their system is very likely to lose. Demis Hassabis himself later said that the team was given just a 5% chance of pulling this off. Which is interesting, because even though chess had fallen to AI and Jeopardy had fallen to AI, Go, it seemed, was just too hard, or maybe too human. Yeah, it's too something. Lee is too good. The game is too complex for an AI. And maybe even just, it's not ready yet. Maybe one day it could happen, but not today. And in the middle of one of the matches, during move 37, as it's called, AlphaGo ended up doing this weird thing that seemed to prove the doubters may have been right all along. Interesting. AlphaGo played this move, which I want to hear more about in a second. But Lee has left the room. It makes a move on the board that to all the spectators, to Lee Sedol, looks almost like a mistake. It's a very surprising move. It's a surprising move. I wasn't expecting that. I don't really know if it's a good or bad move at this point. Like, the DeepMind AlphaGo team, they were watching it, and even they thought it had made a mistake, because it seemed weird. It seemed bad. It was something that no human player would ever do, had ever done, in a position like that. So while the cameras are rolling and the world is watching, the DeepMind team is like, damn, did this thing just malfunction? Like it glitched. Yeah, like a glitch. The professional commentators almost unanimously said that not a single human player would have chosen move 37. I can't believe what I see right now. And yet, in the end: I think he's resigned. Oh my gosh. AlphaGo wins. Yeah, Lee has. I'm getting word that Lee has resigned. Yeah, I believe so. In the battle of man versus machine, a computer just came out the victor. DeepMind put its computer program to the test against one of the brightest minds in the world, and won. AlphaGo beat a professional player who has 18 Go world championships under his belt. And then suddenly people start to go back and re-examine that move, move 37. It went beyond its human guides, and it came up with something new and creative and different. And they realized that it was the turning point in that match. That move was what got everyone in the Go community and the broader community, even Lee himself, to say, oh, the computer, this machine, can be creative. It can be intuitive. And it can sort of master this thing that I always thought was a human task. I see this move, I feel something changed. Maybe it just can show humans something we never discovered. Maybe it's beautiful. So the AI played a move that no human had ever made in any recorded Go game, at least. And that means it discovered its own original strategy. And people would go as far as to say that it had something like an original thought, an original idea. And what do we know about how it did that? How did Demis Hassabis and his team at DeepMind explain move 37? Well, they really wanted to know. And so they spent time digging through the code and looking inside the guts of the system. They wrote a paper about it. And while they were able to, you know, glean some information, because it's one of those connectionist, neural-net, AI-toddler styles of AI... Right. Remember the trade-off that Yoshua Bengio was telling us about: to get this kind of impressive performance, this level of intelligence, you just have to accept that you're not going to get satisfying answers to these kinds of questions. You have to accept some level of mystery. This is the black box.
And when I asked Yoshua Bengio about this AlphaGo moment: With AlphaGo, I thought, oh, now we're getting close to something important. He said this is when he realized that AI was now entering a whole new era. And he wasn't the only one. All across Silicon Valley, across the world of technology, people were singing Demis's praises. People were abuzz about AlphaGo. And of course, yet again, Elon Musk doesn't like this one bit. DeepMind's show of force in AlphaGo freaked Elon out a lot, and this sort of ambling approach that OpenAI had at the beginning, of let a thousand flowers bloom, all the researchers can kind of pursue their own different areas, it made him have little tolerance for that. We now know in some detail what happened inside of OpenAI during this time, because a number of their internal emails were revealed as part of a lawsuit. And so you can see in these emails Elon Musk telling Altman and the leaders at OpenAI just how frustrated he is that they're still losing to Demis. In one of them, he says: OpenAI is on a path of certain failure relative to Google. There obviously needs to be immediate and dramatic action, or else everyone except for Google will be consigned to irrelevance. What does immediate and dramatic action mean? Well, according to Keach Hagey, this is when Musk was saying, where is our game player? Elon wanted to fight tit for tat with DeepMind. And he really wanted to respond to it by showing an even cooler and harder game that AI could beat. We need to challenge DeepMind in some kind of public display of our dominance in a game with an AI game player. And Altman and the team at OpenAI, they were saying to Musk, well, we do have a way that we think we could beat DeepMind. We've got this strategy. But for us to implement that strategy, we're going to need way more compute power, and we're going to need more money. So they pitch him on this idea that the nonprofit OpenAI could have a for-profit arm. And then that way Altman could go out and do the thing that he's best at, right? He could go get investor money that they can use to up their compute power. They start discussing, how are we going to convert the nonprofit into a for-profit? And all of a sudden, Musk and Altman start butting heads, because when they go to form the for-profit, the question becomes, who will be the official CEO of the for-profit? And both of them want to be the CEO, and they cannot agree. And Musk comes back and basically says, no way. I gave you all this money to start a nonprofit. If you're going to turn that nonprofit into a for-profit, then I should be the head of that. He even thought about folding it into Tesla and just making it an arm of a for-profit company he was already running. He wanted to be CEO and have controlling voting power. So Elon says if it's going to be for-profit, then he's going to be in charge. Yes, or he's going to walk. And so now OpenAI has a decision to make. Lose Elon Musk and his celebrity and his money and his tech prowess, or change the structure of their company, which they designed specifically not to have one person be the ultimate controller of this technology, because they think it's going to be so powerful that no one man should wield it, right? Right. One OpenAI co-founder, Ilya Sutskever, he wrote to Elon in this email, and he says, quote, the goal of OpenAI is to make the future good and avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we.
So it is a bad idea to create a structure where you could become a dictator if you choose to. So what does Elon do? Well, he responds to this email: guys, I've had enough. This is the final straw. And then Musk decides, in a huff, if this is not going to stay a nonprofit, and it's converting to a for-profit where I am not in total control, I am leaving. And not long after, he quits OpenAI. But within a few months, he starts going around on a very different kind of campaign, essentially telling people that he has also quit trying to sound the alarm about AGI and what it's going to give rise to.

Four, three, two, one, boom. Thank you. Thanks for doing this, man. Really appreciate it. You're welcome. Very good to meet you. Nice to meet you too. And thanks for not lighting this place on fire. You're welcome. This is the time when Elon goes and makes his first appearance on the Joe Rogan Experience. This is the infamous episode where Musk smoked pot on camera. Yes. I mean, it's legal, right? It's totally legal. Okay. It's just tobacco and marijuana in there. How does it work? Do people get upset at you if you do certain things? And as you remember, this became like a whole big thing. The stock value of electric car manufacturer Tesla tumbled 9% Friday morning. Billionaire Tesla head Elon Musk, what is he up to? Shares in Tesla took a hit today shortly after video was posted of CEO Elon Musk apparently smoking pot. But for all the spectacle in this moment, there was this other part of the podcast that even I didn't really notice until I went back recently and listened to it. You scared the shit out of me when you talk about AI. Between you and Sam Harris, I realized, like, oh, well, this is a genie that, once it's out of the bottle, you're never getting it back in. That's true. You're honestly, legitimately concerned about this? Like, AI is one of your main worries in regards to the future? It's less of a worry than it used to be, mostly due to taking more of a fatalistic attitude. Musk basically tells Rogan, hey, I did my best to warn people. I tried to convince people to slow down AI, to regulate AI. This was futile. I tried for years. This seems like a scene in a movie. Nobody listens. Nobody listens. Nobody listens. Nobody listens. Nobody listens. Nobody listens. You've met with Obama, and just for one reason, just to talk about AI. Yes. I met with Congress. I was at a meeting of all 50 governors and talked about just AI danger. And I talked to everyone I could. No one seemed to realize where this was going. After a short break: OpenAI, now without Musk, stumbles into a breakthrough that will transform the industry, and yet again make the people most concerned about building AI safely decide that they need to build it even faster. Stay with us.

The Last Invention is sponsored by Cozy Earth. We all know how obvious it is when you don't sleep well. Everything feels harder the next day. Your energy is off, your focus, even your mood. Good sleep really does shape everything that comes after. That's the idea behind Cozy Earth's comforters. They're designed with careful attention to detail, using naturally breathable, temperature-regulating materials that help you settle into deeper rest. The construction creates this soft, cloud-like feel without being heavy or trapping heat, so you stay cool and comfortable all throughout the night. It's thoughtful design around something we all depend on: a great night's sleep. Try one for yourself, risk-free. Cozy Earth offers a 100-night sleep trial so you can see how it feels in your own home.
Their comforters are built to last and come with a 10-year warranty. Head to CozyEarth.com and use the code Invention for up to 20% off. And if you get a post-purchase survey, be sure to mention you heard about Cozy Earth right here on The Last Invention. Experience the craft behind the comfort and make every day feel a little more intentional.

Deepfake porn didn't come out of nowhere. It was allowed to spread while governments dragged their feet and tech companies shrugged. I'm staring at myself in this video that I know I haven't made. This is what it looks like to feel violated. This season on Understood: if you follow the trail, who does it lead to? These images, they were like haunting me. And the biggest platform was dismissing the deepfakes. Understood: Deepfake Porn Empire. Available now on CBC Listen or wherever you get your podcasts.

Okay, so once Elon Musk walks out, what happens at OpenAI? Well, it turns out that even though this was a nightmare for everyone at OpenAI, and they were worried that this might spell the end of the company, it turned out to be a blessing in disguise. I talked to Andrej Karpathy, who at different points worked for both Elon and OpenAI. He said, in the beginning, OpenAI was trying to copy-paste DeepMind. In the end, it turned out that DeepMind had to copy-paste OpenAI. As Keach Hagey was saying to me, pretty much the whole time that Elon was at OpenAI, he was pushing them towards this AI game player strategy. And after Elon left in 2018, over in the corner, a completely different researcher had a breakthrough with a completely different technology: a language model. And it was only after he was gone that they instead focused, eventually, on language. And that is how, by 2022, they flip everything and have Demis and Google chasing after them instead of the other way around. The next generation of artificial intelligence is here. It's called ChatGPT. But before that would happen, there was another split inside this company. And it's a split that some people in Silicon Valley think may end up being far more consequential even than Elon Musk's. And this is the paradox of Dario Amodei. Who is Dario Amodei? Why does he end up at OpenAI? And what is it that he really contributes to the team there? Dario, like Demis actually, has a neuroscience background, which is one interesting thing. He is also very interested in the brain, which sort of ends up informing a lot of his theories for AI systems and how they should work. But what he was really known for at OpenAI was his emphasis on safety. Jasmine Sun and Kevin Roose, they're both working on a book right now about AI, and Dario is one of its central characters. He is, by his own admission, kind of a nervous person. And he was really a pioneer, not just in developing AI systems, but in worrying about them and how they might go wrong. He does end up being extremely concerned about AI risk and the potential for systems much smarter than us to develop their own goals, become unaligned with, opposed to, or just sort of not caring about human goals, and then sort of end up taking over or screwing humans over. So Dario was one of the people who came to OpenAI as someone whose motivation was more about stopping a dangerous AI. Yeah. He's very worried about superintelligence. He's associated with this group called the Effective Altruists, and he says that he comes to OpenAI in large part because of this altruistic, safety-focused mission that they have. And how do you sum up what it means for someone like Dario to study AI safety?
Like, what exactly is AI safety? So AI safety is a big field. It contains a bunch of different subfields. One of them that Dario and his colleagues have been very instrumental in is called mechanistic interpretability. That's a very long name. I've told them they should rebrand it to something people can actually pronounce, but they haven't listened to me. But basically, mechanistic interpretability is the science of figuring out how AI models make decisions, why they behave like they do, what is going on inside the guts of the system. So they're targeting the mysterious AI black box. Yes. Making that interpretable to humans. Okay. So he's trying to probe the AI to figure out why it's doing what it's doing, and is it going to do anything we don't want it to do? Yes. That's one part of AI safety. There's other parts of it too. The other thing that Dario was really into is size. Early on in his career, Dario Amodei had worked on a project at Baidu, the Chinese internet conglomerate, that dealt with these so-called scaling laws. And this was a theory at that time, an unproven theory, that basically the key to making an AI system more intelligent was just making it bigger and training it on more data. This was countercultural in AI research at the time. Lots of people were theorizing that you needed some clever new algorithm or some very different architecture to make these models smarter. But Dario and his colleagues sort of had this idea that you could actually just make them bigger and the systems would get smarter. And so the idea here is just that if we take an already promising neural network AI system and we just make it bigger, then maybe, like the human brain, which is bigger than the bird brain or the cat brain and smarter, this thing will also get smarter and smarter, and maybe one day will even become a general intelligence. Yes. And essentially this is sort of his best guess at how companies like OpenAI are going to get more intelligent systems. It's not by training them on more specialized data. It's not by coming up with clever efficiency hacks. They are just going to make the models bigger, and that is going to take care of a lot of the problems. So his theory is that if you just take a promising AI model, for OpenAI that became their language model, that is, a neural net that looks for patterns in text and language, then what you just need is a massive amount of text and data to pump into it, as well as a massive amount of GPU computer chips, which are of course very expensive.
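The scaling-law idea has a simple mathematical shape: loss falls as a smooth power law of model size. Here is a hedged Python sketch; the functional form and constants are roughly those reported in OpenAI's 2020 scaling-laws paper (Kaplan et al.), used purely as an illustration of the idea rather than a statement about any particular model:

```python
# Power-law scaling of loss with parameter count: L(N) = (N_c / N) ** alpha.
# N_c ~ 8.8e13 and alpha ~ 0.076 are roughly the values fit for language
# models in Kaplan et al. (2020); treat them as illustrative, not gospel.

def predicted_loss(n_params, n_c=8.8e13, alpha=0.076):
    return (n_c / n_params) ** alpha

for n in (1e8, 1e9, 1e10, 1e11):   # 100M up to 100B parameters
    print(f"{n:.0e} params -> loss ~ {predicted_loss(n):.2f}")

# Each 10x in size buys a roughly constant multiplicative drop in loss:
# no new algorithm required, which was exactly the countercultural bet.
```

The same paper fits similar power laws for data and compute, which is why cranking up the GPUs, as described next, forces a matching crank on training data.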
And this is why they were pretty sad to lose Elon Musk and his money; this is one of the reasons it was sad to see Elon walk out the door. But not long after he does, Sam Altman goes out and does the thing that he's so good at. He knows that they need lots of money, lots of GPUs. So he goes out and strikes up a partnership with one of the biggest companies of all time: Microsoft. So Microsoft ends up fulfilling both of these things. They become the largest investor in OpenAI, and they partner to build the supercomputers that OpenAI needs. And Greg, this is a detail of the story that's especially wild to me. When Dario and his team at OpenAI get access to these Microsoft supercomputers, they decide to take their scaling theory as far as they possibly can. Dario Amodei was really pushing for the idea of, no, we really go big or go home. So for example, at DeepMind, when they did that Atari demo, they were using just one GPU. For the richest universities in the world, like MIT or Stanford, it would be a big deal to have a few dozen chips. And in places like India, you would have grad students, multiple grad students, sharing one computer chip. So they're trying to do their research on fractional amounts of GPUs. But with this Microsoft partnership, OpenAI now has access to these supercomputers with thousands and thousands of GPUs. And so, as the story goes, Dario approaches the leadership team at OpenAI and says, guys, what if next time, what if we take this model and we crank it up to 10,000 GPUs? So 10,000 computer chips. I mean, no one had ever thought about that before. That's bananas. Actually, within OpenAI this was a contentious decision, because some people were like, is that even possible? That just seems improbable. Other people were like, no one has ever done this before, and if we think that AI could go badly, maybe we should scale it more gradually, not just do this dramatic step change. But his philosophy was, we need to accelerate the development of the technology so that we can then retain hold of it and figure out how to perfect it in the lead time that we have over other, potentially bad, actors getting a hold of it. And Sam Altman also really liked the idea, because his entire career has been adding zeros to things. So he was like, let's do it. And Ilya Sutskever also, philosophically, was always more in the camp of: scaling will potentially bring wondrous and potentially terrifying things, but we should not be afraid to go in that direction. And so the main people that were running OpenAI all converged on, yeah, let's give it a go. However, if you are going to massively scale up your GPUs, your compute, you also have to massively scale up the data that it is searching for patterns inside of. Because just think of it: no matter how smart and powerful it is, if it only has access to a limited amount of data, it's never going to truly become an artificial general intelligence. It's almost like Einstein, as smart as he was, if he'd only ever read one book, he would know that book really well. Right. But he wouldn't be that smart. Yes. You're going to need a massive amount of data to match the massive amount of compute. But here's the problem. There are only so many open-source, free databases on the internet. And so here's where OpenAI does something that right now has a lot of people, a lot of different corporations, suing them, including the New York Times. Because it appears, and some former OpenAI employees have leaked some details about this, that they just started dumping big chunks of the internet into their AI. I believe this is going to be the pinnacle scene in the inevitable Hollywood depiction of this story one day, where they're just ramping up all the stuff that they're throwing into their system. Like, oh, here's a free database, let's put that in. Oh, look, let's put some Reddit in there. Yeah. And then, hey, how about Wikipedia? And then, hey man, these researchers over at the University of Toronto just scraped all these books off the internet. I'm sure they wouldn't care if we just took all them copyrighted books and fed that to the LLM. No problem, right? What else we got? Yeah, that's pretty much what happened. Well, allegedly. They started throwing in scientific journals, news articles, blogs, transcripts from YouTube videos. They just kept going and going. And how far does this go? Do they feed it the whole internet? Well, I guess there is a lawsuit that's happening right now.
So we're going to learn more details, I think, as information comes out from those suits. But it's been reported that basically if a website or some text online didn't explicitly have a label on it saying, do not use this to train your AI, they adopted a stance of better to ask for forgiveness instead of permission. So all we know is that they just dumped a lot of internet in the system. We don't know how much or what exactly they put in there. Yes. And we know that this is eventually the strategy that would give birth to what we now call ChatGPT. And so Dario is both the guy who is saying, let's scale this thing up further and faster than anyone has before, let's crank this up to 10,000 GPUs. But he's also the safety guy. Isn't there a tension between those two? Between let's crank the knob up to 11 and we really need to make sure this is safe? Yeah, this is sort of the classic paradox of Dario Amodei and of AI safety in general: on one hand, they fear the effects and implications of these very large, very powerful models, and on the other, they're trying to build them and stay on the cutting edge of AI capabilities. And I've asked Dario about this before, and he says, in order to be able to study the safety challenges of very powerful AI systems, you have to have very powerful AI systems to use as your testing grounds. You can't sort of learn about safety on a Formula One car by practicing on, like, a jalopy of a 10-year-old Honda Civic. It just won't teach you that much about what kinds of risks are going to take place when AI is very powerful. And so the argument here is that to make a powerful AI that is safe, that is good for humanity, you're going to need to learn about powerful AI systems and test powerful AI systems. And so, therefore, you're going to have to make one. Yes. If you want to do cutting-edge AI safety research on very powerful systems, you need to actually build those powerful systems. I think the other thing that Dario Amodei would say is, whichever system is the best is going to be embedded in every part of society. Like, we're going to use it to make decisions about who to give a loan to. We're going to use it to plan our cities. We're going to use the superintelligence to maybe even figure out our military strategy. And so the only way to have the impact that we want to have in the world, to ensure that we have superintelligence and that the superintelligence does things like curing cancer instead of screwing us all over and self-sabotaging ourselves, is by having both the safest and the best model. Because if ours is safest, but it's not actually a very good model, then the unsafe model is going to be the one that's widely deployed, and that's a much worse world. And the last argument I've heard him make is: the only way to stop a bad guy with a powerful AI is a good guy with a powerful AI. Essentially, this is the argument that this technology is so powerful and so lucrative that someone is going to build it. And in Dario's mind, that someone could be an authoritarian government. It could be a rival AI company that doesn't care as much about safety. It could be a terrorist group. And so the ethical thing to do, in his mind, if you are concerned about the power of these AI systems, is for you to be the one who builds it and keeps it safe, and kind of sets the high bar of safety that the rest of the industry will have to follow. And it was this mindset, this idea that to make AGI safe, you need to make it fast.
This is part of what drew Dario to OpenAI in the first place. This is a part of the mission that he loved. However, one day, late in 2020, he just up and quit, along with several members of the AI safety team, and they went out and pretty much immediately started a rival AI company called Anthropic. All right, so Kevin, Dario Amodei has yet to accept my interview request, although the people that he works with are very nice, and I had a nice meeting with them. Maybe one day he will come on the show. But in the meantime, I know you've spoken to him. What do you understand is the reason that he leaves OpenAI? What does he see there? What is it he doesn't like? So the official story of why Dario and his colleagues left OpenAI is that they had philosophical differences about AI safety approaches and priorities, that Dario and his team wanted the company to put more emphasis on safety, and that others at the company were less interested in that. The real story is more complicated and involves not only philosophical differences, but also real personal differences and beefs, lots of disagreements I've heard about in my reporting, about specific decisions they were making, whether they were taking safety seriously enough, whether they were becoming too commercial. I mean, you have to remember that when Dario joined OpenAI, it was a research nonprofit. It was specifically set up not to be and act like a normal AI company. And by the time he left, it had started this for-profit subsidiary. It had struck this deal with Microsoft. It was starting to look more and more like a kind of normal tech startup. And I think that made him and his colleagues very uncomfortable. And there are some other, juicier stories that I'm going to save for my book. So we don't know what Dario saw that scared him. No, we don't know. I mean, maybe Kevin knows something, and we'll just have to wait for his book. All we know is that he quits, he says that it's connected in some way with AI safety, and he opens a competitor, claiming that now he's going to be the one to make AI truly safe. And so now we have more competitors in the race. Yes. And this pushes everyone to work even faster. And really, where you see that most dramatically is around ChatGPT. Because OpenAI had already decided that eventually they wanted to release a version of ChatGPT to the public. But they didn't think it was quite ready. It had gone from GPT-3 to GPT-3.5, but it was still buggy. It still regularly had these hallucinations that they didn't understand. So they were trying to hit their benchmark of GPT-4 before going public with their chatbot. But suddenly this rumor starts to spread within the company that Anthropic also has a chatbot, and they might release it soon. And so OpenAI executives make a decision: we are not going to wait for the GPT-4 launch, because that model is just not ready. But we have the chat interface, and we have GPT-3.5. They're nervous that if Anthropic beats them to market with their chatbot, then OpenAI is going to seem like they're behind the ball. They're going to come off like a copycat. They're operating under a very Silicon Valley belief of winner-takes-most. So you need to be the number one. You need to be the one that has the name recognition, the one who invented this kind of chatbot. And so they just decide, you know, let's do a low-key, no press release, no advertising, no social media blitz release of ChatGPT, running on GPT-3.5.
And Keach Hagey and Karen Hao, they were telling me that supposedly, back at OpenAI, the team did not think that this was going to be a very big deal outside of Silicon Valley. They didn't think it was going to make a very big public splash. And so why release it if they didn't think it was going to be a hit? Well, in some ways it was like insider signaling, to just say to the world of technology, we were here first. It didn't matter if the public used it or not. It mattered that the world of technology didn't think that they were just copying off of their rival Anthropic. Inside the company, a low-key research preview is how they described it. Let's just release this model with this new interface, which was just a chatbot, and see what people think. The night before, they were making bets on how many people would actually start using the model in the first weekend, and the highest bet was 100,000. So that's how many users they provisioned their servers for. So on November 30th, Sam Altman goes on to Twitter and he just writes, today we launched ChatGPT, try talking with it here, and he pastes the link. The next generation of artificial intelligence is here. The future is now. The internet's going crazy over a new artificial intelligence called ChatGPT. A new artificial intelligence chatbot. ChatGPT is like a Google you can ask to do things. It can answer essay questions, write songs. Who knows what this means for companies, and the ethical issues. It already has more than a million users. Very creepy. A new artificial intelligence has gone viral. On Wall Street, people are questioning whether ChatGPT is a threat to some established companies. Next time, on The Last Invention.

Well, I'd love it if you could just take me back to this time period in your life, where after years of being on the fringes, as you've described it, being rejected, you and Hinton and your fellow connectionist AI researchers, your contrarian views are proved right. And then all of these actual AI systems, these promising new technologies, are born out of you guys' determination to chase after this idea in spite of the naysaying. How did that feel? I imagine it felt really good. Oh, yeah, I mean, it was great. It was... let me share something emotional. So shortly after AlphaGo, I don't know, maybe 2018 or something... oh, I guess that's when I got the Turing Award with Geoff and Yann. I thought, I've achieved the greatest prize that a computer scientist can expect in their life. And I've accomplished so much, and you know, my career has been so rewarding and successful. What else is there to do? I felt like if I die tomorrow, I'll go with, you know, serenity. You did it. But wait, there's a but. November '22. ChatGPT. It dawned on me: yes, but like, look, this has been a really big step. How far are we from human level? Maybe just a few years, maybe a decade, maybe two. And then what? Like, what's going to happen with this kind of technology? Aren't we going to build machines that we don't control and could potentially destroy us? How do we make sure this doesn't happen? And I didn't have an answer.

The Last Invention is produced by Longview, home to the curious and open-minded. We are an independent outlet focused on giving people the backstory to the debates shaping our future. To support our work, click on the link in our show notes or visit us at LongviewInvestigations.com and become a subscriber.
And as always, it really helps us if you leave a rating and a review on Apple or Spotify or wherever you listen to your podcasts. One last thing to mention, audio from the documentary AlphaGo was used in this episode. Thank you for listening and we'll see you soon.