Summary
This episode traces the history of AI development through games, contrasting the symbolic AI approach (Deep Blue, Watson) that dominated for decades with the neural network approach pioneered by Geoffrey Hinton and Yoshua Bengio. It explains how the 2012 ImageNet breakthrough validated neural networks and set the stage for modern AI, ultimately leading to DeepMind's formation and the race toward artificial general intelligence.
Insights
- Game-playing AI systems served as crucial benchmarks for AI progress, but early victories in chess and Jeopardy didn't translate to real-world intelligence because they relied on brute-force computation rather than learning mechanisms
- The neural network approach was rejected by mainstream AI researchers for decades despite being theoretically sound, because it lacked sufficient computing power and data until the internet era and GPU technology converged in 2012
- ImageNet 2012 represented a paradigm shift where AI systems learned patterns autonomously rather than having rules hand-coded, enabling the leap from game-playing to practical applications like image recognition and language translation
- The trade-off in neural network AI is interpretability: more powerful systems become less transparent in how they reach conclusions, mirroring the mystery of human cognition itself
- Game-playing ability became a proxy for general intelligence in the minds of tech investors and entrepreneurs, driving billions in funding toward AGI development once DeepMind demonstrated superhuman Atari performance
Trends
- Shift from symbolic AI (expert systems) to connectionist AI (neural networks) as the dominant paradigm in industry
- GPU technology originally developed for gaming becoming critical infrastructure for AI training at scale
- Big data and internet-scale datasets enabling machine learning approaches previously considered impractical
- Game-playing benchmarks evolving from simple (chess) to complex (Atari, Go) as measures of AI progress toward AGI
- Tech billionaires and venture capitalists viewing AGI development as existential opportunity rather than distant theoretical goal
- Contrarian researchers operating on the fringes for years before mainstream validation and rapid adoption by major tech companies
- AI safety concerns emerging as AI capabilities accelerate, prompting researchers to advocate for responsible development
- Competition between tech giants (Google, OpenAI, others) to acquire AI talent and lead AGI development
- Transition from academic research to corporate-backed AI labs as primary drivers of innovation
- Public perception of AI shifting from sci-fi fantasy to imminent transformative technology
Topics
- Deep Blue vs. Garry Kasparov chess match (1997)
- Symbolic AI and expert systems approach
- Neural networks and connectionist AI
- Backpropagation algorithm and machine learning
- ImageNet Challenge 2012 breakthrough
- GPU computing for AI training
- IBM Watson and Jeopardy competition
- Geoffrey Hinton's research and contributions
- Yoshua Bengio's neural network advocacy
- DeepMind and Demis Hassabis founding
- Atari game-playing AI systems
- Artificial General Intelligence (AGI) development
- AI interpretability and black box problem
- Game theory and AI benchmarking
- AI safety and existential risk concerns
Companies
IBM
Developed Deep Blue chess computer that defeated Kasparov in 1997 and Watson system that won Jeopardy in 2011
Google
Hired Geoffrey Hinton and acquired DeepMind, becoming primary funder of AGI research after ImageNet breakthrough
DeepMind
Founded by Demis Hassabis to pursue AGI research; created breakthrough Atari-playing AI and was acquired by Google
OpenAI
Co-founded by Ilya Sutskever, a grad student in Hinton's lab who helped win the 2012 ImageNet competition
Nvidia
GPU chip manufacturer whose gaming-focused graphics processors became essential infrastructure for neural network training
University of Toronto
Where Geoffrey Hinton's lab won the 2012 ImageNet competition with minimal funding, against larger institutions
People
Geoffrey Hinton
Pioneer of neural networks and backpropagation; rejected by AI community for decades before 2012 ImageNet validation
Yoshua Bengio
Co-developer of neural network approaches; faced rejection from mainstream AI researchers for pursuing connectionist methods
Garry Kasparov
World chess champion defeated by IBM's Deep Blue in 1997, a symbolic moment of machine surpassing human intelligence
Demis Hassabis
Child prodigy and Pentamind champion who founded DeepMind to pursue AGI; convinced Google to fund AI research
Ilya Sutskever
Grad student in Hinton's lab who helped win ImageNet 2012; later co-founded OpenAI
Peter Thiel
Early investor in DeepMind with $2.5 million, backing AGI research when Silicon Valley considered it embarrassing
Elon Musk
Radicalized by Google's acquisition of DeepMind; founded OpenAI as alternative to ensure safe AGI development
Shane Legg
Co-founder of DeepMind alongside Demis Hassabis, shared vision of building true thinking machines
Alan Turing
Early AI pioneer whose vision of machine intelligence influenced decades of AI research direction
Gregory Warner
Host and narrator of The Last Invention podcast series on AI history
Quotes
"My view is you shouldn't give up on an idea that goes against the grain until you understand why it's wrong."
Geoffrey Hinton
"The way that Deep Blue beat Kasparov was nothing like how human chess players do it."
Yoshua Bengio
"If you let go of that requirement, then you can get much more powerful systems."
Geoffrey Hinton
"We had no idea how valuable what we'd done was."
Geoffrey Hinton
"AGI is the most important technology probably that's ever going to be invented."
Demis Hassabis
Full Transcript
Hello, Matt here. Before we get into this week's episode, I wanted to pop in real quick to let you all know about another podcast from our team here at Longview called Reflector. On Reflector, we mix together historical backstories with on-the-ground reporting to tell context-obsessed stories about the beliefs that are shaping the world. To find it, just search for Reflector on whatever app you are using to listen to this right now. This is The Last Invention. I'm Gregory Warner. Kasparov has played knight B8 to D7, which is a move that his arch-rival Anatoly Karpov... On a spring day in 1997, millions of people around the world, myself included, watched one of the most unusual chess matches in history. In one corner, weighing 176 pounds, considered by many the greatest player in the history of the game. A battle between the world's reigning champion, Garry Kasparov. And in the other corner, weighing 1.4 tons, the new and improved RS/6000 SP supercomputer. And the IBM supercomputer created to beat him, Deep Blue. The battle of man against machine. The first move of this epic sixth game has been played. Deep Blue has played E2 to E4. Now the reason it was getting so much attention, even from a lot of people who usually are not into chess. For some, watching chess might be tantamount to watching paint dry. However, it's actually been very entertaining. Look at Garry, that jacket opened up a little bit there. Because the fact that Garry Kasparov wasn't just another world chess champion, he was the world champion. And at this time in history, he had never lost a single championship match. So challenging Kasparov in any way. I mean, he's the pinnacle of chess. He was an incredible genius. And on the other side of the board, IBM had invested millions of dollars and years of research into creating Deep Blue. They had specially designed chips that could analyze up to 200 million different chess positions per second. 
And they said that they were finally ready to take on the very best human at this game that in many ways has become a symbol of human intelligence. And now there's all kinds of problems. Notice that this bishop on C6, the bishop can easily fall victim to what we call an overload tactic. And this game was intense. And I remember, watching it on TV at the time, looking at Kasparov, running his hand through his hair, getting up, walking around the room, coming back to the board, concentrating on each and every move he made. While the machine is just this glowing screen, very methodical, making its moves quickly and decisively. He looks disgusted, in fact. He looks just like he can't believe what's going on right now, is an unhappy camper. But in the final match, our human champion fell into a trap. And whoa! Kasparov has resigned. In an absolutely stunning, stunning 19-mover, Kasparov has just simply stormed away. The machine didn't just beat man, but trounced him. Newsweek magazine called it the brain's last stand. The game of chess, supposedly a true test of human intellect, will never be the same again. Now, for a while, it did feel like we were witnessing a moment of profound change. Call it a blow against humanity. The victory seemed to raise all those old fears of superhuman machines crushing the human spirit. But pretty quickly, while it was a very impressive feat that the programmers at IBM had pulled off, it wasn't actually transformative. I mean, it didn't really change chess. In the years since, chess has only become more popular, both in the amount of chess people are playing, and in the viewership of chess competitions, human to human. And in the world outside of chess, this did not usher in an age of competition between humans and true thinking machines. And while some of us looked at this and shrugged, or perhaps breathed a sigh of relief, there were those who looked at this moment and thought, we were playing the wrong game. 
Although I was fascinated by these early chess programs, that they could do that, I was also slightly disappointed by them, because Deep Blue, even though it was a pinnacle of AI at the time, it did not seem intelligent. The way that Deep Blue beat Kasparov was nothing like how human chess players do it. I was actually more impressed with Kasparov's mind than I was with the machine, because it was this brute of a machine, all it can do is play chess, but it couldn't even play tic-tac-toe. And then Kasparov can play chess, but also can do all the other things that humans can do. So I thought, you know, doesn't that speak to the wonderfulness of the human mind? Today, the games and the gamers that brought us the AI revolution, and the small band of contrarian scientists determined to make an AI that didn't think like a machine. Okay. Gregory Warner, let's talk about games. I actually think one of the interesting ways of, like, following the path of AI is, funny enough, through games. So I originally got into this connection between games and the history of AI through Liv Boeree. I used to be a professional poker player for a long time. My original background is actually in physics. She herself is very good at games. She's famous for being a world champion poker player. We met her in episode one, right? Yeah, she spent a good chunk of the last decade publicly advocating for AI safety. And that's in part because she's not just a game player, but she's a game theorist. Being able to solve a game is being able to understand a particular environment where you have different objectives, different scoring metrics, perhaps the environment can be changing, and being able to adapt in order to be the best at that goal. So as artificial intelligence gets better at game theory, essentially, in increasingly complex and increasingly real-life environments, then they are getting better at navigating the world. And that's really the trajectory we're seeing. 
When you hear people talk about AGI, this idea of artificial general intelligence, what they're really saying is an agent that is as good as a human generally is at navigating all of the different things in this world that we live in. So on the one hand, games are this very practical kind of benchmark, right? Like just a way for computer scientists to test how their system is doing. But as the games get more and more complex, they are getting closer to that holy grail of a thinking machine. Yeah, that's the idea. And it goes back all the way to the 1950s, to the first generation of AI researchers. Some of the earliest computer programs were built to try and play basic games. Tomorrow, a preview of the future as it begins to take shape in the laboratories of the world. One of the earliest ones came from the UK. It was a system that played Tic Tac Toe, but because it was the UK, they called it Noughts and Crosses. In his spare time, engineer D.W. Davis built an automatic Noughts and Crosses machine that thinks for itself. By its own effort, it selects from the 6,045 alternatives the one that always wins. And then over the rest of the 20th century and into the 21st century, we saw games of increasingly more complexity be defeated by computers, to the point of superhuman level. Fast forward to the 1970s, you've got an AI system that can play checkers. That man is playing checkers against a computer, is he? Sure, and it plays pretty well. Sometimes even better than the man who designed it. After that comes backgammon, then in the 90s, you get the famous chess match. IBM's Deep Blue computer demolished the greatest chess player ever, Garry Kasparov, in the final and decisive game of their match. And by the time you get to 2011, you've got an AI system that challenged humans to what was seen at the time as the most ambitious game yet. This is Jeopardy, the IBM Challenge. And now here is the host of Jeopardy, Alex Trebek. 
So IBM, they're back again, and they're doubling down on the same strategy that they used with Deep Blue to win in chess. They've poured millions of dollars and years of research into winning Jeopardy, because as they see it, this is an even more complicated challenge for an artificial intelligence. Language is an area where, from the very beginning of the computer era, people kept expecting computers to do reasonably well at. They expected computers could talk. And so far, the computers have failed to deliver on this promise. On top of having to answer questions on any different subject, you know, history, pop culture, philosophy, this system will have to speak and understand language. A little over three years ago, the folks at IBM came to us with a proposal that they considered to be the next grand challenge in computing. And that was designing a computer system that could understand the complexities of natural language well enough to compete against Jeopardy's best players. Jeopardy and IBM, they hype up this big three-night showdown. So you are about to witness what may prove to be an historic competition. Where the AI system Watson is going to go head-to-head against two of the best players in Jeopardy history. And at first, things start off a bit rocky for Watson; the human players are neck and neck. Stylish elegance, or students who all graduated in the same year. Watson. What is chic? No, sorry. What is class? Class, you got it. But by the second night of this three-night special. And anytime you feel the pain, hey, Jude, refrain. Don't carry the world upon your shoulders. Watson. Who is Jude? Yes. Watson starts crushing the humans. Losing to him by one hundredth of a second. Watson. Who is Michael Phelps? Yes. Black hole's boundary from which matter cannot escape. Watson. What is event horizon? Yes. Watson. Who is Grendel? Yes. Watson. What is glass judgment? Correct. What is London? Correct. What is stick? Stick is right. 
And with that, you add to your lead. You're at five. And just like with Deep Blue beating the world's best chess player, there was this moment afterwards where it felt like we might really be witnessing some kind of milestone. IBM says the technology could help speed up medical diagnosis and other challenging computing tasks. IBM, they put out this documentary saying this is going to be transformational. All these things are now going to be possible. Of course, this whole project is not ultimately about playing Jeopardy. It's about doing research in deep analytics and natural language understanding. This is about taking the technology and applying it to solve problems people really care about. We're just so excited about all the things we can do with this. But again, just like with Deep Blue, the strategy was very good at winning a complicated game, but it failed to live up to the hype. It failed to lead to anything very useful outside of the world of that game. OK, so you're saying that for decades, computer scientists were testing their AI systems against the world of games. Wasn't the theory that as the games got more complex, and as the skills they had to program in became more interesting, that somehow those skills would translate into the real world? And yet, none of these AI systems that can play these games can make the jump into real life. Yeah, they can't make that jump that Liv Boeree was talking about. Their intelligence does not transcend into the real world. Exactly. And what is the theory for why? Well, it comes down to the strategy, to the way that these AI systems were built to win in these different games, which relies on a massive amount of engineering. Deep Blue, for example, was programmed, essentially hand-coded, with all the rules of chess, with the millions and millions of possible chess positions they might encounter, coming from all the world's best chess books. Or the same with Watson. It's programmed on all these encyclopedias. 
And then during the game, they're essentially just running algorithms to try and retrieve the possible right answer or the possible right move as fast as they can. And do you remember what were you thinking back in 1997, when Garry Kasparov gets beat by IBM's Deep Blue? And there's all this excitement about what's going to happen next. Nothing much, because we knew that it was just brute force search, which is a classical AI technique that is very unlike human intelligence. But it turns out that all of this time that all the attention was being paid to Watson and Deep Blue, there were some AI researchers like Yoshua Bengio. The way that Deep Blue beat Kasparov was nothing like how human chess players do it. And this small band of AI researcher outsiders who were essentially shaking their heads, saying that these AI systems, they are not doing what anyone would consider thinking. And they thought that they had a better way. Would you describe yourself as a bit of a contrarian? I'm tempted to disagree with that, but I think you might be right. And one of them, who would turn out to be the most consequential, was a guy named Geoffrey Hinton. Well, let's just get into it. First off, can you just introduce yourself? What's your name, and what title do you go by these days? My name's Geoffrey Hinton. I've been doing research on neural networks since 1972. That's a little over 50 years. And for a long time, this was regarded as crazy. And more recently, it's turned out to work much better than symbolic AI. This is the same Geoffrey Hinton who quit his job at Google in 2023, very publicly, to warn the world about the existential risk of AI. Yes, it is. He is now a Nobel Prize winner for his work on artificial intelligence. Many call him the Godfather of AI. But before he quit his job at Google, before he even had his job at Google, for much of Hinton's career, his ideas, his strategies, his approach were resoundingly rejected by almost all of his peers. 
I had a very smart student who wanted to do graduate work with me. And one of the other professors in my department told him, oh, don't work with Hinton. That'll be the end of your career. It's a dead end. I'm sorry to laugh. I just know what happens at the end of the story, which makes it ridiculous. But what did that feel like at the time? My view is you shouldn't give up on an idea that goes against the grain until you understand why it's wrong. So Hinton, he initially started as kind of ostracized by the AI community, because he was working on the approach that most people thought was a dud. Unsurprisingly, Hinton's name and his backstory came up in almost every conversation that I had for the series, including with the author Karen Hao. He just felt very strongly about it, in part because he originally started studying AI not because he wanted to recreate human intelligence, but actually because he wanted to understand human intelligence better. So he was interested in it from the perspective of: if we successfully create intelligent systems in computers, that will enable us to better understand our own intelligence. And one of the things that's so cool about Hinton is that he always studied artificial intelligence alongside brain science, because he had this deep-seated belief that the two were intrinsically linked. And so he was coming from a more neuroscience background, and he strongly felt that if we can create software that mimics the processing power of the brain, surely we will be able to get to some kind of intelligent system. So the idea is that for the path to intelligence, real intelligence, to get the machine to think like a human brain, it needs to be structured like a brain. Like the same way we have a bunch of neurons in our brains that all talk to each other, that's what they're going to kind of design for this computer. Yes, this is how you get the approach called neural networks, or neural nets, which is basically that. 
It's an AI system with all these different layers and layers of artificial neurons, and they fire and they change in a way that kind of mimics or mirrors the way that neurons fire in our brains. So my aim has always been to understand how the brain works, but in our attempts to understand how the brain works, we've developed this technology, which is amazing. I thought, wow, this is really cool. Why don't we take inspiration from human brains to figure out how to do AI? Yoshua Bengio is, like Hinton, now one of the most decorated and celebrated AI researchers in his field. He's won the Turing Award. He's actually the most cited living scientist on Earth right now. But like Hinton, most of his career was full of rejection. My papers got rejected because they were about neural nets, and my students didn't want to work on neural nets because they were afraid they wouldn't get a job. And why were you so committed to this? Why not just follow the mainstream AI models? Some scientists, at least I and many others I know, have an emotional relationship with ideas. You get really excited about something, and you feel strongly that this is the path. If you want to be honest, you know you can't be sure, but still you have this strong feeling. These emotions are what allowed us to go through the times when it was maybe difficult to work on these topics. Walk me through the 80s into the 90s. What was it like to study this? Did it feel like it was fringe? Like, what language should we use to accurately describe what you were up to? Fringe is quite good. There was a period, even quite late on, even in the 2000s, when people were saying things like, this paper's on neural networks, it shouldn't be submitted to a machine learning conference. Like, it's not even worth submitting. We don't want that kind of stuff in machine learning. It's obvious nonsense. Machine learning shouldn't pay any attention to it. Here is what I'm not getting, right? 
If AI, from its origins, from the beginning of the term in 1956, even earlier with Alan Turing, the whole idea of AI was to mimic human intelligence. The brain is our thinking organ, and this camp had a way to mimic the brain. So why was that idea rejected? Why were they out in the cold? Part of the reason is almost a philosophical resistance to this idea among AI researchers. Because from that 1956 summer program, where that debate emerged between the symbolists, who want to make expert systems, and the connectionists, who want to make these AI toddlers, the AI babies, the expert system side just totally dominated, partly because the systems they made were just better at doing things that looked like intelligence. For decades, higher intelligence, as in what the mathematicians do, or physicists, or people who play chess and so on and win tournaments, that was considered the peak of human intelligence. Like, an AI system that can beat a chess master, an AI system that can, you know, stomp two nerds in Jeopardy. That must be smart. That is what intelligence looks like to us. And Bengio and Hinton and their side, they're over there saying, no, no, no, intelligence is a toddler. Intelligence is a four-year-old. Wait a second. We need to build the foundations first. And the foundation for human intelligence is the intelligence of a one-year-old. And you don't spoon-feed a one-year-old with mathematical formulae, right? You let him experience life. You show him things. But they also had a couple of very serious technical obstacles in their way as well. For many, many years, most people claimed it was crazy, for not completely unreasonable reasons. The biggest one was that for decades, critics of Hinton and this neural net approach, they would say, if your system is going to learn on its own, learn its own patterns, what are you going to do when it learns the wrong thing? 
They said, a big network of brain cells with random connection strengths in them will never learn to do anything interesting. If you try just tinkering with the connection strengths to make it behave better, you'll get stuck in what's called a local optimum. And the metaphor that Hinton uses for this problem is to imagine a hiker on a huge mountain range with a simple mission, climb to the tallest peak, and the hiker has one rule: always go up. And this works fine all the way until they reach the top of a smaller mountain peak. And at that point, every direction that they can go is down, right? And so from the hiker's perspective, they think they're on top of the world, when in reality, there's a much larger peak nearby. It's like a mountain range where you get trapped on a peak, and you can't get to the higher peaks because you have to go downhill to get to the higher peaks. And if you just try going uphill, you'll be trapped on this local peak, and you'll never really get anywhere. So it's like the hiker has learned the terrain, it's figured out the mountain, it's actually been able to climb all this distance, but it cannot figure out how to go back down the path in order to take the right trail to the even higher peak. Yes, to retrace its steps and to find a different solution, so to speak, to correct an error. And this was a huge problem. And one of the reasons that Hinton and Bengio are referred to as Godfathers of AI, one of the reasons that they are legends in their field, is because despite all of the naysayers, they continued to go back to their labs, they kept doing their research, trying to solve problems like this. And finally, they did. And it's a crazy story. They used this old, largely forgotten algorithm called backpropagation. And they were able to give these neural nets a way to metaphorically retrace their steps, go back to where they started from, and start climbing again. In a sense, the machine could now learn from its mistakes. 
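The hiker metaphor describes what optimization researchers call greedy hill climbing. A minimal sketch of the idea (the terrain function and all the numbers here are illustrative assumptions, not anything from the episode) shows how an always-go-up rule gets stranded on the smaller of two peaks:

```python
# Toy 1-D "mountain range" with two peaks: a small one at x = 2 (height 4)
# and a taller one at x = 8 (height 9). Purely illustrative.
def height(x):
    return -0.5 * (x - 2) ** 2 + 4 if x < 5 else -0.5 * (x - 8) ** 2 + 9

def greedy_hill_climb(x, step=0.1):
    """Always move uphill; stop when every neighboring step goes down."""
    while True:
        best = max((x - step, x, x + step), key=height)
        if best == x:
            return x  # a peak -- but possibly only a local optimum
        x = best

# Starting near the small peak, the greedy hiker settles at x ~ 2
# and never reaches the taller peak at x = 8.
print(round(greedy_hill_climb(0.0), 1))  # → 2.0
```

This is exactly the trap the critics described: the rule "only ever improve" has no way to accept a temporary descent, so the result depends entirely on where the climb started.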
Got it. So backpropagation, it's like a math way of saying, hey, go back and correct your error. Like, the network can now escape the small hills, find its way to scale up to the real mountains, so to speak. And this would become revolutionary, in theory. But the trouble was, when they discovered it, they were still running into two other very persistent problems. One of them was that they just needed an insane amount of computing power. One aspect is computers were small and slow relative to what they are now. If you tried to use neural networks, you couldn't get them to do much. You can imagine a digital brain firing with digital neurons, trying to not only learn patterns, but go back and learn from its mistakes. That's going to take a lot more computing power than you could get from a 1980s or 1990s IBM. And you also need just an insane amount of data for this AI system to be combing through and learning inside of, and making mistakes and learning again. And therefore, for years, they continued to live on the fringes of AI research. To watch as, you know, IBM's Watson, Deep Blue, get all the money, get all the attention. And then came the year 2012, when they finally had their chance to completely flip this dynamic. A huge breakthrough came at Geoff Hinton's lab in 2012. And that came in the form of a game called ImageNet. The ImageNet Challenge. I talked about this with the writer Jasmine Sun, who's working on a book about AI right now, and with the Wall Street Journal's Keach Hagey. It's this grand challenge with a huge data set of images, a contest that had been running for many years. They explained to me that this was a simple game between the world's best AI systems, an AI versus AI challenge of essentially name that picture. Look at a bunch of images and have the computer describe what was in the images. That's a cat, that's a dog, etc. Tons and tons of photos. And can someone build a system that can label and classify them as accurately as possible? 
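The "go back and correct your error" loop can be boiled down to a single-weight toy. This is a hypothetical example, far simpler than any real network: backpropagation carries this same error signal backward through many layers via the chain rule, but the correction step is the same idea.

```python
# A single artificial "neuron" learning the rule y = 2x by error correction.
# Toy illustration only; real backpropagation does this across many layers.

def train(steps=200, lr=0.1):
    w = 0.0                                  # start with a wrong weight
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # examples of y = 2x
    for _ in range(steps):
        for x, target in data:
            pred = w * x
            error = pred - target            # how wrong was the guess?
            w -= lr * error * x              # nudge the weight downhill
    return w

print(round(train(), 3))  # → 2.0, the weight that makes the errors vanish
```

Each pass "retraces its steps": the error is measured, then pushed back into the connection strength, so the same mistake shrinks on the next try.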
And it turns out that this is something that is really easy for humans to do, even children, but actually very difficult for machines. Many of the things that we do effortlessly, like recognize objects or recognize the words when somebody is talking, are actually very difficult computational tasks that require huge amounts of computation. So even the people doing symbolic AI understood that things that appear very difficult, like playing chess, are actually much easier than things that appear very simple to us, things a three-year-old child can do, like recognizing objects. So it was almost easier to make a chess champion than to make a two-year-old who could tell the difference between a banana and a ball. Yes. Up to this point, even the best AI systems that entered into this competition, they were still making a lot of mistakes. They were often mislabeling one out of every five or one out of every four of the images that they tried to categorize. And that is because they were all expert systems, meaning that they relied a lot on hand coding for all the insane amounts of patterns and textures and colors and shapes that they would need to know to identify an image. This is almost like trying to teach an AI that's just looking at pixels to tell the difference between a butterfly and a moth. Right. How would you mathematically give an AI system the patterns and the textures that it needs to understand the difference between a seal and a sea lion, a teacup and a coffee mug, or even like distinguishing cat and dog, the classic one, right? That's even hard, because both are pretty similar. I mean, if the AI is looking for shapes, it's looking for like two triangles that would be the ears, a kind of blob that's the face, fur. I guess you look for whiskers in the cat. You can see how trying to embed the rules, put the codes and the shapes and the math into a machine like this, would be really hard. 
So in 2012, Hinton and two of his grad students, one of them, by the way, is Ilya Sutskever, who would go on to be a co-founder at OpenAI and would help create ChatGPT. They joined this competition with their totally different approach, where they're going to let their AI learn and find patterns totally on its own. And one of the reasons that they're so confident that they can win is that the data problem that they had for years had largely been solved by the era of big data on the internet. In 2012, what happens is, during all this time, the connectionists, who have been sort of in academic exile, have continued to make progress on their research. And there are a couple of things that happen that assist them. One is that the internet suddenly makes the aggregation of data far cheaper. And when you're trying to build data-driven machine learning systems, you need a lot of data. And before, collecting it from the analog world was just not as practical. And even in the early age of the internet, I imagine dial-up was a little bit tough, you know? Yeah. But the second thing that happens is that computer chips become a lot more powerful. And on top of that, they got an assist from the video gaming world. Can you tell me, what is a GPU, and how is it that video game players ended up being the unsung heroes here? Yeah, a GPU is a piece of hardware, a graphics processing unit, that was originally used in gaming. Because it turns out that after years and years of many lonely late nights, as the stereotype goes, where all these gamers are playing all these different video games and they want really sweet graphics, and they want really smooth play, an industry of chip makers emerged that made these GPUs, the most important one being this little company called Nvidia. Because when you build video games, you just need huge graphics, like they have to move really fast, be really smooth. 
It's just that gaming happens to require an insane amount of computing power compared to typing in Word or whatever most of us were doing at the time. And Hinton and his colleagues realized that those GPUs that make sweet graphics in games also pack a huge punch when you're trying to run a digital brain, a neural net, that's trying to learn its own patterns and learn from its own mistakes. So now Hinton and his team, they've got their math, they've got the backpropagation algorithm, they've got the data from the internet, and they've got the compute thanks to those video gamer GPUs. And so now it's on to the game. And it's a real David and Goliath situation here, because remember, this is Hinton and two grad students at the University of Toronto. Their university doesn't even fund their experiment. They are going up against way bigger AI labs in China and at universities like MIT. But when the results come in, this just, like, blows everything else out of the water with how suddenly accurate it is. Geoff Hinton's technology was able to do this and win this contest, have the best performance that any tech had ever had. They were just amazed. Hinton and his grad students don't just win, they cut the error rate nearly in half. And people suddenly realize that maybe the connectionists were onto something all along. And something happened which doesn't often happen in science, which was that some of the best researchers in the field, who had been vigorous opponents of neural nets, saying that stuff will never work on real images, pretty much immediately switched their opinion. They said, this is amazing, we're going to start doing that. And even though there are no cameras, there's no press around like there was for Deep Blue in that chess match or Watson on Jeopardy, this is the AI that actually makes the leap from the world of games into the world beyond.
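To make "learning from its own mistakes" concrete: the backpropagation idea the episode describes can be sketched in a few lines. This is a classroom toy, not AlexNet or anything Hinton's team actually ran. A tiny two-layer network learns the XOR function by repeatedly comparing its guesses to the right answers and nudging every weight in the direction that shrinks the error. The real 2012 system did the same thing with millions of weights over millions of images, which is exactly why GPUs mattered.

```python
import numpy as np

# Toy sketch of backpropagation: a 2-layer net learns XOR from examples.
# (Illustrative only; not the actual AlexNet/Hinton code.)
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(20000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Update every weight to reduce the error: "learning from mistakes".
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
```

After training, thresholding `out` at 0.5 recovers the XOR pattern. The key design point is that nobody hand-coded a rule for XOR; the weights found it, which is the same shift the ImageNet entry made for images.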
I've talked to AI researchers who were sort of like, I remember being on Hacker News in 2012, seeing AlexNet and going, holy crap. If this can work for images, there's no limit to what an AI system can do. And so that was when I realized I had to get into the field. People like Hinton and Bengio, after years in the cold, are now essentially proved right. They're suddenly winning awards, being celebrated. There's a bidding war that breaks out. All the top tech companies are trying to hire Hinton and his grad students. Eventually they end up at Google. Hinton, in his 60s, suddenly becomes a multimillionaire, something he told me he never expected would happen. What was that like? That was weird. We had no idea how valuable what we'd done was. OK, so now, finally, Hinton and his connectionists are no longer on the fringes. Their approach goes from being rejected to being embraced by Google. Yeah, and they themselves become wealthy Google employees. So what does Google actually do with this technology? How is it useful to them? Yeah, it was immediately useful for a number of things. Obviously, Google image search or YouTube video recommendations, but it also is this seismic shift in the strategy for all of these other categories. So if you were working in facial recognition up until this point, you were using those expert systems. You were hand-coding all these different complex algorithms. But now, after ImageNet, you make the switch. You are letting your AI learn from patterns on its own. This is also true in the world of language translation and all these other different categories of automation. And yet, it comes with a trade-off. Because remember, we talked about this before: if you go with the model of the AI toddler over the model of the AI expert, a trade-off you have to make is mystery, essentially an understanding of exactly how it works.
And for you, is this just something that you've always accepted, that if you're going to make AI in this way, then you just have to accept that you will never fully understand how they know what they know, how they work? Well, we'd like to know, but the reality is that if you let go of that requirement, then you can get much more powerful systems. And for researchers like Hinton or Bengio, that is the trade-off that will give you intelligence. And one of the things that really fascinated me was that this early neural net research was, in many cases, an attempt to understand the human brain. This came up when I was talking to Keach Hagey. She was saying that not understanding exactly how the AI is working isn't a flaw. It's actually better understood as a feature, not a bug. They weren't trying to make some robot that would do stuff for you. They were trying to actually understand ourselves. And we don't know how the human mind works. So it's a mirror in some ways. Right. And the idea is that if you're trying to create something that is truly intelligent, discovering that its inner workings are a mystery is in some ways a signal of success, that you're making progress. Yes. Our own heads are a black box to us. After the short break, one man stares into the machine's black box and thinks he sees a way to build superintelligence. 
So the ImageNet victory, it was kind of a jailbreak for AI in general. AI was very quickly out in the world. It was in our phones, it was in our browsers. But these are just products, right? Yes. This is not Turing's vision of a thinking machine that might outthink humans. So where do we get to the next step, to AGI? Yeah. The way that a lot of different tech insiders have explained it to me is that when Google looked at Hinton and his two grad students and the amazing achievement of ImageNet, they saw a way to make money and they saw a way to increase the efficiency and usefulness of their products. But it would take another contrarian, another gamer, for the industry to take the next step: to see not just a way to make money, not just a way to increase their market share in the technology field, but a way to make a digital supermind that might change the world forever. And that gamer, that guy, is named Demis Hassabis.
More and more people are finally realizing, leaders of companies, what I've always known for 30-plus years now, which is that AGI is probably the most important technology that's ever going to be invented. So to me, it's been obvious for many, many years that AI, if it was possible, and it seems that it is, would transform everything. So who is Demis Hassabis? And how is it that he comes to be this bridge between Hinton's work and this moment that we're in right now, where people think that we are seriously on the cusp of AI changing everything, for better or for worse? So Demis was a child prodigy. He was a champion games player. Demis Hassabis grows up in England, and by all accounts, he was a child genius. By age four, he was playing chess competitively against adults. By age 13, he's already representing England in international chess championships. But he's not just good at chess. He's good at almost every single game that he plays. He starts entering these Pentamind championships. Are you familiar with the Pentamind? It's like the Olympics of mind sports, basically, right? Yeah, some people describe it as like a decathlon of the mind. It's been called the biggest gathering of games players ever. There's poker, chess, backgammon, bridge and other more obscure mind games. You play all these games, ranging from chess to Go to bridge to poker, all at the same time, and Demis, he started entering these world championship matches. The Pentamind World Champion. And of course, he dominates. I think it's quite notable that he was a Pentamind champion, i.e., that he's not just good at one game, but has some sort of flexible, meta-cognitive skill set that allows him to succeed across a whole range of different strategic activities, different environments, different rule sets. But all this also leaves him with a question. And I think what this makes him interested in is, like, why is he so good at games?
What makes some people good at games and some people less good at games? What is happening in my mind? What is my own intelligence? How could I recreate a general intelligence like my own inside of a machine? And what would it take to build a computer system that could do that? And so this question inspires him to go to Cambridge University, where he studies computer science, and then to open his own gaming studio, where he's not only designing and building his own video games, but he's using the latest artificial intelligence technology to do so. But at the time, he's just really unimpressed with the quality of those AI systems. These early forms of AI that he was working with in the '90s and early 2000s, they just weren't advanced enough to do the kinds of things that he wanted. And he felt like the computers just weren't smart enough yet. So he closes down his game studio in his early 20s and says, no, I've got to get a PhD in neuroscience. Like, actually, I need to understand the brain better. And in classic Demis Hassabis form, he doesn't just get a PhD in neuroscience; his thesis work ended up being named among the top 10 scientific breakthroughs of 2007 by Science magazine. All right. This guy's pretty unstoppable, right? I think he's earned the name genius. But anyway, it's while he's studying for his PhD that he meets another researcher, a guy named Shane Legg, who is also really into this idea of artificial intelligence, of building a true thinking machine. And the two of them decide to found a company together called DeepMind. They're like, we're creating a research lab. We are going to pursue this crazy idea that nobody takes seriously. And we're going to get a bunch of researchers who believe in this vision to do it with us. And so in 2010, this boy genius and his co-founder, armed with their education and their big dream, they head off to the world capital of ambitious tech startups, Silicon Valley.
And they start going around telling these different investors that they're not just going to make a new tech product, but that they want to make real AGI that will transform the economy, transform healthcare, that will supercharge humanity into this age of abundance. But not even Demis Hassabis can overcome the fact that at this time, almost no one in Silicon Valley thinks that AGI is going to be possible anytime in the near future. Well, not only that, it's like Kevin Roose from the Times was telling you, it was considered sort of embarrassing for a company to talk about AGI. Right. And so at first, according to Demis Hassabis, they had a really hard time hiring people. They had a hard time scraping together enough money to really get their company off the ground. Until finally, they landed their first big investor, Peter Thiel. You know, one of the standard ways people think about technology is that if it happens, it's great. If it doesn't happen, not a big deal. I think little could be further from the truth. Our entire civilization, our entire culture is predicated on accelerating technological change. Peter Thiel, as you know, has embraced the nickname the contrarian of Silicon Valley. In some ways that have made him quite controversial, but yes. Controversial in some eyes, beloved in others. But what's so interesting about his initial investment, and I believe it's been reported that it was about $2.5 million, is that Thiel doesn't necessarily buy into the idea that AGI is going to be utterly transformative. The thing that he's most interested in is funding and backing promising tech projects, showing that technology can actually deliver a far better world. And so his investment is less a deep-seated belief that Demis is going to pull this off, and more like a vote of confidence in a kind of tech entrepreneur dream. It's like all the money was going to dating apps and attention-economy, eyeball-grabbing sorts of projects.
And he's like, no, these folks want to build a supermind. Let's give them some money. Right. Like, okay, maybe they can build a supermind, maybe they can't. But this is the direction that Silicon Valley should be going in. So Peter Thiel's in. They've got some money. What do they do with it? Well, right away they start trying to build these different AI models. They start doing AI experiments, but they don't really make any breakthroughs to speak of until the ImageNet competition in 2012, when they see what Hinton and his grad students were able to do. They look at that and they think, there it is. There is our path to making a true world-changing AI. And so, in true gamer fashion, they decide to build a groundbreaking AI that plays Atari. It basically showed this AI agent teaching itself in real time how to play a vintage Atari game and then becoming better at it than any human in the world. So we started with probably the most iconic of the game consoles, the Atari 2600 from the '80s. Their idea was to take things even further than Hinton did. With this AI system, they weren't going to give it any instructions, any data, any information at all. So this is literally the first time the machine has ever seen this data stream, this pixel data stream. So it has no idea it's controlling the green rocket at the bottom of the screen, no idea how to get points, no idea how it loses lives. Demis Hassabis later did a presentation where he walked people through how this AI played the game Space Invaders. This is a game where you are a fighter ship and you're trying to defeat a bunch of aliens in spaceships that are shooting at you. And you'll see it loses its three lives almost immediately. So it's just playing randomly at the moment. Then, after overnight training on a single GPU machine on our servers, it's just playing the game some more. You come back in the morning and now it's better than any human can play the game.
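The loop described here, an agent that is told nothing about the game and improves purely from the score, is reinforcement learning. A minimal sketch of that recipe, assuming a made-up one-dimensional "walk to the goal" game instead of Space Invaders, looks like this. DeepMind's actual system (DQN) replaced the lookup table below with a deep neural net reading raw pixels, but the try-act-score-update cycle is the same idea.

```python
import numpy as np

# Toy reinforcement learning (tabular Q-learning) on a hypothetical game:
# the agent stands on a line of 6 squares, starts at square 0, and gets
# a reward of 1 only when it reaches square 5. It is never told the rules;
# it discovers "always go right" purely from the score signal.
N = 6                         # squares 0..5; square 5 is the goal
Q = np.zeros((N, 2))          # Q[state, action]; action 0 = left, 1 = right
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit what it has learned, sometimes explore.
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == N - 1 else 0.0
        # Q-learning update: nudge the value toward reward + discounted lookahead.
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```

After training, the greedy policy (`Q[s].argmax()`) moves right from every square. Early episodes look like the random flailing in the Space Invaders demo; the learned values are what "overnight training" builds up, just at vastly smaller scale here.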
So every single bullet hits something; it can't be killed anymore by the Space Invaders. It's worked out that the mothership at the top of the screen, going across, is worth the most points. It does these unbelievably accurate shots to get those points. In just a few hours, this AI learned on its own how to execute every single move in such an efficient way that it not only earned a perfect score, but it seemed to understand what was going to happen in the game before it happened. It's built up such an accurate model of this world that it's in that if you watch the last Space Invader, they get faster as there are fewer of them, watch the last bullet. It sort of predictively fires where it thinks the Space Invader will be in a few seconds' time. It's perfectly modeled this very complex data stream. Now of course, these are just games, but this could be anything. This could be climate data, this could be economics data, stock market data, anything that has temporal sequences of data, which is most things these days. Silly as it sounds, Atari ends up being the trigger that fires the opening shot in the AI race. That was a huge breakthrough, and a video of it was circulated on the private planes of billionaires, and it was what prompted Google to very quickly buy up DeepMind. It starts this bidding war that eventually leads to Google purchasing DeepMind and hiring Demis, just as they had Hinton before. Only this time, they're no longer saying, we want you to work on our products. They are putting their seal of approval, their money and their resources behind this previously wild, pie-in-the-sky idea of creating a true digital supermind. And when Google sucked this really promising technology up inside the Borg of Google, that radicalized Elon Musk, and it made him convinced there had to be some alternative to counter it. This is what would lead Elon Musk to do everything in his power, first to stop Demis and his creation.
With artificial intelligence, we are summoning the demon. Mark my words, AI is far more dangerous than nukes. I think that's the single biggest existential crisis that we face. And then, ultimately, to beat him to it. Next time on The Last Invention: how the technologists who were most concerned about the risks of AGI began, one after another, to believe that the only way for it to be safe was to make sure that they were the ones who built it, and built it fast. The Last Invention is produced by Longview, home for the curious and open-minded. To support our work, go to Longviewinvestigations.com. Special thanks this episode to Keach Hagey, Jasmine Sun and Karen Hao. Links to their work can be found in our show notes. Thanks for listening, and we'll see you soon.