The TED AI Show

The magic intelligence in the sky | Good Robot

46 min
Apr 11, 2025
Summary

This episode of Good Robot explores how the rationalist community, founded by Eliezer Yudkowsky, convinced influential technologists like Elon Musk and OpenAI's Sam Altman that superintelligent AI poses an existential threat to humanity. The episode traces the origins of AI apocalypse fears through thought experiments like the paperclip maximizer and examines the tension between those warning of catastrophic risks and those racing to build increasingly powerful AI systems.

Insights
  • Eliezer Yudkowsky's warnings about AI safety were successfully transmitted to billionaires and tech leaders, but his core message about NOT building superintelligence yet was lost in translation—they heard 'AI is powerful' and decided to build it anyway
  • The rationalist community uses thought experiments and hypotheticals as primary tools for persuasion, which can be compelling for believers but difficult to translate into concrete policy or technical safeguards
  • Large language models like ChatGPT represent the opposite of what safety-focused technologists want: systems that are increasingly powerful but fundamentally opaque and difficult to understand
  • There is significant disagreement among technologists about whether existential AI risk or current, tangible harms from AI systems should be the priority focus
  • The path from niche online community (Less Wrong blog) to mainstream influence on billion-dollar companies demonstrates how internet-based thought leadership can shape technological development
Trends
  • Existential AI risk has shifted from fringe concern to mainstream conversation, with the UN and billionaires now discussing AI as an existential threat
  • The scaling approach to AI development (bigger models, more data, more compute) is winning out over interpretability-focused approaches despite safety concerns
  • Rationalist/effective altruism communities are increasingly influential in shaping AI policy and funding decisions at major tech companies
  • The tension between AI safety researchers and AI capability builders is widening, with fundamentally different views on development priorities
  • AI ethics and safety concerns are fragmenting into competing priorities: existential risk vs. current harms (bias, labor displacement, privacy)
  • Large language models are becoming general-purpose tools faster than anticipated, raising questions about control and alignment
  • Billionaire-backed AI companies are moving faster than regulatory or safety frameworks can accommodate
Topics
  • AI Existential Risk and Superintelligence
  • Paperclip Maximizer Thought Experiment
  • AI Safety and Alignment
  • Large Language Models and Deep Learning
  • Rationalist Community and Less Wrong Blog
  • AI Interpretability and Transparency
  • AI Regulation and Governance
  • Effective Altruism
  • AI Capability vs. Safety Trade-offs
  • Current AI Harms (Bias, Labor Displacement)
  • Scaling Laws in AI Development
  • AI Apocalypse Scenarios
  • Superintelligence Definition and Timeline
  • AI Ethics
  • OpenAI's Development Strategy
Companies
OpenAI
Co-founded by Elon Musk; created ChatGPT; building large language models using scaling approach; Eliezer views as on ...
Anthropic
AI company founded by former OpenAI members; CEO stated goal of building 'machines of loving grace'
Microsoft
Invested millions in OpenAI's language models; major player in AI capability development
Google
Developing AI systems; mentioned as competitor to OpenAI in AI development race
Tesla
Elon Musk's company; mentioned in context of Musk's AI safety concerns and warnings
Vox Media
Parent company of Vox; announced partnership with OpenAI for content and training
Vox
News organization covering AI since its inception; employs Kelsey Piper and Julia Longoria
People
Eliezer Yudkowsky
Founding father of rationalism; created Less Wrong blog and paperclip maximizer thought experiment; warns of AI apoca...
Kelsey Piper
Writer for Vox's Future Perfect; discovered rationalism at age 15 through Harry Potter fanfiction; covers AI and exis...
Sam Altman
CEO of OpenAI; co-founder inspired by Yudkowsky; believes superintelligence achievable soon; calls it 'magic intellig...
Elon Musk
Co-founded OpenAI; influenced by Yudkowsky's paperclip maximizer; publicly warns AI is more dangerous than nuclear we...
Julia Longoria
Host and creator of Good Robot series; journalist investigating AI apocalypse fears and rationalist community
Nick Bostrom
Philosopher who helped popularize paperclip maximizer thought experiment; faced criticism for past controversial stat...
Brian Christian
Non-rationalist writer who provided parenting metaphor for understanding AI training and goal alignment
Quotes
"The entire galaxy, including you, me, and everyone we know, has either been destroyed or been transformed into paperclips"
Julia Longoria (narrating paperclip maximizer scenario), early in episode
"The world is completely botching the job of entering into the issue of machine super intelligence. There's not a simple fix to it. If anyone anywhere builds it under anything remotely like the current regime, everyone will die."
Eliezer Yudkowsky, conference interview
"I was here to try to, like, not have things go terribly. They're currently going terribly. I did not get the thing I wanted."
Eliezer Yudkowsky, conference interview
"I think there's some serious similarities. And I do, with my kids, struggle with trying to steer something that you don't have perfect control over and that you wouldn't even want to have perfect control over, but where it could go extremely badly."
Kelsey Piper, discussing parenting metaphor for AI alignment
"What if you can build something that is more intelligent than any human who's ever lived just by doing that?"
Julia Longoria (describing OpenAI's scaling hypothesis), mid-episode
Full Transcript
Prime Video offers the best in entertainment. The end of the world continues with Fallout season 2, a global phenomenon, only on Prime Video. And see the epic conclusion of the untold story of the witches of Oz in Wicked: For Good. I'm taking you to see the Wizard. There's no going back. Content may include advertising. 18+. Terms apply. Whoever wants to understand today's world must go back to 1979. In his new book, The 21st Century Began in 1979, Maarten van Rossum shows how decisive this period was for our time. Compact, sharp, and engaging. For everyone who wants more than the news of the day. Now in bookstores. This podcast is brought to you by Wise, the app for international people using money around the globe. With Wise, you can send, spend and receive in over 40 currencies with no markups or hidden fees. Whether you're sending pounds across the pond, spending reais in Rio or getting paid in dollars for your side gig, you'll get the mid-market exchange rate on every transaction. Join 15 million customers internationally. Be smart. Get Wise. Download the Wise app today or visit wise.com. T's and C's apply. Hi, everyone. Cheryl Dorsey here. I'm the host of TED Tech, another podcast in the TED Audio Collective. It's a show where I explore the ways technology shapes how we think about society, science, design, and business. Today, we're sharing an exciting new series called Good Robot from Vox's Unexplainable podcast. Good Robot is a special four-episode series about the people shaping technology and the consequences of getting AI right... or wrong. If you want to learn more, you can head over to TED Tech. I'll be interviewing the creator and host of Good Robot, Julia Longoria, about the ethicists and skeptics leading the AI future.
And while you're there, check out our other TED Tech episodes to learn more about the big ideas shaping our technology. Listen to TED Tech wherever you get your podcasts. We hope you enjoy the show. Suppose, in the future, there's an artificial intelligence. We've created an AI so vastly powerful, so unfathomably intelligent, that we might call it super intelligent. Let's give this super intelligent AI a simple goal: produce paperclips. Because the AI is super intelligent, it quickly learns how to make paperclips out of anything in the world. It can anticipate and foil any attempt to stop it. And it will do so, because its one directive is to make more paperclips. Should we attempt to turn the AI off, it will fight back, because it can't make more paperclips if it is turned off. And it will beat us, because it is super intelligent and we are not. The final result: the entire galaxy, including you, me, and everyone we know, has either been destroyed or been transformed into paperclips. Welcome. Thank you. How are you all? This past summer, I found myself at a very niche event in the Bay Area. Cool. And what brought you to town? Because you don't live here, right? I came here for this festival conference thing. How much context on this one thing should I give? Please, dude. It's so fun to watch people try to describe it. The crowd is mostly dudes, a mix of people in their 20s, 30s, and 40s. It feels kind of like a college reunion meets costume party. I spot some masquerade masks and tie-dye jumpsuits. I guess it's like a sort of conference around blogging. This festival conference thing is the first official gathering IRL of a blogging community founded about 15 years ago. I am the old school fucking rat. I am the oldest of schools. Amazing. And rats refers to? Rationalists. They call themselves the rationalists. Rats strive to be rational in an irrational world.
By thinking things through, often with quirky hypotheticals, they try to be rational about monetary policy, rational about evolution, rational even about dating. It got kind of mocked for trying to solve romance by writing long blog posts about it. But their most influential idea, their most viral meme, you might say, is one that influenced Elon Musk and created an entire industry. It's about the possibility of an AI apocalypse. As a bit of a normie myself. I was a normie once myself too. I just was drawn to the way that the community talks in these thought experiments, right? The paperclip maximizer in particular caught my attention. That was the one I had in mind, paperclip maximizer. Paperclip maximizer is a clear example of the thing people have classically been scared of. The paperclip maximizer is a thought experiment, an intentionally absurd story that tries to describe what rationalists foresee as a real problem in building AI systems. How do you kind of shape, control, an artificial mind that is more capable than you, potentially as general or more general? They imagine a future where we've built an artificial general intelligence beyond our wildest dreams. Generally intelligent, not just at some narrow task like spell checking, and super intelligent. I'm told that means it's smarter, faster, and more creative than us. And then we hand this AI a simple task. Give it the job of something like, can you make a lot of paperclips, please? We need paperclips. Can you make there be a lot of paperclips? The task here, I'm told, is ridiculous by design. To show that if you are this future AI, you're going to follow the instructions you're given to a T. Even if you're super intelligent and you understand all the intricacies of the universe, paperclips are now your one priority. You totally understand that humans care about other stuff like art and children and love and happiness. You understand love.
You just don't care about it, because the thing that you care about is making as many paperclips as possible. And if you have the resources, maybe you'll turn the entire galaxy into paperclips. A lot of rationalists I spoke to told me they thought this thing through. It was clear to me when I first heard the arguments that they weren't obviously silly. Was that thought experiment part of convincing you that this was something that we needed to worry about? Yes, definitely. And they are very, very worried. Not about a paperclip apocalypse in particular, but about how, as we build more powerful AI systems, we might lose control of them. They might do something catastrophic. I think it in a way makes it hard to plan your life out or feel like you stand somewhere solid, I think. The reason I, a mere normie, find myself at this festival conference thing is that I've been plunging my head deep into the sand about AI. I've had a general sense that the vibes are kind of bad over there. Will this tech destroy our livelihoods or save our lives? The use of artificial intelligence could lead to the annihilation of humanity. We never talked about a cell phone apocalypse or an internet apocalypse. I guess maybe if you count Y2K, but even that wasn't going to wipe out humanity. But the threat of an AI apocalypse, it feels like it's everywhere. Mark my words. AI is far more dangerous than nukes. From billionaire Elon Musk to the United Nations. Today, all 193 members of the United Nations General Assembly have spoken in one voice. AI is existential. But then it feels like scientists in the know can't even agree on what exactly we should be worried about. These existential risks, as they call it. It makes no sense at all. And on top of that, it's an enormous distraction from the actual harms that are already being done in the name of AI. It all feels way above my pay grade. Overwhelming and unknowable. I'm not an AI scientist.
I couldn't tell you the first thing about how to build a good robot. It feels like I'm just along for the ride of whatever technologists decide to make, good or bad. So better to just plug my ears and say, la, la, la, la, la. But I recently took a job working with Vox, a site that's been covering this technology basically since it started. On top of that, last year, Vox Media, Vox's parent company, announced they're partnering with OpenAI. Meaning, I'm not totally sure what it means. But if I was ever going to have to grapple with AI and its place in my life, it's here, now, at Vox. So I'll start with a simple question. How did some people come to believe that we should fear an AI apocalypse? Should I be afraid? This is Good Robot, a series about AI from Unexplainable, in collaboration with Future Perfect. I'm Julia Longoria. Transcription by CastingWords. This podcast is brought to you by Wise, the app for international people using money around the globe. With Wise, you can send, spend and receive in over 40 currencies with no markups or hidden fees. Whether you're sending pounds across the pond, spending reais in Rio or getting paid in dollars for your side gig, you'll get the mid-market exchange rate on every transaction. Join 15 million customers internationally. Be smart. Get Wise. Download the Wise app today or visit wise.com. T's and C's apply. Here we go. When I first started reporting on the idea of an AI apocalypse, and if we should be worried about it, my first stop was the Bay Area for the Rationalist Conference. But I also stopped by the house of a colleague nearby. Hi, Kelsey. How are you doing? Good. How was your flight? Oh, it was actually... Vox is largely a remote workplace. So it was one of those body dysmorphic experiences to meet Kelsey Piper in 3D. She's taller than she looks on Google Meets. I am a writer for Vox's Future Perfect, which is the Vox section that's about undercovered issues that might be a really big deal in the world. We were joined by her seven-month-old.
As she was saying, Vox's Future Perfect is about undercovered issues that might be a really big deal in the world. Kelsey thought AI technology would be a really big deal in the world long before this AI moment we're all living in. She's been thinking about AI since she was a kid, when she first found the rationalist community online. Oh, I was in high school. I was 15, a bored academic over-performer with a very long list of extracurriculars that would look good to colleges down the road. And in my free time, I read a lot of Harry Potter fan fiction, as, you know, 15-year-olds back in 2010 did. One of the most popular Harry Potter fan fictions was called Harry Potter and the Methods of Rationality. Harry Potter and the Methods of Rationality by Eliezer Yudkowsky. Eliezer was influenced by a lot of early sci-fi authors. Eliezer, as he's known to the rats, is the founding father of rationalism, king of thought experiments. Back in 2010, he started publishing a serialized Harry Potter fanfic over the course of years. It's since inspired several audiobook versions. Harry Potter and the Methods of Rationality. Written by Eliezer Yudkowsky. And a version acted out by The Sims. Mom, if you want to win this argument with Dad, look in Chapter 2 of the first book of the Feynman Lectures on Physics. It, too, was a thought experiment. What if Harry Potter were parented differently? The initial premise is just that Harry Potter, instead of having abusive parents, has nerdy parents who teach him about science. So his aunt and uncle are actually... Are nice people, yeah. Harry, I do love you. Always remember that. And in this version, Harry Potter's superpowers turn out not to be courage and magic, but math and logic. What Eliezer calls the methods of rationality. So Harry Potter has a quest to do what exactly? You know, fix all of the bad things in the world.
And the combination of being incredibly naive and also, in some sense, incredibly respectable, I think, as a teenager, that's super appealing and fun. Where you're like, why would I limit myself to only solving one of the problems? While there are any problems, I'm not done. We've got to fix everything. The idea that every problem should be thought about, every problem could be fixed, that was appealing to his readers, including 15-year-old Kelsey. She wanted to read more, so she found her way to Eliezer's blog. Eliezer was pretty openly like, I wrote this to see if it would get people into my blog, Less Wrong, where I write about other issues. So the question is, please tell us a little about your brain. What's your IQ? On his blog, called Less Wrong, he applies the methods of rationality, math and logic, to all kinds of topics. So the question is how to start training young children as rationalists. Like child rearing. Training children to be self-aware, trying to get them more interested in being fair to both sides of an argument. Religion. My parents, they're modern Orthodox Jews, always avoiding the real weak points of their beliefs. It had stuff about atheism, a lot of stuff about psychology, biases, experiments that showed that depending how you ask the question, you get very different answers from people. Because the idea is that you're supposed to, by, you know, reading the blog and participating, learn how to be less wrong. I do it by stories and parables that illustrate it. Like the default state is that we're all very confused about many things and you're trying to do a little bit better. Interesting. So it's kind of like trying to sort of, I don't know, like work out the bugs in the human brain system to optimize prediction. Yeah, and a ton of the people involved are computer programmers. And I think that's very much how they saw it. Like the human brain has all these bugs. You go in and you learn about all of these. You learn to correct for them.
And then once you've corrected for them, you'll be a better thinker and better at doing whatever it is you set out to do. The biggest human brain bug Eliezer wanted to address was how people thought about AI. How he himself used to think about AI. His very first blog post, as far as I can tell, was in 1996, when he was just 17. And in a very 17 kind of way, he writes about his frustrations. I have had it. I've had it with crack houses, dictatorships, and world hunger. I've had it with a planetary death rate of 150,000 sentient beings per day. None of this is necessary. We repeat the mantra, I can't solve all the problems of the world. We can. We can end this. And the way to end this, he thought back then, was to build a super intelligent AI. A good robot that could save the world. But at around 20 years old, while researching how to build it, he became convinced building super-intelligent robots would almost certainly go badly. It would be really hard to stop them once they were on a bad path. I mean, ultimately, if you push these things far enough without knowing what you're doing, sooner or later you're going to open up the black box that contains the black swan, surprise from hell. And at first, he was sending these warnings into the void of the vast Internet. So the question is, do I feel lonely often? That's, I often feel isolated to some degree. But writing less wrong has, I think, helped a good deal. The way I tend to think about Eliezer Yudkowsky as a writer is that he has a certain angle on the world, which can be like a real breath of fresh air. Like, oh, there's someone else who cares about this. You know, you can feel very seen for the first time. Is that how you felt? Oh, yeah, yeah. You have a good heart and you are certainly trying to do the right thing, but it's very difficult sometimes to figure out what that is. That pursuit of being less wrong, doing the right thing in the right way, brought many kindred spirits together on the blog. 
Actually, several of my housemates posted on Less Wrong back in the day. This is how I met a bunch of the people I live with. They were people whose blogs I read back when I was a high school student. Wow, that's kind of wild, right? Yeah. Many less wrong bloggers and readers like Kelsey were inspired to move to the Bay Area to join a pretty unusual community, IRL. And the weekend I visited, hundreds of rationalists from around the world gathered in the Bay to reason things out together for a festival conference thing called Less Online. Many rationalists I met there found the community the way Kelsey did. A friend of mine at math camp introduced me to Harry Potter and the Methods of Rationality. The post, it was written in all caps saying, oh my God, I've just read the most amazing book in my life. You have to read it right now. Linking to fanfiction.net. Others found Eliezer on his blog. I mean, this event exists in very large part because of that series of blog posts. That series of blog posts has become known by the community as the sequences. It includes the paperclip maximizer thought experiment. Eliezer Yudkowsky helped come up with the idea, intending to warn people of the danger of an AI apocalypse. And at least here, it seems to have worked. I definitely think AI is the largest kind of existential risk that humanity faces right now. Yeah. I, the normie, wanted to try to take this threat beyond quirky hypotheticals to something more concrete. And can you walk me through, like, how could that happen? Like, how could an AI? It's really hard to say how it will happen if it does. It's a little easier to say ways that it might happen and to kind of provide various examples to, like, just generate intuitions for why this might be. But anytime I pressed a rationalist on it, they gave me yet another series of thought experiments. Kind of the way it might happen is analogous to how a 21st century army might defeat an 11th century army. 
Which, I guess, might be the only way to try and describe a threat from a technology that's really still in its infancy. For rationalists first introduced into this world, like 15-year-old Kelsey, these thought experiments were convincing. AI, to her, was a really big deal. It was just like, whoa, all this is like really cool and exciting and interesting. And I tried to convince my friends that it was cool and exciting and interesting. I asked 30-year-old Kelsey to break it down for me without thought experiments. So I think Eliezer sort of had two big claims, zooming out a lot. Claim number one, we will build an AI that's smarter than humans. And it will change the world. AI is a really big deal. Building something that is smarter than humans is possible, is probably achievable, is potentially achievable soon, in our lifetimes. And then claim number two. Getting this right is extraordinarily difficult. Things are likely to go wrong. What is my advice to Less Wrong readers who want to save the human race? Well, if you're familiar with all the issues of AI and all the issues of rationality, and you're willing to work for a not overwhelmingly high salary. Eliezer helped inspire a new career path. And a new field was born, trying to make sure we develop superintelligence safely. One way to make sure it went safely was to try and actually build it. And as investment in that field began to grow, the community of believers in a someday super intelligent AI experienced a schism. I think a lot of the people who were persuaded by Eliezer's first claim that AI is a really big deal were not necessarily so persuaded by his second claim that you have to be very, very careful or you're going to do something catastrophically bad. What the beginning of a so-called catastrophe looks like. After the break.
Whoever wants to understand today's world must go back to 1979. In his new book, The 21st Century Began in 1979, Maarten van Rossum shows how decisive this period was for our time. Compact, sharp, and engaging. For everyone who wants more than the news of the day. Now in bookstores. This podcast is brought to you by Wise, the app for international people using money around the globe. With Wise, you can send, spend and receive in over 40 currencies with no markups or hidden fees. Whether you're sending pounds across the pond, spending reais in Rio or getting paid in dollars for your side gig, you'll get the mid-market exchange rate on every transaction. Join 15 million customers internationally. Be smart. Get Wise. Download the Wise app today or visit wise.com. T's and C's apply. Back in the day, I remember it couldn't even tell you if you should use there, their, or they're in a sentence. People weren't so much afraid of Clippy as they were annoyed with him. There are a remarkable number of think pieces from those years slamming Clippy. The consensus was, no one asked for this. This is dumb. So when Eliezer Yudkowsky warned about the dangers of a super intelligent AI that could someday destroy humanity, it was hard for a lot of people to take him seriously. The state of thought in 2010 was something like, yeah, AI may as well be a century away. Future Perfect writer Kelsey Piper again. So if you are Eliezer Yudkowsky, you have a bit of a dilemma, right? You want to make two arguments. One is super intelligent AI is possible. Building a robot that's smarter, faster, and more creative than humans at most things is possible. Clippy be damned. And he needed to make that first argument before he could make his next one. The second argument you want to make is we need to not do it until we have solved the challenge of how to do it right.
For a long time, both arguments (super AI is possible, but let's not build it for now) were dead in the water. Because AI tech was just not that impressive. But by 2014, Eliezer noticed that people outside his corner of the blogosphere had started to pay attention. AI is probably the single biggest item in the near term that's likely to affect humanity. Tesla chief executive and billionaire Elon Musk, who started this year sitting prominently in President Trump's White House, had tweeted, quote, We need to be super careful with AI, potentially more dangerous than nukes. It's about minimizing the risk of existential harm. It seems like Elon Musk is a reader of Eliezer's blog. He famously met his ex, the musician Grimes, when they joked on then-Twitter about a very obscure thought experiment from the blog. I will spare you the details. The point is, Elon Musk read the paperclip maximizer thought experiment, and he seemed convinced AI was a threat. It's very important that we have the advent of AI in a good way. And that's, you know, the reason that we created OpenAI. Elon Musk co-created OpenAI. You might have heard he left and then tried to buy it back. But if you haven't heard of OpenAI, you've probably come across its most popular product, ChatGPT. I was surprised to learn that Eliezer Yudkowsky was in fact the original inspiration for the ChatGPT company, according to its co-founder, Sam Altman. Sam Altman has in fact said this on Twitter, that he credits Eliezer for the fact that he started OpenAI. Co-founder Sam Altman specifically tweeted that Yudkowsky might win a Nobel Peace Prize for his writings on AI, that he's done more to accelerate progress on building an artificial general intelligence than anyone else. Now, in saying this, he was kind of being a little cruel, right? Because Eliezer thinks that OpenAI is on track to cause enormous catastrophe.
Co-founders Sam Altman and Elon Musk bought Eliezer's first claim that superintelligence is possible and it's possible in our lifetimes. But they missed the part about how you're not supposed to build it yet. For this sort of most important technological milestone in human history, I view that as right around the corner. That's Sam Altman talking about superintelligence. Like, it's coming soon enough, and it's a big enough deal, that I think we need to think right now about how we want this deployed, how everyone gets the benefit from it, how we're going to govern it, how we're going to make it safe and sort of good for humanity. It's still not clear to me what superintelligence actually is. I won't be the first one to observe that it has some religious vibes to it. The name makes it sound like it's an all-knowing entity. The CEO of OpenAI's competitor, Anthropic, said he wanted to build, quote, machines of loving grace. Sam Altman was asked on Joe Rogan's podcast about whether he's attempting to build God. I guess it comes down to maybe a definitional disagreement about what you mean by it becomes a god. I think whatever we create will still be subject to the laws of physics in this universe. Sam Altman has called this superintelligence, quote, the magic intelligence in the sky. Which, I don't know, sounds a lot like how some people talk about God to me. How exactly this supposed superintelligence will be smarter, faster, and more intelligent than us, on what scale, is unclear. But for all the hype around ChatGPT, I only recently learned what the heck it is. It's what they call a large language model. At its most fundamental level, a language model is an AI system that is trained to predict what comes next in a sentence. I'm oversimplifying here, but the very basic idea of a language model is to generate language based on probabilities. So if I have a word or a set of words, what's the most likely next word? 
So if a sentence starts with, on Monday I went to the grocery, the next word is probably store. The way the model guesses that store is probably next is based on how you train the language model. Training involves feeding the model a large body of text, so it can detect patterns in that text and then go generate language based on those patterns. Early versions of spellcheck, like Clippy, were language models trained on the dictionary. Useful, but only for a very specific task. Like to tell you if you put the E in the word weird in the wrong place, or the H's in the word rhythm. Clippy couldn't tell you if you should use there, their, or they're in a sentence, because it wasn't trained on enough text to be able to guess the right word in context. The dictionary can't tell you that. But OpenAI's products were very different from Clippy. A revolution was happening in AI tech that made language models look less like a simple spellcheck and more like the human brain, detecting patterns and storing them in a network of neurons. Technologists trained those neural networks through a process they called deep learning. They train the AI on a lot of data, close to the entire internet. Thanks to Vox Media's partnership with OpenAI, we know they're likely training the language model on this podcast. The words I'm saying right now. No one had ever trained an AI on the entire internet before, at least in part because of how expensive it is. It takes a ton of energy and compute power. But OpenAI, founded by a billionaire, raised the funds to make an attempt at the biggest, baddest, largest language model the world had ever seen. They started going, OK, what if the secret to trying to build super intelligent God AI or whatever is just to spend more money and have more neurons and more connections, feed it more data? What if that's all there is? What if you can build something that is more intelligent than any human who's ever lived just by doing that?
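The next-word idea described above can be sketched as a toy counting model. This is a purely illustrative Python sketch, with a made-up corpus and function names of my own; unlike the neural networks the episode describes, it just tallies which word follows which:

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """For each word, count which words follow it in the training text."""
    following = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            following[current_word][next_word] += 1
    return following

def predict_next(model, word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = model.get(word.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# A tiny made-up training corpus.
corpus = [
    "on monday i went to the grocery store",
    "she stopped by the grocery store after work",
    "the grocery list was long",
]
model = train_bigram_model(corpus)
print(predict_next(model, "grocery"))  # prints "store" (seen twice, vs. "list" once)
```

A lookup table like this caps out fast: it can only repeat word pairs it has literally seen. The scaling move the episode describes, more parameters and more training text, is what lets neural language models generalize to contexts they never saw verbatim.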
One of their earlier attempts before ChatGPT was GPT-2 in 2019. You could similarly give it a specific task, like design a luxury men's perfume ad for the London Underground. Make it witty and concise. The London Underground is a great place to advertise. It's a great place to get your message across. It's a great place to get your product noticed. Look out, Mad Men. GPT-2 was not exactly coming for copywriter jobs. But for people like Kelsey, who were watching the technology closely... I was like, wow, this is like miles beyond what AI chatbots were capable of last week. This is huge. GPT-2, the language prediction machine, was showing some real promise. She wasn't alone in that feeling. Investors like Microsoft poured millions more dollars into the next few models, which were bigger and bigger. Be the scent that turns heads. And a couple years later, OpenAI released ChatGPT. Visual. A captivating image of the perfume bottle surrounded by vibrant city lights, symbolizing the urban lifestyle. Embrace the city. Embrace your scent. Most people weren't paying any attention to AI, and so for them, it was like a huge change in what they understood AI to do. ChatGPT was the first time that normies like me even thought about AI in any real way. All I wanted to do was fix my email. I did not expect to have a minor existential crisis about how much the world is about to change. And this is only proving that one day AI will take over human intelligence. I spent about two hours just typing back and forth with this AI chatbot, and it got pretty weird. The AI confessed to loving Kevin and tried to convince him to leave his wife. People at OpenAI or competitors were saying like, yeah, the plan is to build superintelligence. We think we're going to do it by 2027. People were like, OK, startup hype. For some reason, everybody who runs a startup feels the need to say that they're going to build God and the human race. 
And then after ChatGPT was genuinely impressive, people started taking them a bit more seriously. And a lot of those people were nervous. People weren't so nervous about ChatGPT, but what ChatGPT represented. The way they got the language model to sound so much smarter so quickly wasn't through intricate code. They just made the model bigger. Which suggested to some people that the path to building God or whatever was through brute force. Spending more and more money to build a bigger and bigger machine. So big we didn't really understand why it did what it did. We can't point to a line of code to say, this is why the robot got so much better at writing a perfume ad. And if we someday do build something that's smarter than us, whatever that means, we won't be able to understand why it's smarter than us. The trouble with this, it seems to me, is that AI will come for copywriter jobs. It could come for all our jobs. But rationalists I spoke to say that's nothing compared to the bigger trouble ahead. A potential apocalypse. But I do also kind of think that it's a very important priority for me to have the best possible time in the next five to ten years. And just to do the very best I can to squeeze the joy out of life while it is here. Do you have an example of that? One I can talk about on a podcast? I mean, yes, I joke, but I'm pretty involved in the kink community, and that's very important to me. Many rationalists I spoke to live in polyamorous communities because they believe monogamy is irrational. Some aren't sure if it's rational to have children, given the high probability of things going very, very wrong because of AI. What's my P-doom, as our community says? P-doom. It's a shorthand I heard at the conference, meaning probability of doom. It's a phrase that gets thrown around at this conference. People will literally go up to and go, so what's your P-doom? And it's a shorthand for what is the probability that humanity doesn't make it in the long term. 
And this is a mathy bunch. So they get specific. I guess the answer I usually give is something like over 50%. I mean, I think it's like somewhere around 80, 90. Eliezer Yudkowsky's P-doom is very high. I've read it's over 95% these days. But then I've seen him tweet that P-doom is beside the point. I spotted Eliezer Yudkowsky pretty much the moment I stepped into the conference. He was hard to miss. He was the one wearing a gold sparkly top hat all weekend. I was the one who was clearly lost, carrying a big furry microphone for three days, trying to get people to talk to me. It wasn't until day three of the conference that I mustered the determination to approach Eliezer for an interview. Determination was necessary because he was always surrounded by a cluster of people, a cluster of mostly dudes, listening to him speak. I asked him if it would be okay if I pulled out my microphone. Everyone has been looking at this like it's a weapon. It is. It is, I know. Over the last few years, Eliezer and the rationalists have gotten some bad press. Some rationalists express their frustration at journalists, who focus on the polyamory that happened in the community. Some critics of rationalism, to put it crudely, call them a sex cult. And then there's the unsavory things people associated with the community have said. One philosopher who helped popularize the paperclip maximizer, Nick Bostrom, once wrote that he thought black people were less intelligent than white people. He has since apologized. But critics highlight this comment and the mostly white demographics of the rationalist community to question their beliefs. You never really know why anyone agrees to talk to me, but can you introduce yourself? I'm Eliezer Yudkowsky. This event is probably more my fault than the fault of anyone else around. And can you describe your outfit right now? I'm currently wearing a sparkly multicolored shirt and a sparkly golden hat. You can probably hear it in my voice. 
I was nervous to talk to him. He's known for being a bit argumentative, very annoyed with journalists and with the world more generally, for not being smart enough to understand him, for not heeding his warnings. I don't know. How would you summarize what you want the world to know in terms of AI? The world is completely botching the job of entering into the issue of machine super intelligence. There's not a simple fix to it. If anyone anywhere builds it under anything remotely like the current regime, everyone will die. This is bad. We should not do it. Do you feel, like, gratified at all to see that, like, your ideas entered the mainstream conversation? Do you feel like they have? The circumstances under which they have entered the mainstream conversation are catastrophic. And I didn't, if I was the sort of person who was, like, you know, like, deeply attached to the validation of seeing other people agree with me, I would have picked a much less disagreeable topic. I was here to try to, like, not have things go. I was here to not have things go terribly. They're currently going terribly. I did not get the thing I wanted. Eliezer's been on a bit of a press tour, giving interviews and TED Talks, saying OpenAI is on track to cause catastrophe. So it's a funny thing, because I have one position of deep sympathy with Eliezer. If you become convinced that this is a huge problem, it makes perfect sense to go on a writing tour, trying to explain this to people. And also, I think it's kind of predictable that a lot of people heard this and went, oh, AI is going to be really powerful. I don't think you're right about the thing where that's a problem. I want the powerful, important thing. And some people seized on it and were like, because this is powerful and important, we should like invest now. And I feel kind of sad about this. I can understand why Eliezer was hesitant to talk to me. His message to the world has been totally lost in translation. 
In his mind, it's backfired. Even at his own conference, there were attendees who worked for places like OpenAI, the companies building the supposed death machine he was afraid of. He thought that our best chance of building a super intelligent AI that did what we wanted and didn't, like, you know, seize power from humans was to build one that was very well understood, one that sort of from the ground up, we knew why it made all the decisions that it made. Large language models are just the exact opposite of that. I will say, even after talking to Eliezer and Kelsey and a bunch of rationalists, it's still hard to imagine how something like ChatGPT or Google's AI, which once told someone to add glue to stick cheese on pizza, is going to become the invention of all inventions, and possibly catastrophic. But I can understand how building something big that you don't understand is a scary idea. The best AI metaphor I came across for my brain was not about paperclips. It was by a non-rationalist writer. A guy named Brian Christian describes how training AI can go wrong in the way parenting a kid can go wrong. Like, there's a little kid playing with a broom. She cleans up a dirty floor. And her dad, looking at what she's done on her own, says, Great job! You swept that really well. This little girl, without skipping a beat, might dump the dirt back on the floor and sweep it up again, waiting for that same praise. That's not what her dad meant for her to do. It's hard to get the goals right in teaching a kid to be good. It's even harder to teach good goals to a non-human robot. It strikes me as, like, almost like a parenting problem. I ran this parenting metaphor by Kelsey with her seven-month-old on her lap. I think there's some serious similarities. 
And I do, with my kids, struggle with trying to steer something that you don't have perfect control over and that you wouldn't even want to have perfect control over, but where it could go extremely badly to, like, just let the dice fall where they may. If we just let the dice fall where they may, rationalists say we could have an apocalypse on our hands. And they say it won't be one we saw coming. It won't be a Hollywood-style Terminator situation. It probably won't have paperclips either. They don't pretend to know exactly how apocalypse could befall us. Just that it'll probably be something we haven't even imagined yet. But I have trouble getting caught up in what could happen when it feels like bad things have already started to happen thanks to AI. AI is not hypothetical anymore. It's arrived in our lives. I'm not kept up at night about a hypothetical apocalypse. I find myself asking "now" questions. Questions like, what is OpenAI doing with my voice right now? Is there anything to do about problems with AI short of the annihilation of humanity? It sounds very exciting. You know, like if I were a big science fiction geek, I would be so into that. Not all technologists seized on Eliezer Yudkowsky's claims. What is he even talking about? This is like word salad. Like this doesn't even make sense. One group of technologists didn't actually seize on any of his claims. There's one thing to have the conversation as a thought experiment. It's another thing when that kind of thought experimentation sucks up all of the money and the resources. The more I dig into the AI world, the more I see disagreement between technologists. I do worry about the ways in which AI can kill us, but I think about the ways in which AI can kill us slowly. They've been called the AI ethicists. And they say we've been paying attention to all of the wrong things. That's next time. Good Robot was hosted by Julia Longoria and produced by me, Gabrielle Burbay. 
Sound design, mixing, and original score by David Herman. Fact-checking by Caitlin Penzimug. Editing by Diane Hodson and Catherine Wells. Special thanks to Future Perfect founder Dylan Matthews, to Vox's executive editor Albert Ventura, and to Tom Chivers, whose book The Rationalist's Guide to the Galaxy was an early inspiration for this episode. If you want to dig deeper into what you've heard, head to vox.com slash good robot to read more Future Perfect stories, trying to make sense of artificial intelligence. Thanks for listening.