The Last Invention

EP 8: The Accelerationists

53 min
Nov 20, 2025
Summary

This episode explores the 'accelerationist' movement in AI, featuring interviews with Reid Hoffman and Beff Jezos, who argue for rapid AI development despite the risks. The episode contrasts accelerationist views with AI safety concerns, examining arguments that slowing AI progress poses greater risks than accelerating it.

Insights
  • Accelerationists view AI development as a portfolio risk management problem where AI reduces overall existential risk despite adding new risks
  • The movement frames AI safety concerns as 'safetyism' that stifles innovation and economic growth across multiple sectors
  • Effective accelerationism emerged as a counter-movement to effective altruism, using 'memetic warfare' to combat pessimism about technology
  • Key investors and technologists believe competitive dynamics make coordinated AI slowdowns impossible and potentially counterproductive
  • The accelerationist vision includes massive economic transformation comparable to or exceeding the Industrial Revolution
Trends
  • Rise of effective accelerationism as an organized counter-movement to AI safety advocacy
  • Increasing polarization between AI acceleration and AI safety camps in Silicon Valley
  • Growing influence of accelerationist thinking on political leadership and policy
  • Shift from university-led to corporate-led AI development and safety research
  • Emergence of 'memetic warfare' as a strategy for shaping public opinion on technology
  • Portfolio approach to existential risk management gaining traction among investors
  • Regulatory capture concerns around AI safety as a potential cottage industry
  • Geopolitical competition driving AI acceleration regardless of safety concerns
Quotes
"Pretty clear it's either, you know, adapt or die. As we say, accelerate or die, and those are our choices."
Beff Jezos
"This is perhaps the most important moment in human history, maybe past the invention of fire."
Reid Hoffman
"I was seeing a sort of pervasive and somewhat contagious pessimism throughout society... They would sweep in and spread sort of negativity."
Beff Jezos
"The only way to stop a bad guy with an AGI is, is a good guy with an AGI."
Unknown
"If you believe you can change things, you will actually change things. I'm an example of this."
Beff Jezos
Full Transcript
5 Speakers
Speaker A

Hi, listeners, this is Matt, co-founder here at Longview. And before we get to the show, I just wanted to share something you may already know, but may not fully appreciate. Our whole project only exists because of subscribers. Advertising, of course, helps, but it isn't enough. Because the reporting that defines this work, the kind that takes time and patience and independence, is funded by people who choose to subscribe. Your support is what allows us to investigate the stories that we're working on right now. Reporting on why the economy feels broken and incomprehensible to many of us, reporting on health and longevity, reporting on the resurgence of anarchy on the left and what the far right actually means in this political moment. And, of course, reporting on AI. It's easy to underestimate the value of your small monthly subscription, but taken together, your subscriptions are the foundation of everything we do. So if you're already a subscriber, thank you. You are helping make this work real. If you'd like to become one of the people who sustain this project, you can visit us at longviewinvestigations.com. And thanks again. Okay, onto the show.

0:02

Speaker B

Test one, test two. Okay. We are at the AI for Good conference in Geneva, Switzerland, and it is packed.

1:17

Speaker C

This is the last invention. I'm Gregory Warner.

1:29

Speaker B

AI is here, and you could argue that we're in a new reality where we need to recognize not just the pace, but the permanence of change.

1:32

Speaker C

Back in the summer of 2025, Andy Mills and I went to this conference hosted by the United Nations.

1:46

Speaker B

We are the AI generation, called AI for Good. And simply put, the goal of AI for Good is to help solve the world's most pressing challenges using AI.

1:52

Speaker C

Where, broadly speaking, the sentiments around the future with AI were pretty bright.

2:04

Speaker B

And when you think about artificial intelligence, how does it make you feel? I'm excited about it. I'm very excited about it. Yeah, I feel very happy to be part of it.

2:09

Speaker C

And the theme of this gathering of world leaders and technologists was unlocking AI's potential to serve humanity.

2:19

Speaker B

We are the generation that is determined, ladies and gentlemen, determined to shape AI for good. Take a tour. Take a tour here.

2:27

Speaker D

All right.

2:43

Speaker C

Take me on a tour, Andy.

2:43

Speaker B

So in some ways, this room is like the heavy hitters. This is a lot of adults in this room.

2:44

Speaker C

So for three days, there were all sorts of people. Humanitarians, academics.

2:49

Speaker B

A lot of big technology companies are here. Microsoft, Amazon, representatives from big tech labs like Meta, Huawei, this company Google.

2:54

Speaker D

Have you heard of them? Big.

3:04

Speaker B

Pretty big company.

3:05

Speaker C

As well as dignitaries and representatives from all these different nations around the globe, they got together to showcase how AI might be harnessed to solve climate change, poverty, and many of the world's most pressing problems. For example, Harvard University geneticist David Sinclair was there and he gave this presentation.

3:06

Speaker B

So what I'm going to talk to.

3:26

Speaker E

You today about is using AI to.

3:27

Speaker B

Tackle the major cause of illness on the planet. And of course, I'm talking about the.

3:30

Speaker C

Aging process, about how his team is using AI in research to create a medication that could radically extend human life.

3:34

Speaker B

And so this might be the future of medicine. A very simple pill that you can take that resets your age.

3:43

Speaker D

You take it, and every 10 years.

3:50

Speaker B

Your doctor gives you a new pill to take for a couple of months.

3:52

Speaker C

His team envisions a near future where humans may regularly live to be 120 or even 150 years old.

3:55

Speaker B

That's what we're aiming for. I can't promise we're going to get.

4:03

Speaker D

There in the next few years, but.

4:06

Speaker B

I can tell you that I can see how this future is going to happen, very likely within our generation and certainly within our children's generations.

4:08

Speaker C

This conference also had this convention hall where people could interact with some of the latest AI powered robots.

4:19

Speaker B

This is like the robot section.

4:25

Speaker E

The rapid advancement of AI is truly incredible. It's like watching a fascinating story unfold.

4:27

Speaker B

This is an insane.

4:35

Speaker C

There were robot tutors, robot teachers. All right, come on, come on, come on, little guy. Robots to assist the elderly and disabled. I'm being followed by R2D2 right now. Is this like the future, basically?

4:36

Speaker D

I hope so.

4:50

Speaker C

Robots that could perform complicated surgeries. There were also specialists showing off the ways that AI might enable pretty sci-fi-like human enhancements.

4:51

Speaker B

So the first headset is what you.

5:04

Speaker D

Describe as a crown.

5:07

Speaker C

This French neuroscientist there named Olivier Oullier showed off this kind of AI crown.

5:08

Speaker B

I'm putting it on your head at.

5:14

Speaker C

The moment that he actually placed on Andy's head.

5:17

Speaker B

And before I activate, I have to ask you, do you allow me to monitor your brainwaves? I allow you to monitor my brain waves. I trust you.

5:20

Speaker D

Thank you.

5:27

Speaker C

And he explained that with this crown just sitting on your head, a person could move objects around on a computer screen using only their mind.

5:29

Speaker B

So with this device on my head, you're saying that I can move a mouse on my computer or I could type out words just by thinking about it?

5:37

Speaker D

Yes. So the first thing that you're seeing.

5:45

Speaker B

Is your brain waves here. These are my brain waves, yeah.

5:49

Speaker C

Later, during his presentation to the whole conference, Olivier brought on stage a friend of his who has been paralyzed from the neck down for more than a decade.

5:52

Speaker B

My dear friend Rodrigo Mendes, the very first person ever to mind control.

6:03

Speaker C

And they showed how, wearing this headset, this guy was able to drive a car. And not just any car, a Formula one race car, simply using his mind.

6:08

Speaker D

Well, I was sitting in this race.

6:19

Speaker B

Car, and thanks to AI, my brainwaves.

6:21

Speaker D

Were converted into commands to the car.

6:26

Speaker B

So I could accelerate, turn right, and turn left.

6:28

Speaker C

And then there were those who were trying to use AI on a much more global scale. For example, we met this Dutch roboticist, Guido de Croon.

6:45

Speaker D

I want to develop safe robots and physically safe robots.

6:53

Speaker C

And that's why I focus on really.

6:58

Speaker B

Small, lightweight and soft drones.

7:00

Speaker C

And he and his team are building these drones that are the size and shape of insects.

7:03

Speaker D

One of the applications is in precision agriculture.

7:09

Speaker B

And the goal of precision agriculture is to grow crops.

7:12

Speaker C

And what they're working on is a way to solve world hunger, because they imagine that greenhouses and farms across the world will one day be buzzing with these robotic insects that can scan each and every plant and then enable growers to produce way more food using far fewer resources.

7:17

Speaker B

So single tomato plants, you know, how.

7:36

Speaker C

it's doing, how much water it needs.

7:37

Speaker B

How many nutrients it needs, you know, whether there's a disease or.

7:40

Speaker C

Plague, so that you can act quickly.

7:44

Speaker B

And prevent it from spreading.

7:46

Speaker C

Gather data about each plant and come back and have a specific plan for each plant, like how much water to give that particular plant.

7:48

Speaker D

Exactly.

7:56

Speaker C

And eventually he pulls out one of these insect drones to show us.

7:57

Speaker B

Yeah, I'll actually. I'll switch it on, like, if you.

8:00

Speaker D

Touch it with your hand.

8:03

Speaker B

So if you want.

8:05

Speaker C

Oh, my God. And it flew around the room like a little moth or butterfly on these soft, gentle wings.

8:06

Speaker B

It flies by flapping its wings. So it's very different from the kind of drone that people are used to with the propellers.

8:16

Speaker C

And it was amazing. It could actually land on my open palm in this way that felt completely lifelike. Dude, this really feels alive.

8:22

Speaker B

It's weird.

8:34

Speaker C

I'm loving this guy.

8:35

Speaker D

To me, it's really Zen like.

8:36

Speaker C

And de Croon, he sees a day not far in the future where the sight of millions and maybe billions of these things flying around farms will become just as common as the sight of a tractor. And where this technology helps produce enough food to truly feed the world. Don't fly into the flame, little moth.

8:44

Speaker B

It's actually quite agile. So if you want to test it, it's actually possible.

9:07

Speaker C

For today: the case that we should accelerate the path to advanced AI. We meet the people who argue that the rewards far outweigh any risks, and that a remarkably better future may be just around the corner. These are the accelerationists. After a short break, Andy walks us through who is making this case and sits down with a few accelerationists to ask: why is it that they think the real danger is not the technology, but society's pessimism? Stay with us.

9:13

Speaker A

The Last Invention is brought to you by Ground News. In the era of social media, Ground News is a unique tool that I find myself using daily to understand how different outlets frame the same events. Their Split Headlines feature makes it simple to scan coverage from across the media landscape in one place.

10:05

Speaker B

YouTube, working right now to reinstate users banned from the platform during the COVID-19 pandemic. Here's a recent example of.

10:23

Speaker A

Split headlines about YouTube's decision to reinstate certain creators it had previously banned.

10:30

Speaker B

YouTube will soon reinstate users who were.

10:36

Speaker D

Banned for spreading misinformation about the COVID-19 pandemic and the 2020 election.

10:38

Speaker A

When CNN covered this story, it used a headline that declared YouTube to reinstate accounts banned for posting false information about COVID-19, 2020 election, while the Fox News headline framed it as Google to reinstate banned YouTube accounts censored for political speech. A lawyer for Google's parent company admitted the Biden administration exerted its executive authority.

10:44

Speaker D

Pressuring the tech company to censor people.

11:07

Speaker B

And remove certain content in a new.

11:09

Speaker A

Lining these headlines up side by side helps Ground News users see how the same incident is interpreted differently depending on the outlet, offering a broader, more nuanced sense of what's going on and a view of how others might be seeing the headlines as well. If you want to steer clear of algorithmic bubbles and cut through the media bias, give Ground News a look. Head to groundnews.com/invent to get 40% off the Vantage plan, the same one that we rely on. Check them out at groundnews.com/invent and make sure to use that link so they know that we sent you.

11:11

Speaker B

All right, so way back at the start of the series, we introduced this group of techno optimists often called the Accelerationists.

11:46

Speaker E

The world as you know it is over. It's not about to be over. It's over.

11:55

Speaker B

I believe it's going to change the.

12:00

Speaker D

World more than anything in the history of mankind.

12:01

Speaker B

More than electricity. And while all of them are united in their optimism about the AI future, I've always believed that it's going to be the most important invention that humanity will ever make. A kid born today will never be smarter than AI. They do not agree about why we should be racing ahead. For example, if you look at the leaders of all of the big AI labs in the US right now, folks like Sam Altman, Elon Musk, and Dario Amodei, we are rapidly running out of truly compelling reasons why this will not happen in the next few years. They actually agree with the AI doomers and the scouts that AGI poses a serious and even an existential risk. But I do think it's one of.

12:04

Speaker C

The existential risks that we face, and.

12:45

Speaker B

It'S potentially the most pressing one. But they've just come to believe that the best way to protect humanity is to build a safe AGI before someone else builds a dangerous one. Even if one company was willing to.

12:48

Speaker D

Slow down the technology, that doesn't stop.

13:03

Speaker B

All the other companies. That doesn't stop our geopolitical adversaries, to.

13:06

Speaker D

Whom this is an existential fight.

13:10

Speaker B

Fight, fight, fight for survival. This is the view that's essentially the only way to stop a bad guy with an AGI is a good.

13:12

Speaker C

Guy with an AGI.

13:18

Speaker B

We have to make sure that we're ahead of China and other authoritarian countries, both because I don't think they would use powerful AI very well, and because if we're not ahead, there's this racing dynamic. And yet somehow we have to also protect against the dangers of AI systems we ourselves build. Then there's this other group, also largely in Silicon Valley, who think that all of this AI danger and existential risk talk has been totally overblown, and they do not want to see the fear about AI slow down human progress. The thing that the technology industry does best is improve material quality of life. I think that we should accelerate as hard into that as we possibly can. I think the quote unquote risks around that are greatly exaggerated, if not false. Why is all the worry about the technology going badly wrong? And why are people not worried enough about it not happening? This is where you get some of the biggest investors in artificial intelligence. People like Peter Thiel.

13:19

Speaker D

Our entire civilization, our entire culture, is.

14:17

Speaker B

Predicated on accelerating technological change. And Marc Andreessen. Over time, net, everything: technology has been primarily a force for good, primarily a force for progress, and basically you embrace it and support it and accelerate it as much as you can, and then you deal with the issues as they arise. And they view the fight for accelerating the AI future as one and the same as fighting for a better human future. I think the forces against basically technological progress, they're like the environmental movement I described. They're fundamentally, at some deep level, they're sort of anti-human. You know, they want fewer people and they want a lower quality of living on Earth. And like, I just, I very much disagree with both of those. And then there's this whole other wing of accelerationists who are proudly socialist. These are incredibly powerful levers to create social transformation. For example, I talked to Professor Alex Williams, who teaches at the University of East Anglia in the UK.

14:20

Speaker C

Think about those billions of hours of.

15:13

Speaker E

Human lifetime and ingenuity and intelligence and.

15:16

Speaker C

Sweat that are being wasted on these tasks, which we probably have the technology now to automate.

15:22

Speaker B

And he was telling me to just sit back and think about the fact that our society, as it exists right now, requires millions and millions of people to work at jobs that they find no meaning in, jobs that are dangerous, jobs that are boring, working in mines, working in repetitive factory jobs.

15:30

Speaker D

And that would be a global social good if we could eliminate that work.

15:51

Speaker B

And the idea that AGI could replace those jobs, that would be a liberating force for the working class unlike anything that they've ever experienced in history. And the reason that this group wants to go fast is because they say that this isn't a hypothetical situation. Those people are in miserable situations right now. And if we can alleviate their suffering, if we can help them materially, then we should do that as soon as possible. And so to make the case today, I've spoken to two very different kinds of accelerationists with very different politics. The first is a legendary investor, an inventor, and the co-founder of LinkedIn: Reid Hoffman.

15:57

Speaker D

This is perhaps the most important moment in human history, maybe past the invention of fire.

16:47

Speaker B

Reid was one of the earliest investors and a former board member at OpenAI. He was a member of what is often called the PayPal mafia, alongside Peter Thiel and Elon Musk. And he's also someone who has become a very large donor to the Democratic Party in the last several elections. And I should add that Reid is one of the people that does not like to be labeled an accelerationist, even if in the end he also wants to bring about the AI future as quickly as possible.

16:58

Speaker D

For sure, future generations, which may not be further off than 10 to 20 years, will be looking back at this and going, this is one of those inflection moments in human history that's like a Cambrian explosion.

17:27

Speaker B

And the second is the co-founder of the AI hardware company Extropic, Guillaume Verdon.

17:42

Speaker E

Pretty clear it's either, you know, adapt or die. As we say, accelerate or die, and those are our choices.

17:48

Speaker B

Who's a guy who is better known by his wildly popular Twitter persona, Beff Jezos. He definitely embraces the identity of the accelerationists. He even co-founded this movement that he calls the Effective Accelerationists, which is kind of like a counter-movement to William MacAskill and the effective altruists.

17:54

Speaker E

And you are either in this branch that adapts and survives, or you are in the branch that resists adaptation and dies.

18:17

Speaker B

And in recent years, his tweets have gained the attention of not just prominent people in Silicon Valley, but in Washington, D.C. And some of the ideas that he has been advocating for have been embraced both by the President and by Vice President Vance. Well, let's start with your vision of the future. Yeah, and I'm not talking about, like, the distant future where, you know, superintelligence leads to us traveling the galaxies and overcoming death and all that. I mean, the nearer-term future, in a world where we keep accelerating towards better and better and more capable AI. Give me a list of what you see as the changes coming our way that we should get excited about, to counteract all of the changes that people are so worried about right now.

18:26

Speaker E

Let's work through it. What would massively increase our quality of life? Well, if we had really cheap education. Right. Let's say you had the best philosophers and scientists in the world as your personal tutor that understood your knowledge base, your limitations, your inclinations. We're basically close to being there already. Healthcare. Imagine having the best doctor everywhere on Earth. I mean, that's going to save countless lives.

19:13

Speaker D

You know, if everyone had the equivalent of a doctor in their pocket, that was free.

19:38

Speaker B

And you're talking about truly universal healthcare access here, that if you've got the Internet, you've got the best possible doctor in the world.

19:43

Speaker D

Yeah, but even such things as, you want to return the manufacturing industry to America, AI is your only hope for it.

19:52

Speaker E

There's going to be factories for robots, factories for solar cells, factories for chips, for AI supercomputers. There's going to be a ton of demand for energy. There's a lot of building and work to do.

19:59

Speaker B

So you're saying that instead of this world of rampant unemployment and people wondering where they're going to make their money, you see us headed towards more money, more jobs.

20:08

Speaker E

Yes, that is the goal. I mean, the whole point of accelerationism is economic growth and prosperity. And so I think saying that the accelerated path is going to lead to less prosperity is like literally antithetical to our core thesis. So that doesn't make sense.

20:17

Speaker D

I think AI is categorically the best hope for accelerating prosperity. It will touch everything. And this is part of the thing. When you get to the, well, why do you see such optimistic things for AI? The answer is, everything that language touches will get massively transformed and amplified. And that's very core to who we are as human beings.

20:31

Speaker E

And that is the real cost of having this sort of decelerative mindset: you miss out on huge growth opportunities. So we have a choice here. Do we reindustrialize, do we embrace AI, do we embrace AI compute in America and capture a lot of the prosperity that will come with it, or do we miss out and then our economy becomes hollow? Right. That is the choice we have.

20:59

Speaker B

So let's get into your project that you call effective accelerationism, this counter-move to the effective altruists. I want to know why you came to see this as a necessary project. What is it that you were looking at? What inspired you to insert this bold-faced, unabashed techno AI optimism into the world back when you started it?

21:20

Speaker E

Yeah, I was seeing a sort of pervasive and somewhat contagious pessimism throughout society. I mean, it was around 2021, 2022. Economy was kind of shaky, everything was very uncertain. And I was seeing sort of the pessimists and the doomers being opportunistic while the world was undergoing a moment of uncertainty, of higher anxiety. They would sweep in and spread sort of negativity. And I was seeing sort of the movie play out, where actually most of Silicon Valley was sort of part of this effective altruistic camp that kind of kept mutating and then became the sort of AI doomer camp. And that sort of mindset was the dominant mindset at all the AI labs, which I knew would accrue immense power. And broadly, I saw it as a sort of bioweapon defense project. But it's a mind bioweapon.

21:46

Speaker B

So you saw this as an act of defense, this idea that in the face of memetic despair and pessimism spreading through the Internet, spreading through the culture, you're going to flood the Internet, you're going to flood the culture with this memetic optimism.

22:47

Speaker E

Yeah, exactly.

23:02

Speaker B

We'll just stay there just for another beat. It was from you that I first learned of this concept, hyperstition, this theory that an idea, often a fictional one, can become a reality because enough people begin to believe in it, and then that belief starts to dictate our actions. And you have tied this concept in the past to the rise of safetyism in American culture. I think when people hear safetyism at this point, a lot of us think about Jonathan Haidt's work and about parenting, and this idea that the fears that parents have have begun to dictate the lives of their children and rob them of having a childhood, and that in the end that ends up harming them more in the long run than anything that might have actually happened to them. But in your case, you're saying that safetyism is actually everywhere.

23:04

Speaker E

Absolutely. Every institution, essentially.

23:56

Speaker B

Well, make that case. And what is it that you've been fighting? What do you think is at stake if we keep moving towards more and more safetyism?

23:59

Speaker E

Again, it's like hyperstition. Having beliefs about the world being in a certain state in the future amplifies its likelihood. And so I sought out to steer us towards an optimistic future by spreading sort of optimism, so that people could realize these techno optimistic futures. And clearly this is throughout everything. This is the reason, unfortunately, there's been enough of this sort of overregulation that we can't build anything in the physical world anymore. It's very hard to build housing, it's very hard to build railways, the medical field is highly overregulated. The cost of discovering drugs has scaled exponentially. The cost of housing is too high, the cost of education is too high. These are all sectors where there's been far too much regulation, and there's been this sort of adding too much process, too many regulations, giving too much top-down control, too much top-down power to those that control these institutions. And in a way, the more you are scared, the more anxious you are, then you give up more agency, give up more of your freedoms, you give up almost everything. You give up control to these centralized institutions and these committees, and you give up your power. And so again, anxiety is a state where usually you're inclined to take action to reduce uncertainty about the future. And so inducing anxiety serves those that are trying to kill variance, because it feels like appeasing. It's like, ah, if we let you be free, then there's a potentially very bad outcome. Let's focus on that very bad outcome. Never mind the upsides and your freedoms and so on. Let's focus on that very bad outcome. Give me control, give me power, and I will keep you safe and you'll feel at peace. But until then, I will keep beating this drum that you are not safe, you should be very afraid, and you should give me power. And so we had to change towards a culture of techno optimism, of encouraging variance, encouraging exploration. Let's reimagine how we do everything. Let's reimagine every institution. We have to adapt. And so the choice was pretty clear. It's either adapt or die. As we say, accelerate or die. And those are our choices. And so we chose to accelerate. And part of that was fighting sort of the pervasive doomerism that was, again, instrumental to the same old people that want to centralize control, kill variance, and accrue power. And there's always an argument to be made that there are risks. Right? There's always. But people tend to underestimate the reward, especially given that it is exponential. Right. And our brains can't quite fathom exponential rewards.

24:08

Speaker B

Reid, I know that you were one of the earliest investors in OpenAI.

27:00

Speaker D

Yep.

27:04

Speaker B

And that you were a board member there. You worked closely with their leadership, especially with Sam Altman. And recently, in some leaked emails, the public got the chance to see how some of the senior people at the company were talking about AGI all the way back in 2017, 2018. And I think a lot of us found it pretty dramatic. There was language in there about how if this technology were to fall into the wrong hands, it could lead to an AI dictator, about how superintelligence was going to reshape the world, reshape the balance of power, essentially, about how the thing that the company was making, the company understood it as having this unimaginable amount of potential, but also an unimaginable amount of power. And I just want to know from you, is that an accurate reflection of how you and other people in Silicon Valley talk about AGI, think about AGI, talk about the future, and is that weird?

27:05

Speaker D

Well, the first short answer is yes. I think there's a small. It's not Silicon Valley as a whole, but I think there's a group of Silicon Valley technologists who think this largely and broadly. And a few places, DeepMind, OpenAI and others, aggregate those people. So the short answer is yes, we were talking in that arena, and it kind of surprises people, because you say, well, right now we have intelligence that's kind of the equivalent of a bumblebee. And it's like, and now you're talking about superintelligence. Like, how do you get there? And the answer is, well, because we understand things like possible compounding curves. We understand things about, like, well, what happens when you move from, you know, one GPU to 500,000 GPUs, if you can use them in an appropriate way. And to most other people it seems like, okay, that's nutty. And by the way, just to go back in history, there were similar discussions around the Manhattan Project, and I don't mean to indicate that there's a parallel in terms of danger or anything else, but these technologies had these massive impacts, and like, that was another one. And frankly, if you go back to the printing press, a lot of the discussion around the printing press was very similar to the kind of discussion we have around AI amongst the elites. So the 'oh my gosh, this is a moment in history' is not like, oh, it's only these kind of crazy technologists, but it is a rare and small number of people kind of looking at the future, talking about possibilities and probabilities, that engage in these conversations. And of course, like, I have enough self-reflection to know this is amazingly rare. Right?

28:13

Speaker B

Yeah.

29:53

Speaker D

And you know, is something to be handled with intelligence and absolute focus in terms of how you navigate. And I think that was true for, you know, what is now not only the OpenAI folks, but also the Anthropic folks and the Microsoft folks and the DeepMind folks, you know, all kind of looking at this.

29:54

Speaker B

All right, so help me square that with something else I've heard you say. I believe you said this to Tyler Cowen on his podcast: that people should be less worried about the robots coming for our jobs and more worried that AI advancements aren't coming fast enough. Which, by the way, great copy. But obviously people are worried. They are worried that whole industries in our economy might go belly up. They are worried that AGI might lead to an economic collapse. Some people go much further, including some people in the AI field, and say that they're worried that AGI poses the greatest existential risk that humanity has ever faced. You've got the so-called AI doomers who think that we need to stop making AGI. You've got this other group that I call the AI scouts who think that this threat is so big that we all need to come together to ensure that the AGI that we're creating doesn't destroy humanity. So help me understand, what is it you mean when you say that we need to be more worried about it not coming fast enough?

30:13

Speaker D

Well, let me break it into three issues, and I'll start with the biggest risk, which is so-called existential risk, humanity-level risk, et cetera. Then go to the society and jobs, and then go to looking at how to make human progress. Now, on existential risk, the reason why I think the analytic frame is wrong is they look at AI only as a potential add to a negative risk. And you go, okay, well, it is an add to a negative risk. So then you go, oh, my God, you stop. It's like, well, actually, in fact, existential risk for humanity is a portfolio. We don't just have AI, we have nuclear war, we have pandemic, we have asteroids, we have climate change. We have a whole stack of things that could actually, in fact, pose this existential risk. And so when you think about a particular intervention, you don't just think, does this one thing add a negative; it's what does it do to the whole portfolio? Even when you look at superintelligence, I go, okay, yes, there is an added negative risk. But I can also see ways that a superintelligence could add very positively on all of these other risks. Like, yes, there is a scenario in which somehow natural law, physics, or somehow us fucking it up, we fuck it up, right? But there's also a lot of scenarios where it improves the longevity and the survival characteristics of humanity by dealing with these other issues. And so it's kind of like, I look at the portfolio of existential risk, and when I do my analyses, I look at this and I go, okay, I think this is going to be, on balance, even as it's going, a much higher likelihood or probability of decreasing the overall existential risk portfolio.

31:25
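To make the portfolio framing concrete, here is a toy calculation, ours rather than Reid's, with entirely invented numbers: treat each catastrophe as an independent per-century probability, compute the chance that at least one occurs, and compare a world without AI to one where AI adds its own new risk but mitigates the others.

# Toy sketch of the "portfolio of existential risk" framing.
# All probabilities are invented for illustration; none come from
# the episode or from any published risk estimate.

def any_risk(probs):
    """P(at least one catastrophe), assuming the risks are independent."""
    p_none = 1.0
    for p in probs:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Hypothetical per-century risks without AI:
# nuclear war, pandemic, climate, asteroid.
baseline = [0.05, 0.04, 0.03, 0.01]

# With AI: AI itself adds a new 2% risk, but (on this optimistic
# assumption) halves each of the other risks by helping mitigate them.
with_ai = [p / 2 for p in baseline] + [0.02]

print(f"Total risk without AI: {any_risk(baseline):.3f}")  # ~0.124
print(f"Total risk with AI:    {any_risk(with_ai):.3f}")   # ~0.082

The sketch only shows the structure of the argument: an intervention can add a risk and still lower the total. Whether the numbers actually point this way is exactly what the accelerationist and safety camps dispute.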

Speaker B

So it's going to decrease the overall existential risk, even if it itself, to some degree or another, will likely pose.

33:07

Speaker D

An existential risk that adds in a new existential risk.

33:16

Speaker E

Right.

33:20

Speaker D

That's how I look at the existential risk thing. That'll be the first part. I want to do all three answers. Second part is, I refer to AI as a cognitive industrial revolution. The transformation is equivalent to or more than the Industrial Revolution. The Industrial Revolution, you might say, gave us physical superpowers and gave us manufacturing and transport and a bunch of other stuff. And this is mental superpowers. And the mental superpowers may even be more important and more transformational in various ways. And so the upside is as important. But by the way, the transition is very difficult. I don't think there's any way that we as human beings and societies go through these transitions without a lot of difficulty and turmoil and chaos. But the upsides are possibilities. Like, for example, I co-founded Manas AI with Siddhartha Mukherjee to try to do drug cures for cancer and other kinds of things. The upsides are just spectacular. Now, this gets to the last question, which is kind of part of the reason I broke it into thirds: part of what I think you have to be is kind of like a student of human beings, the progress of technology, of societies, of industries. How do we operate as human beings? And the answer is, we divide into various groups. But what that means is, the intellectual fascination usually tends to be like, oh no, we get in a room, we all talk and we all agree.

33:20

Speaker E

Right.

34:44

Speaker D

And I have never seen human beings do that about anything significant. And by the way, you'd think we could do it for climate change, where you could see more simple physics. And we don't do it there either. So the theory that we're simply going to go sit in a room and talk and all agree, for example, of an existential risk or of a job transformation risk, and then since we all agree, we're all going to massively slow down and stop doing things that's never happened in human history for things that are even much more visible. So that, as a pipe dream for the conversation, doesn't actually in fact yield the results you hope for.

34:46

Speaker B

So you're saying that you can wish that human nature was otherwise, but the truth is that we're just going to have to admit that we are competitive and we're disagreeable and this is how we work, and therefore we're not going to get together and solve all of the problems that AGI might pose. Is that about it?

35:25

Speaker D

Yeah. And you can't just pre think it by sitting there and going, I imagine how all this is going to work because you actually don't know what the landscape is. And like, for example, if you said before, I'm going to deploy a car, I'm going to think of every possible thing to go wrong and prevent everything, and you're going to end up with a thing that goes at five miles an hour, has six feet of armored plating, you know, et cetera, et cetera, and it's just not gonna work. And that will never work. And then someone else is gonna go build other things that are gonna be there. So you have to be iterating, you know, kind of iterate the deployment and be thinking about it and navigating as you're kind of going, I'd love to.

35:43

Speaker B

Ask you what you make of the camp that I call the AI scouts. The people who agree with you that the AGI future would be awesome, would be ideal. The people who also want all of the great stuff that you want to come our way, but who say that to get that without destroying humanity, we're first going to have to really solve a lot of problems, that we're going to need to invest in AI safety research and alignment research and control research, because this is a really serious problem. Why aren't you with them? Why not take the acceleration part seriously, but also the getting prepared part too?

36:19

Speaker E

Yeah, look, I'm happy there's people that are doing AI alignment research, but I don't want there to be a sort of cottage industry. If you say every corporation needs an AI safety department and you're an AI safetyist, you're kind of arguing for your own paycheck. And it's not clear to me the utility of these folks yet. And it's not clear to me that the research has gone far enough and whether it is of extremely high utility. I just don't agree that we should plow all our capital towards this research. And again, you're trying to steer a moving target. And so again it's just like, I just don't think it's an effort that's going to yield outstanding results in terms of our ability to control AI. I think that the real driving forces are sort of the market forces and geopolitical forces.

36:59

Speaker B

So just so I'm clear about what you're saying, you don't think their project is going to work, that trying to do something like ensure we have total control over AI is, what, useless? A waste of time?

37:47

Speaker E

It's a subtle argument. I think you will have some control knobs, but just not as many as people want, essentially. And that's just an argument about complex systems. And it's very difficult to have control over really self-adaptive complex systems. It's just like the economy, right? We don't control the economy. What we control is, for example, the interest rate, the discount factor over time. And with that knob you can actually do some stuff. And we might have that for rewarding certain behaviors with AIs, and that might do something. And that's great that we do research there, but I just don't think we'll have control forever over these systems. And I do think that markets and capitalism are far superior technology for aligning incentives than sort of top-down prescriptions. Of course you need a bit of both, you need some legislation, you need some market incentives. But because it's such a moving target, I think right now I don't think most of the capital should go towards AI safety research. Some of it should go to AI safety, but I think there's already a good amount going into it.

38:00

Speaker B

To go back again to your time at OpenAI: did you ever feel, as I've read Peter Thiel said, that the company had an effective altruist problem? That especially before Dario Amodei left, there were some people there who were too afraid of the risks of AI, that there was too much fear, and that that fear at times would get in the way of the company's broader mission?

39:06

Speaker D

Yes, in a couple of different ways. I do think it was moderately too much fear by some individuals. And by the way, you still have that element. Like you have, you know, Geoffrey Hinton, winner of the Nobel Prize, going around saying there's a 60% chance that AI will extinguish humanity, which I think is, frankly, completely incorrect. And that, by the way, induces panic and concern, and causes you, as always with panic, to navigate in crazy ways. And I do think that part of what's happened is people have gotten more of a sense of, oh, actually, in fact, we have more controls over this. We can steer in some ways, and there's a lot of really good upsides. So I think it's still there as a 'we must get this right,' and is not an effort to kind of say, hey, everyone, I'm so important because I'm working on something on the humanity level, and more, I'm really trying to navigate this the right way.

39:31

Speaker B

Well, because you just mentioned Geoffrey Hinton, I've had the chance to speak with both him and Yoshua Bengio, and they were not subtle with their opinions or their concerns. Bengio went as far as to say that he has regrets about the work that he dedicated his whole life to and that he's worried about the future of his children and his grandchildren. It was a very intense thing for him to say, a very dramatic statement. And I want to ask you, when you hear scientists like them saying stuff like that, ringing this alarm and trying to get the public to come up with ways that we can ensure our survival. What do you make of that?

40:24

Speaker D

So, look, I always respect super smart people with criticisms. When the Manhattan Project was created, some of the people who created it thought that when you exploded an atom bomb, it would break the Earth's crust and turn us into a molten globe. So scientists are not always correct about things. And I think that part of what Yoshua and Geoffrey think is that this should be done through university science. And part of, atmospherically, what makes them nervous is it's being done through corporations. And so it's like, well, they can't be thinking about safety the right way. They can't be serious. They have a profit motive, et cetera, et cetera. But by the way, the only way this kind of scale of compute gets created is by companies, not by kind of universities. Now, that being said, I think that their theory of how human beings and human ecosystems work is incorrect. And I think it's: people get together in groups, they build stuff, they compete with other groups; there's even competition, of course, within academia. So I would go back to the question of: it is a very good thing to be doing safety and alignment stuff. The work is to add in to the current progress on these things, how to make that happen within the framework of where we're going. The theory of going around and saying we should just tell everyone to stop is a theory that doesn't work, and as a matter of fact is in a sense sadly destructive, because if anything it just creates conflict around it in the groups that care about it. Like, you could almost run this kind of two-by-two matrix: people care about humanity, people don't care about humanity; people will listen to your message and slow down, people won't listen to your message and won't slow down. By going around and saying, bruh, you should all slow down, only the people who are aligned with your point of view are likely to slow down, and the other people are not likely to slow down. And therefore you're having a dystopic, self-defeating. Yes, yes. Right. And so the point is to say, actually, in fact, the good dialogue, the good discussion is: how do we steer this better? What are the ways of doing that, and just contributing to it? And it can't just be, they will shift control to me so that I do it. It's like, that's not a compelling answer for any individual. It's a, here's how I'm going to try to contribute. And that's what I've been trying to do, and that's what I've been trying to get the critics to participate in doing as well. Not to say, hey, ignore all these crazy people, but reformulate your thinking into the things that can help us navigate the boat as we're going down the fast-moving river.

41:04
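Laid out as a grid, the two-by-two Reid describes verbally (our reconstruction of his words, not a graphic from the episode) looks like this:

                                 Heeds "slow down"        Ignores "slow down"
    Cares about humanity         stops building           keeps building
    Doesn't care about humanity  (empty in practice)      keeps building

Only the top-left quadrant responds to the plea, which is his point: a blanket call to slow down selectively sidelines the most safety-minded builders while everyone else keeps going.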

Speaker B

Do you think that it would be good for our society if we were turning on our TVs or if we were going to live events where you had representatives from the Accelerationist camp, the scout camp and the AI Doomer camp engaging in public debates about this?

43:54

Speaker D

Fundamentally, yes. Although, you know, by the way, fear is always the easy one. Like in any new technology, fear will be more rhetorically compelling. It's part of the reason why Hollywood creates tons and tons of movies about evil AI, right? And almost no movies about good AI. And so that debate will have that characteristic. Even if people just said, hey, it's just an objective thing, they'll walk away fearful. But I do think the public engagement with it is useful, and it's part of the reason why I'm here doing this interview with you.

44:12

Speaker B

I don't think there are many people who are listening to this who would disagree with you about the pessimism and the distrust that is growing in our society. We're living at this time when people are regularly telling pollsters that they are worried the future is not going to be better than the past. Telling pollsters that their trust in every institution, my vocation of journalism, government, medicine, it's all in the tank, right?

44:51

Speaker E

Yep.

45:17

Speaker B

Do you see a version of accelerationism as being something that might inspire and unite people? The idea that we can build this technology, we can forge a better future, we can overcome all of the doom and the gloom of our current age. Is that how you conceive of this project?

45:18

Speaker E

I would say there's an opportunity to rebuild all institutions. Right. For the AI age. I think we need to rebuild education, we need to rebuild healthcare, we need to rethink how we do housing. I think disruption represents opportunity, and that is the engine of social mobility. And again, it's like, if you are pessimistic about the future, you will feel like this disruption is happening to you, or not for you. But the whole point is that we are trying our hardest to make it permissionless. You don't need anyone's permission to participate in the acceleration and the growth. And so you could just go out there and build something and capture some of the upside and secure a future for you and your family. And that's the point. It's the only real equalizing force that there is out there. And it's a rising tide that floats all boats, rather than leveling everyone to the same level by having a drought and all the boats hitting rock bottom. You're.

45:37

Speaker B

Saying that, yes, there's a lot of change that's coming, but we don't have to be afraid of it. In fact, if you refuse to be afraid of it and you look ahead with optimism instead of with pessimism, you actually are a part of making a better future more likely. Yes, I think.

46:33

Speaker E

People should be optimistic. We live in a time of extreme opportunity right now. If you believe you can change things, you will actually change things. I'm an example of this. If you had explained to me how much impact it would have on US policy three years ago, when I started an anonymous account to just share some of my thoughts, I wouldn't have believed you. But at the time, I thought it was important and I had to put these thoughts out there. And you just have to start. And I think a lot of people have self-limiting beliefs; they limit their agency. But now is the time of the ability to execute on almost any idea with this amazing technology. So what we are in need of is people of more agency. And pessimism erodes your agency and wears you down. So you need to shed that and embrace optimism and a belief that you can make the future better. Because if you believe that, then you will. It's that simple.

46:51

Speaker B

So I want to go back to this idea of a transformational period that you think is coming our way, because you compared this to the Industrial Revolution, and it is hard to summarize just how impactful that transition was, how tough that transition was. Where, yes, we got the train and the car and electricity and the radio and eventually the airplane and all these amazing technologies. But alongside them we also went through these really tough social changes. You know, the mass population movement into cities, followed by, you know, crime and diseases spreading and then pollution getting out of hand. We had, you know, labor abuses, and then labor unions and labor riots, and then counteractions from the government to tamp down the riots. And it was very chaotic. Eventually you would get the rise of socialism and communism, and pretty much all the stuff that we fought world wars about in the 20th century comes from this transition. Are you envisioning that kind of a tough transformation coming our way? Is it that dramatic?

47:59

Speaker D

I hope not. It's certainly not impossible. And that's part of the reason why I'm focusing on them, and part of the reason why I'm saying, look, we have tool sets that are better, including AI itself, for this transition. I think we've learned lots of different things in these transitions. One is, can we deploy the tool sets to do that? Another one is, you know, what Roosevelt was doing with the New Deal. It's like, okay, what are the things that we can do in order to try to make these transitions less painful, shorter, hitting only smaller percentages of the population in any particular year, et cetera, as ways of doing it? And that's part of where the dialogue and the focus should be around this.

49:09

Speaker B

As profound as technology has been, AI.

50:00

Speaker E

Will be more impactful.

50:04

Speaker D

As we gather this afternoon, we're still in the earliest days of one of the most important technological revolutions in the.

50:06

Speaker B

History of the world. This AI revolution is not made up, it's not overhyped.

50:14

Speaker C

The Last Invention was created as a series to investigate the background and emerging debate around this AI moment. But this is a story that in many ways has only just begun.

50:21

Speaker D

We need to advance legislation that promotes.

50:34

Speaker C

Long term AI growth and innovation.

50:37

Speaker D

Who is determining what's happening? I got a handful of people who are really determining the future of the world.

50:40

Speaker B

That's scary stuff. And as the AI debate moves further into the halls of traditional power and as the effects of AI begin to shape the world around us, the arrival.

50:47

Speaker E

Of this new intelligence will profoundly change our country and the world in ways we cannot fully understand.

50:58

Speaker B

We expect that surprises are coming. We know that what seems normal right now may not be normal for very long. And so going forward, we're going to be speaking with the people who are grappling with exactly how to understand, how to harness, how to regulate, how to adapt to this emerging technology.

51:05

Speaker D

They hail it as the greatest breakthrough.

51:25

Speaker B

For humanity since the Industrial Revolution, maybe the printing press. From lawmakers.

51:27

Speaker D

Of course, it will also make them additional ungodly sums of money. But that does not seem to explain.

51:32

Speaker B

The near-religious fervor of AI's loudest advocates. To skeptics, pretty much this entire generative.

51:38

Speaker D

AI boom has been based on the.

51:46

Speaker B

It's been based on suggesting what this might do versus what it actually does.

51:47

Speaker D

There is no it. All there is is a belief system that we're making God in a box.

51:51

Speaker C

And so while this episode marks the end of our initial limited series run, we would love for you to, as we say, stay with us as our reporting continues.

51:56

Speaker B

And we just want to say thank you to all of the people who have been willing to speak with us, both on and off the record, and of course to you for listening. And we would love to hear from you. What questions do you have about AI? What keeps you up at night? What are you excited about? What would you like us to investigate? Who would you like us to interview? You can reach us by sending an email to hello@longview.report.

52:07

Speaker A

The Last Invention is made by Longview. It's produced by Andy Mills, me, Matthew Boll, Gregory Warner, Andrew Parsons, Megan Phelps-Roper, Seth Temple Andrews and Ethan Minnello. Special thanks to Carmen Hilbert and Jonathan Farber. Music was composed by Scott Devendorf, Ben Lanz, Kobe Beinert and me. Scott and Ben have put out a soundtrack. You can find a link to that album in our show notes. The artwork and design for the show is by Jacob Bol. We're proud to announce that episode one of the series was just named by Apple Podcasts as one of the best episodes of 2025. Thank you to Apple, and thanks to you for listening. We hope you'll continue to share the show with your friends and community as an introduction to what might be one of the most consequential conversations shaping our future. For everyone out there already supporting the work we do, we are deeply grateful. It is because of you that making a series like this is possible. If you would like to become a supporter and learn more about our team, visit us at longviewinvestigations.com or click on the link in our show notes to become a subscriber. We will be back soon. Goodbye for now. This episode is sponsored by Ground News, the app that helps you spot media bias and see a broader picture of the news shaping our world. Get 40% off their Vantage plan at groundnews.com/invent.

52:48