Moonshots with Peter Diamandis

Eric Schmidt: Singularity's Arrival, the 92-Gigawatt Problem, and Recursive Self-Improvement Timelines | 241

44 min
Mar 24, 2026
Summary

Eric Schmidt discusses AI's rapid acceleration toward superintelligence, predicting recursive self-improvement within 2-3 years. He warns about America's 92-gigawatt power shortage constraining AI development and emphasizes the need to compete with China in robotics to avoid losing another technology revolution like electric vehicles.

Insights
  • AI programming efficiency has shifted from 80/20 human-to-AI to 20/80, with systems now capable of autonomous overnight development
  • America faces a critical 92-gigawatt electricity shortfall by 2030, equivalent to roughly 60 nuclear plants' worth of power, for AI infrastructure
  • China's dominance in electric vehicle manufacturing gives them a structural advantage in robotics hardware through shared motor technologies
  • Recursive self-improvement in AI could create millions of AI researchers limited only by electricity, not human constraints
  • Universities should immediately implement prompt engineering courses as foundational skills for all students entering the workforce
Trends
  • Shift from human programming to AI-directed development with overnight autonomous coding
  • Massive data center construction requiring 400+ megawatt facilities spanning half-mile lengths
  • Space-based data centers emerging as a solution to power and cooling constraints
  • Vertical integration becoming necessary for robotics manufacturing due to lack of supply chain
  • Agent orchestration and reasoning systems replacing traditional software development workflows
  • China's open-source AI strategy competing with centralized Western AGI approaches
  • Programming skills becoming obsolete for the general workforce within one year
  • AI safety requiring a potential 'Chernobyl-like' wake-up call for global regulatory action
  • Immigration policy becoming critical for AI competitiveness and talent acquisition
  • Energy permitting and grid infrastructure becoming national security priorities
Companies
Google
Schmidt's former company where transformer, TPU, and DeepMind innovations originated, setting foundation for current AI
DeepMind
Google acquisition for $600M that revolutionized AI, from Go victory to protein folding breakthrough AlphaFold
Nvidia
Dominates AI hardware with complete server architecture control and water-cooled 2-kilowatt chips
OpenAI
Leading AI lab shifting strategy while raising massive funding rounds for AGI development
Anthropic
Created Claude AI system that shifted programming from 80/20 to 20/80 human-to-AI ratio
Microsoft
Major AI player with large enterprise cash flows funding frontier model development
Tesla
Elon Musk's company pioneering vertical integration and automated manufacturing for robotics
SpaceX
Rocket company positioned to benefit from space-based data center deployment opportunities
Unitree
Chinese robotics company demonstrating advanced humanoid robots dancing with humans
Figure
Brett Adcock's vertically integrated humanoid robotics company competing with Chinese manufacturers
Blue Origin
Jeff Bezos' rocket company large enough to launch space-based data center infrastructure
Relativity Space
Rocket company positioned to benefit from space data center launch opportunities
People
Eric Schmidt
Main guest discussing AI development, energy constraints, and US-China competition
Peter Diamandis
Podcast host interviewing Schmidt about AI trends and abundance movement
Larry Page
Pushed for AI excellence at Google and championed DeepMind acquisition
Sergey Brin
Co-founded Google and drove technical excellence in early AI development
Demis Hassabis
Led DeepMind team that solved Go and protein folding, revolutionizing AI capabilities
Jeff Dean
Helped complete DeepMind acquisition and leads Google's AI research efforts
Elon Musk
Pioneered vertical integration and believes robot-building robots are imminent
Brett Adcock
Leading vertically integrated humanoid robotics company competing with China
Sam Altman
Raising massive funding rounds for AGI development and shifting OpenAI strategy
Quotes
"We're 10 or 15% into the impacts of this, and you can see it, you can feel it."
Eric Schmidt
"I keep asking my friends, when does the asymptote arrive and when does the curve slow down? We have not found it yet."
Eric Schmidt
"The American competitor, not enemy, but competitor, is China. They have lots of money, they're very smart, their work ethic is equal or stronger than ours."
Eric Schmidt
"I don't want to lose the robotic revolution, in my view, the way we lost the electric vehicle revolution, at least on the low end."
Eric Schmidt
"Universities should stop everything else and design a course for freshmen which is a prompt engineering class."
Eric Schmidt
Full Transcript
3 Speakers
Speaker A

We're living through a historic moment right now.

0:00

Speaker B

The next thing that's really interesting and terrifying also is recursive self improvement. But we don't have it yet. What we do have, I keep asking my friends, when does the asymptote arrive and when does the curve slow down? It is actually true that there is a limit to our craziness. We have not found it yet. And that's the great thing, frankly, about America. The American competitor, not enemy, but competitor, is China. They have lots of money, they're very, very, very smart, their work ethic is equal or stronger than ours, and they dominate key industries. But at the moment, it sure looks to me like the robotic hardware of China is the winner. I don't want to lose the robotic revolution, in my view, the way we lost the electric vehicle revolution, at least on the low end. It's possible, but it requires. Now that's a moonshot, ladies and gentlemen.

0:02

Speaker A

Eric, do you remember the first time we met?

1:02

Speaker B

Yes. Larry Page introduced me to you because he was on your board.

1:05

Speaker A

Yes. So I got a call from Eric out of the blue, which was a great honor, and he said, Larry says I should meet you. When are you going to be down here, or up here in San Francisco? I was in LA and I said, how about tomorrow?

1:08

Speaker B

Typical Peter.

1:25

Speaker A

And I remember something which was, I love the story. We sat down to lunch at Charlie's Cafe, and of course, I'm running a nonprofit, and my mission is always raising capital for the nonprofit. And so we sit down to lunch, and before we get started, you said, Peter, so what is your highest level of giving, of membership? And I said, well, Eric, it's our Vision Circle, for $2.5 million. And you go, okay, I'm in. Now let's have a conversation.

1:27

Speaker C

I'll buy them all.

1:56

Speaker A

It was crazy.

1:57

Speaker C

Oh, there you go.

2:01

Speaker B

But, you know, when we spoke, the reason I wanted to come here is this has become the epicenter of the abundance movement. And the abundance movement is correct. That's the important thing.

2:03

Speaker A

Thank you.

2:16

Speaker C

Yeah.

2:17

Speaker A

I just want to thank you for all the support you've given me and XPRIZE all these years. So grateful for that. So let's start with a question that I'd love to hear you expand on, which is: we're living through a historic moment right now. Right. Could you define the moment we're in and give us sort of a state of the union of what's going on in AI?

2:21

Speaker B

We're 10 or 15% into the impacts of this, and you can see it, you can feel it. And some of it will happen, some of it will take longer. Right. So, for example, hardware takes longer than software, sort of. Robots take longer than digital systems on traditional hardware. Things like that. The next thing that's really interesting and terrifying also is recursive self improvement. Mm. It's not happening yet. And so it's easy to convince yourself that you're gonna have human agents, sorry, computer agents that are completely human-like, within a year or two. We don't have the science for that yet. People are working on it. I can describe how I think it'll play out, but we don't have it yet. What we do have is reasoning systems that are perfect partners for human beings, for good and bad. Right. And that has a lot of implications. So if we stop today, which we're not, and it's not stoppable or controllable by any government or any single individual corporation, we would still have advanced humanity because of these reasoning agents.

2:48

Speaker A

How fast do you imagine is it going to accelerate?

3:59

Speaker B

There's a thing which I call the San Francisco Consensus. And the reason I call it that is because everyone in San Francisco believes this, everyone I know, anyway, which is that it's easy to understand. This is the year of agents, which we can discuss, why agents will take over everything this year. During this year, the scaling of the use of agents and reasoning will sort of grow at this enormous rate. Everybody's out of hardware, everyone's out of electricity. It's a real boom, right? It's like the biggest boom I've seen, and I've been through three or four of these in my career. In this thinking, once you have recursive self improvement, where the system can begin to improve itself, you have intelligence learning on its own. And it will, in this argument, it will learn faster than we can because we're biologically limited. And the way this is expressed in San Francisco, and I'll give a simple example: you have a tech company with a thousand fantastic AI researchers. So one day they turn on AI research, that is, an AI research agent. Well, how many AI research agents do you have? Well, as many as you're limited by electricity, right? You don't have to feed them, they don't need housing, there's no more housing in San Francisco, you know, all that kind of stuff. You don't have those problems. You don't have an HR department for them, if you will. And you don't have to pay them, you just have to feed them electricity. So how many could you have? Well, maybe a million of these agents. Now, in AI, the way you determine that you've made progress is you have clear metrics showing that the reasoning or testing or whatever the evaluation framework is, is better. Right. So that's what happens. So in that scenario, the slope goes like this, because you're already at this slope, then you add more people, then you get the agents and you go like this. And this is essentially a superintelligence moment. The belief in San Francisco is this occurs within two to three years.
The evidence in favor goes something like this. Claude Code came out a couple months ago, the latest one, Opus, whatever it is, 4.6. Yes, thank you. And everyone I know in the Bay Area that's doing software says it was 80/20, now it's 20/80. The best analysis I can come up with is it's not the Claude Code part, it's that the underlying LLM can produce more reasoning over time, better quality tokens over time. It's a deeper thinker. Right. And all the labs now are competing for that. This is not just the size of the context window, it's actually the reasoning skill and the length of time it can think; it can just think longer and produce more stuff. Right. I moved to the Bay Area when I was 21, and I was a programmer in high school way back when, and I was a pretty good programmer. And I watch what it does and I go, my God, I'm over. You know, there's not a thing that I could do that it cannot do. So when they wrote a C compiler in Rust, I declared, it's over. So I think part of this is because the people who are building it are also seeing the diminution of their own skill. They're being forced to go from programmers, which is what I'm very proud to have been, to being the director of a programming system. Right. And the most likely scenario, by the way, there's a lot of implications for this one, is that it's always been true, speaking as your local arrogant programmer, that the very top programmers were worth 10 times more than the ones right below. There's something special about the mathematical reasoning skills of programmers. Those people will become more valuable, not less valuable, because these systems need to be controlled by humans at the moment. Those people will be capable of grasping the parallelization and the activities of this. It also means that you're going to have, in my view, a relatively small number of very large companies.

4:05

Speaker C

Yeah, yeah.

8:22

Speaker B

And this is a big deal. And a very large number of very small companies because you don't need as many people. And you're watching that play out this month. I mean, this all happened in the last three. I was in one startup I'm involved with. I was talking to the programmer who's a perfectly brilliant young man, and I said, well, what's the truth? He said, well, here's what I do. He's working on UIs of various kinds and he said, I write the spec of what I want and then I write a test function, an evaluation function, and then I turn it on. I said, what time? And he goes, seven o' clock in the evening. And I go, okay, what do you then do? Well, he has dinner with his wife and he goes to sleep and I said, do you wake up? No, I sleep very well. When does it finish? Oh, four in the morning. And then he gets up, has breakfast, does whatever he does and then he sees what's been invented. I mean, it's mind boggling. And this stupid example I used with this young man, this is what the power of these systems are. If you can define the evaluation function and you can let it run, and if you have enough hardware, you're inventing worlds. I mean, this stuff would have taken me six months and 10 programmers at Google to do the same thing. This poor guy's sleeping.

8:23

Speaker C

It's so funny you say that, because I was literally backstage, they said, Eric Schmidt's coming now, and I have my lid open on my Mac and I'm trying to get the jobs onto the cloud so I can close the lid. Because if you close the lid, it'll break the jobs. So I've got these six computers.

9:41

Speaker B

You can leave it open on this, you know, it's like important what you're doing. Don't interrupt for.

9:57

Speaker C

You're important too, you know. No, it's crazy because when we got together in Davos, just what, two months ago, it was in this kind of auto complete mode. You'd write the code and then it would help you get it done. You're about 10 times more efficient, but you're still babysitting it now. Literally it's working right now. When I get off stage, it will have solved six problems that I launched.

10:02

Speaker B

And I appreciate the excitement in the industry, but I can tell you, I used to work on BSD. I basically worked on Unix at Berkeley, at Bell Labs, and on BSD Unix. And programmers invent what they need. And so we invented the first email system and the first messaging system, and nobody thought about it. It was like, well, we just need this thing. So one key thing to understand about digital intelligence is the first inventors are the people who are solving problems of themselves, who are programmers. You shouldn't be surprised by this. You should have expected this. The other thing that's interesting about programming is it's both scale-free, which means there's no particular limitation except electricity. You don't need a lot of data and you already have GitHub and the equivalents. And it's also a fairly limited language set. So the number of language components, if you will, compared to human language, is much smaller. Small language, clear objective function, all you need is electricity. Now how far can this go? It'll get to the point where you don't have the ability to do completely new things.

10:25

Speaker C

Yeah, isn't it really quaint and crazy to think that we can sit here and say, yeah, I wrote a ton of code when I was younger, no one will ever do that again after the end of this year. It'll be like riding a horse, you know, be like paint skills that we all used to have.

11:27

Speaker B

But I do have a proposal for universities. Those of you who are associated with universities: you should stop everything else you're doing in the university right now and design a course for freshmen, men and women starting in September, which is a prompt engineering class.

11:41

Speaker A

Why university? Why not high school?

11:58

Speaker B

God, you're so aggressive, Peter. Let's start with universities; you can improve my idea. I thought 18-year-olds would be young enough. Maybe you think it's younger. Here's the most important thing: spend a quarter or a semester. The first thing they learn in university is how to use these tools. Universities are completely opposed to my idea, as usual, because it violates every one of their tenets. But if you think about the student, and I mean every student, liberal arts, you know, math, whatever, this platform will be the expression platform for their art, their music, their writing and so forth. Why wouldn't you teach them immediately? Peter, improve my proposal.

12:01

Speaker A

No, I just feel that AI is going to impact every student in high school today, and that they're living an unnatural life by not engaging with it. And when they hit universities, for those that still exist,

12:44

Speaker C

well, plus your kids are that age and so they're literally right now doing exactly what you're describing.

13:04

Speaker B

People here who have teenagers, you know what I'm talking about, because they're all in it already. So I think that's an improvement to my argument. There's a problem of age restriction. You really have to think about vulnerable teenagers with this technology. I did some analysis of where the real problems are with this stuff. A simple summary is that at some point there will be a jobs impact from this stuff. We're seeing it in software and we're seeing it in certain customer service industries, not across the board. At some point that will happen. That's an issue. Another one is how do we as a country maintain our moral values while we're also racing against China. Another one is the impact on young people. It is not okay for 13-year-olds to be committing suicide because of an LLM. It's just not okay. It needs to be addressed. It needs to be addressed right now, for sure. And there's all sorts of other issues. The other one I came up with was in agent orchestration. Agents can be combined. I've always been worried that when you put the agents together, especially if they're from incompatible vendors, you get unpredictable effects. So these are problems to be solved. So we herald the future and we solve the problems that it brought.

13:09

Speaker C

So we're going to talk about China and government and jobs. But before we do that, I want to say I'm in this savor-the-moments kind of mode right now, because I feel like the world a year from today will be nothing like the world today. And everything we're doing right now, I've enjoyed so much for so long, and I just want to savor the moment, but reminisce for one minute about the fact that while you were running Google, the transformer got invented there, the TPU got invented there. Demis Hassabis solved protein folding, which is now universally used. I think it does the work in an hour that used to take a PhD student four years. Four years, yeah. It's like 300 million times more efficient. All of that and all the diaspora from that, all the people working in the field in San Francisco, as you mentioned, they all were your people. And so you were there at the creation of everything we're experiencing right now. Is there anything that profoundly strikes you about that moment? Did you even realize it at the time?

14:28

Speaker B

No, I think when you're making history, you typically don't know it. I give a lot of credit to Larry and Sergey because they were ahead of me. I'm an operating CEO, and they pushed, pushed and pushed for excellence. And I'll give you an example from the early years of Google. My favorite interaction was one day I said, we need to hire some people doing Java. And Larry and Sergey said, this is the stupidest idea we have ever heard. And I could never tell with them whether they were being serious or not, or they were just joking with me. But their argument was that real programmers were programming one level lower. Today Google has many thousands of these things. But they were so precise and so driven to excellence in technology that I could not fool them. I couldn't market around them. I needed to have the technical expertise. And they'd say, oh, that's boring, don't do that. That's another one of your ideas, right? We want a new idea. And I give them a lot of credit for it.

15:29

Speaker C

What about the TPU in particular? That to me because I didn't even hear about it until much later and it takes years to design and build your own internal chips and now it's about to explode. I don't know how much is public or not, but it's.

16:29

Speaker B

Well, so the TPU version 1 was essentially a matrix multiplier of a particular kind. When they went to version two, they changed the algorithm in a complicated way. And it's particularly good for inference. Whether it's brilliance or just luck, those decisions made 10 years ago set up the TPU as the perfect inference engine, and for everybody's benefit. Inference is what the reasoning steps that I'm describing run on. So Google is particularly well positioned. As you know, Nvidia purchased Groq for the reason of getting that integrated.

16:44

Speaker C

Trying to catch up to what you thought of 10 years ago.

17:17

Speaker B

And what's interesting about Nvidia, if you look at them, I was looking at the Rubin architecture they managed to do. Nvidia managed to do what Intel could never do. Intel could never get control of the complete server architecture, and they tried. Nvidia has managed to build real supercomputers that you can really buy, with enough time and money and so forth, and it will really be delivered to you. And you just do the whole thing. These are major industrial achievements, and that's why both companies will do incredibly well for that.

17:19

Speaker A

Eric, in the AI exponential growth right now, talk to me about where the constraints are. You were in Congress talking about energy, chips, people, capital. Where are the constraints right now?

17:49

Speaker B

I started a data center company with my friends. In my testimony, I said there was an estimated 92 GW shortage of power in America between now and 2030. And by reference, a nuclear power plant is about 1.5 gigawatts. So that's about 60 nuclear plants. And we're doing essentially zero or one, right, depending on how you count. So I got interested in the question of what's the real resource constraint in America? And it's electricity. We have the universities, we have the smart people, we have the economics, and we have these amazing finance people who will give all of us billions and billions of dollars on a wing and a prayer. There's no other country where the finance people are sufficiently crazy to do that. It's not true in China, it's certainly not true in Europe. And these guys are incredibly, incredibly jealous of the American financial system. So I always start: thank you to the finance people for funding our dreams, whether they work or not. Thank you. So there's usually a retort at this point where people say, well, the algorithms will ultimately require less energy. I am sure that that is true. But there is this property that as the power of the hardware goes up and the algorithms become more efficient, you don't need less power, you need even more power and even more computers, because we discover new uses.

18:04
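The plant-count arithmetic Schmidt cites can be checked in a couple of lines. The 92 GW shortfall and 1.5 GW-per-plant figures are his numbers from the episode; the script itself is just an illustrative back-of-the-envelope sketch, not anything from the show.

```python
# Back-of-the-envelope check on the figures cited above.
shortage_gw = 92   # projected US power shortfall by 2030 (Schmidt's estimate)
plant_gw = 1.5     # approximate output of one nuclear power plant, per the episode

plants_needed = shortage_gw / plant_gw
print(f"Equivalent nuclear plants: {plants_needed:.0f}")  # ~61, i.e. "about 60"
```

Dividing the shortfall by per-plant output gives roughly 61 plants, which matches the "about 60 nuclear plants" figure in the testimony.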

Speaker A

Jevons paradox.

19:38

Speaker B

It's called Jevons paradox. And so I think that because humans have trouble with exponentials, everyone says, oh, well, in six to nine months it'll be the bubble and so forth and so on. There's no sign of this. I and a team have been working on this for years. Essentially, the scaling laws are not done yet. I keep asking my friends, when does the asymptote arrive and when does the curve slow down? We've not seen it yet. There will be one, right? It is actually true that there is a limit to our craziness. We have not found it yet, and we're running to the wall. And that's the great thing, frankly, about America.

19:39

Speaker C

You think it's a limit to the capital or a limit to. If you just add more and more and more scale and parameters, something just doesn't work.

20:25

Speaker B

Well, the first question is, is there a limit to the capital available? A gigawatt of power corresponds to about $50 billion of hardware, software, and data centers, on the order of, depending on what numbers you use. Okay, so 100 gigawatts is, do the math, can we raise $5 trillion over five years? Yeah. That's the strength of America. Could we double that? The data center build-out is already 1% of America's GDP growth.

20:34
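The "do the math" step here is simple to make explicit. The $50 billion-per-gigawatt and 100-gigawatt figures are Schmidt's rough estimates from the episode; the sketch below only multiplies them out.

```python
# Capital required for the build-out, using the rough figures from the episode.
cost_per_gw_usd = 50e9   # ~$50B of hardware, software, and data centers per GW
buildout_gw = 100        # ~100 GW build-out (close to the 92 GW shortfall)

total_usd = cost_per_gw_usd * buildout_gw
print(f"Total capital: ${total_usd / 1e12:.0f} trillion over five years")  # $5 trillion
```

That $5 trillion figure is what prompts the question of whether American capital markets can fund it, which Schmidt answers in the affirmative.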

Speaker A

Right now we're back to Apollo Program levels.

21:11

Speaker B

Thank you. And the current estimate is that 10% of the electricity in the United States will be used in data centers. And these are not the data centers I used to build at Google, which seem tiny by comparison; they were immense at the time. The standard data center that's being built is on the order of 400 megawatts, plus or minus, and they're about a half a mile long and about 500 feet wide.

21:13

Speaker A

Right.

21:41

Speaker B

And they're essentially airflow machines. Take the air in, send the air out, cool it in the middle, typically using air cooling. And then they have a water system inside for the chips. Using Nvidia as an example, the chips are water-cooled. And also the HBM3E, and now HBM4, memories put out so much heat that they have to be water-cooled. The chips are 2 kilowatts. I mean, this is insane. These things will kill you.

21:41
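The 400-megawatt facility and 2-kilowatt chip figures from this exchange imply a rough ceiling on chips per site. This is purely illustrative arithmetic under the assumption that every watt went to chips; real facilities lose a large fraction of power to cooling and conversion overhead, so the true number is well below this bound.

```python
# Upper bound on 2 kW chips in a 400 MW facility, ignoring cooling/PUE overhead.
facility_w = 400e6   # ~400 MW data center (figure from the episode)
chip_w = 2e3         # ~2 kW per water-cooled chip (figure from the episode)

max_chips = facility_w / chip_w
print(f"Upper bound on chips per facility: {max_chips:,.0f}")  # 200,000
```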

Speaker C

You want to hear something truly astounding and funny? In hindsight, when you bought DeepMind, everybody thought it was like 800 million bucks or something.

22:05

Speaker B

600 million.

22:12

Speaker C

600 million bargain. Everyone thought, why on earth would you waste $600 million on this zero revenue AI? All it does is play go. And then years later, it came out that the acquisition paid for itself just by controlling the air conditioning more efficiently in the data centers. The entire acquisition price was paid off, and that became the AI that's changing the world.

22:13

Speaker B

And the credit for that one actually goes to Larry Page. Larry had studied AI when he was a Stanford graduate student, and we always deferred to him in this thing. And he said, this is the best team. And I think Elon and Larry competed over it; there was some complicated kerfuffle there. And so Jeff Dean, who's the chief scientist, went over, and then he and I basically, you know, finished the deal. And I still remember, you know, there on one floor, these sort of British people led by a sort of Greek-British person, Demis. And they were smart, but Google's full of other smart people. To give you an example, in 2016 Demis announces that we're going to win the game of Go. And by the way, at this point, they're a separate group. We've let them alone because they have to grow and figure out what they're doing and all that. And this is the patience of capital: we could let them do that. We didn't require that they do anything. And so he goes, I'm going to go, and I say, well, I'm going to come too. And I said, okay. So I fly to Korea and it's all one floor, and I go and I meet the team that's winning the game. And of course, all of these Koreans, they're all very excited about this because they know they're going to beat the computer. And so the Koreans are in one room, I'm in another. I go to the Korean room and they're all saying, we're going to beat the crap out of this Google room. And I go into the Google room and it's very quiet, and there's a monitor and there's basically what I now understand is an RL prediction mechanism of whether we're winning or not.

22:35

Speaker C

Cool.

24:13

Speaker B

Okay. And it starts at 50/50. So I watch, I go listen to the Koreans talk for a while, and then I go watch the screen and it goes to 51%. Okay. And then it goes to 52%. And then David, who is the architect, says, well, we just plan for it to get to infinity. Okay. So basically it's the abundance theory. So it's just boom. And all the humans are crushed. The DeepMind people say, yeah, it did what it was supposed to. Yeah, welcome. And then I understood the genius of the DeepMind people. And you can see this today with Gemini. Gemini 3 is probably the broadest of the non-Chinese systems in terms of its depth, because it's multilingual, multimodal and so forth.

24:13

Speaker C

So many moments in your life are just turning points in history and I don't know if you realize them in the moment, but that was one of the last moments where humans used to look for challenges where the computer could try to catch up, like chess. And then I think Go was the end point.

25:06

Speaker B

And we knew that, by the way. We understood that Go, the game of Go, was sort of incomputable by normal algorithms. There was lots of math that said you couldn't solve it. And so it was the combination: they came up with a two-tree model with two different RL trees. So one of the other things I learned about these things is it's not just, write me a Go program that will win the game. You actually have to understand the game and so forth. So they took the same team, they got bored with Go, having won, so then they took the same team and had them work on protein folding.

25:21

Speaker C

Yeah.

25:53

Speaker B

And in protein folding, they took a whole bunch of protein scientists, whom I know.

25:53

Speaker C

Can we just think about the genius of that? Like nobody would ever connect the game of Go, but that's solving all of biology.

25:57

Speaker B

But Demis had always wanted to work on this, and Larry and Sergey were very interested in this. And protein folding is the perfect problem because you've got a defined endpoint. So what happens is people get excited about AI, but you need to have a validation function, because these things don't have common sense yet. Right. You have to show them what good is. Ultimately they produced this thing called AlphaZero, which essentially self-learns.

26:03

Speaker A

Let's talk about data centers in space. I'm in favor of them. Eight or nine months ago, no one was discussing this. I mean, all of a sudden...

26:28

Speaker B

Do you know why I'm in favor of them?

26:37

Speaker A

I do, but you can mention it as you wish, but all of a sudden everybody's talking about them. What are your thoughts?

26:38

Speaker B

I'm part owner of a rocket company and we need.

26:49

Speaker A

Which I love. I love having you come into the space community.

26:52

Speaker B

You understand this far, far better.

26:55

Speaker A

Rocket science is named that for a reason.

26:58

Speaker B

Rocket science is really, really hard. And I don't know that much about rockets, although I certainly know how to manage tech people. But I think that the opportunity is large and interesting. There are challenges. There's an issue of getting heat off of it, again, you know this much better than I do, because you don't have oxygen, and you also have radiation issues. Those have to get addressed.

27:01

Speaker A

But it makes the business plan for every rocket company that's large enough. Right. The small guys aren't going to launch it. But Relativity Space and Blue Origin and SpaceX, I mean, Elon's predictions were, I think a launch per hour to populate the constellation he wanted. But do you think technically we're going to get there?

27:24

Speaker B

Well, the technology is understood.

27:49

Speaker A

What's interesting technically for the data centers in space? The heat dissipation?

27:51

Speaker B

Yeah, the technology is understood. To me it's a business question. And the question is where should the data center be? Should it be in space, with these other issues but also other benefits, including infinite power and so forth? Or on the ground, where you have fiber and it's not shaking too much?

27:55

Speaker C

Yeah, I mean, the energy argument says space wins by far. And I think the cooling is a very big challenge, but I think it's largely figured out now. But then there's the politics of space. One of the turning points in AI history was you getting in front of Congress and saying, hey, we need to find almost 100 gigawatts. At the time it seemed outrageous, and now, of course, it's mainstream, and it looks like the crazy investors are solving the problem, the unleashing of the money.

28:11

Speaker B

Welcome to American capitalism and the tech industry. I mean, we have another set of problems.

28:39

Speaker C

Well, if the next frontier is space, then... But there's no investment community in space. But there's also no military jurisdiction in space. Or maybe there is.

28:44

Speaker A

I would have never thought my childhood dreams of going to the moon and Mars would be fueled by data centers. Never.

28:52

Speaker B

A mind can't carry you at the moment, but it can in the future. Maybe all the way.

29:00

Speaker A

I read a Time op-ed piece you wrote last night, "China Can Dominate the Physical AI Future." Can you summarize that for us? It was important.

29:07

Speaker B

In the geopolitical context, and I've said this many times, I'll say it again. The American competitor, not enemy, but competitor, is China.

29:19

Speaker A

By the way, I think it's an important distinction for you to make, so thank you.

29:31

Speaker B

Not enemy. Competitor. And how do you understand them as a competitor? They have lots of money, they're very, very smart, their work ethic is equal to or stronger than ours, and they dominate key industries. Right. With respect to robotics, we somehow decided it was okay for them to dominate the electric vehicle industry. This was an error. To be very clear, it's an error. If you don't understand that it's an error, it's because we don't allow their cars in. Spend some time outside of this country in Chinese cars. Trust me, they are real competitors. They've done a great job. As I understand it, China is capable of vertical integration and building these gigafactories at a scale that we can't, for all sorts of reasons. That's got to get addressed.

So if you want to compete, and I want to compete with China and win, competitor, not enemy, I want us to have the same kind of system. In robotics, it turns out you can understand robots as essentially actuators, these little stepper motors, click, click, click, click, click, and a brain, ignoring the appearance and the googly eyes and all that kind of stuff. It turns out that the electric vehicle industry produces the same kind of motors and the same kind of systems. They have an expertise that we don't. My own view is that, at least for very low cost, China is going to win that. That was what I was trying to say in that piece. And I worry about that.

Now, today, these are not particularly useful. You know, they're fun toys. They're a replacement for the dog if you get mad at your dog. Sorry, I love dogs, but you get the idea. So we need to address this. But at the moment, it sure looks to me like the robotic hardware of China is the winner at the low end. I'm not talking about the high end. I'm not talking about the expensive stuff. I'm not talking about industrial robots. And if you're confused, watch the Unitree robot dance with the humans. It came out about a month ago.

29:33

Speaker A

We have Unitree here in the tech hub, and the co-founder will be on stage with us later today.

31:41

Speaker B

Pay attention to them. They're very, very impressive. I spent some time with them last time I was in China, and they're one of many. And the way China works is that they have brutal competition. Brutal, brutal. It's unbelievable. I was talking to a friend I teach with at Stanford, and he said, you know, in China we don't have the board dinner; we have a two-hour meeting and we get back to work. And there's no preamble. We don't say hello, how are you, how's the family, that kind of stuff. We're boom, boom. Right. It's just cultural.

31:45

Speaker C

Yeah.

32:17

Speaker B

And the workaholism, the work ethic, the precision, and the scale that are possible in China make them a real competitor. I don't want to lose the robotics revolution, in my view, the way we lost the electric vehicle revolution, at least on the low end.

32:17

Speaker C

Interesting, because the Chinese model very much has a well-built-out supply chain with many vendors in the loop. But we've been on a worldwide tour of all the humanoid robotics companies, just by coincidence, I guess. And when you look at the Gigafactory, at Elon's vertical integration, and also at Brett Adcock at Figure, it's the same thing: it's all vertically integrated. Why? Well, because there's no vendor. They have no choice.

32:34

Speaker B

So it appears that to get to abundance, and again, this is the abundance group, the abundance club, you drive prices down and you get vertically integrated. And Elon, in our country, pioneered that, to his credit. Right. You know, the old joke about Google was that we would build anything, including the buildings. Well, Elon is actually doing that. And why? Not because he's insane, but because that's how you actually drive cost down.

32:56

Speaker C

He truly believes, and I'd love to get your take on this, he truly believes that the robot building the robot is imminent. And I didn't get it until we toured the Gigafactory and I realized that almost all of it's automated already. It's just the last piece, the human controlling a few knobs, and that, you know, the humanoid robot can actually do.

33:25

Speaker B

Let me define the boundary, because it's important. Let's use batteries, for example. Batteries are predictable, straightforward manufacturing processes at huge scale. Right. Things like that will have gigafactories. One of the questions, and I of course used LLMs to do my deep research as a new person in the rocket company, was how much of the human labor to build a rocket could be replaced by robots. The current limit, which of course will change, is that in our company we have these extraordinarily talented, essentially assembly people. They're more than welders and they're more than mechanics. They understand precisely how the tubes and so forth and so on go together, and they work to precise tolerances. That kind of assembly is beyond current robots. I'm sure it will eventually show up, but not for a long time.

33:44

Speaker A

People don't realize the majority of the cost of a rocket is labor. It's not materials. Fuel is 10%.

34:40

Speaker B

And furthermore, when they get inside the rocket, they understand what they're doing and they see what's wrong and they use human judgment. We don't have those systems yet. Now, perhaps we will in the future, or perhaps this will be one of the last things to go. Right. But at the moment, high skilled mechanical labor is very important. Low skilled labor of any kind gets swept up.

34:45

Speaker C

Well, so both of those: there's AI self-improvement, and then there's robots building robots, robot self-improvement. Both of those are loops. We talk a lot about closed loops being.

35:09

Speaker B

If I can interrupt you.

35:20

Speaker C

Yeah, please.

35:20

Speaker B

The term I like to use is learning loops. In a business, try to figure out all the different learning loops, and then try to accelerate the learning. Fastest learner wins. Sorry.

35:21

Speaker C

Well, I'm going to lead the witness here a little bit, but if you asked Daniela Rus over at CSAIL a year ago, or Erik Brynjolfsson at Stanford a year ago: do we need one more big breakthrough in AI core science, or will scaling what we've already got lead to self-improvement? And then that will be all we

35:31

Speaker A

need to get to AGI.

35:49

Speaker C

To get to AGI, and then self-improvement.

35:50

Speaker B

I spent the last week doing RSI reviews, recursive self-improvement reviews. The scientists do not yet agree on which exact approach will work, so I think it's too early to answer that question. There's evidence that it will work. There are tests in the lab that show it, but they show it in limited cases that are kind of demos. Right. Real recursive self-improvement is the following: start now, learn everything, discover things, and tell me what you learned. That query doesn't work yet.

35:53

Speaker A

We're seeing all of the frontier labs sort of constantly leapfrog each other.

36:27

Speaker B

Right.

36:32

Speaker A

I mean, literally every week. It's the new model.

36:32

Speaker B

Unbelievable.

36:34

Speaker A

And it's extraordinary. Do you imagine they're all converging towards the same endpoint or is anybody going to pull ahead?

36:35

Speaker B

So if you go back to this question of capital: how much room is there in the world for these companies? How many can there be? I'm going to make some numbers up, and these are made up. I think there's room for at least 10 in the world at this scale. I think there'll be a few in China. I think the majority will be in the United States, the usual suspects, most likely. There might be one or two in Europe, depending; their electricity costs are a problem. There might be one in India. Right. There's not going to be one in Russia because of the war and so forth and so on. So can the world accommodate 10? Yes. Do they track together? I don't think so.

One of the key things to understand about China is that China's approach, which has produced DeepSeek v4, Qwen, Kimi version 3, et cetera, with more coming, is all open source, open weights. They've managed to do this under our chip limitations against them, which annoy them no end. It shows you how clever they are. The Chinese strategy is also a bit different: it's less central computing and much more edge computing, which has to do with enveloping their Chinese customers with AI around them all the time. We're much more AGI- and ASI-centered, which is fine. So the patterns are diverging.

Within the companies, it's a jumble now. But fundamentally, Microsoft and Google have these large cash flow streams from enterprise, so they can fund this. Anthropic has done a fantastic job of raising money from those other companies, and they use, as you know, Google TPUs, and they've become the sort of leading player with the Claude API within the enterprise agent system. And OpenAI is now shifting a bit of its strategy to include the new things it's doing. I don't think we can predict. But the key thing to understand is that they need so much money. Right? I mean, look at what Sam is trying to raise. They need so much money that they're forced into these situations where they have to win those battles, and they're businesses. I mean, this is all good. I think in a year we'll know better the answer to your question.

36:45

Speaker A

I have two questions I want to close us out with that I think are important. You made a statement a couple of times, once on our podcast, once elsewhere, that regarding AI safety, the world may need to have a modest Chernobyl-like death event in order for us to wake up. Do you still believe that's the case?

39:01

Speaker B

Yeah, and by the way, I'm not endorsing that. I'm saying it as a descriptive, not as a prescriptive. What are the real dangers of this? There are biological dangers. There are obviously the dangers to kids and to democracies and so on. But let's think about a biological attack or a nuclear attack that's spawned by these things. It may take such a tragedy, hopefully a small one, to awaken the world to understand that these things do have negative power. And so I can imagine, and I'm making this up, that some bad thing happens, and then all of the leaders, China, the U.S., we all have a meeting, and we basically say: what are we going to do? We're in brutal competition, we hate each other, I don't like you, we don't speak the same language, blah, blah, blah. But we are all in it together on this issue. My sense is that will happen, but I don't know when.

39:26

Speaker C

We had a congressperson on this stage last year, and one of the crowd asked, how much time do you spend talking about AI in Congress? Well, it's definitely way less than 1%. Yeah. And so without that wake-up call, I don't see how you get there.

40:31

Speaker B

I mean, governments. I spend lots of time with governments. Governments are super busy. Right. And they're driven by political things, at least in democracies, by political sentiment. I'd like us as a nation to focus on the following. I want to win the AI race. I want us to do whatever it takes to do that. This government is doing a very good job of making energy permitting more accessible; the rate at which data centers are getting built has now accelerated, solving the grid problems. I also want lots of immigrants in our country, because those immigrants, at least high-skilled immigration, is what we need. We need the smartest people in the world on our side to build these systems. This is a unique moment in history. Right.

40:44

Speaker A

I want to take us home on a positive note. You said we're going to get to ASI at some point, whether it's two years out, five years out, someplace in this next decade. So the question is, what steps can we take to steer artificial superintelligence towards abundance, towards uplifting humanity, and in alignment with humanity, to make this abundance thesis materialize? What's your advice to us, to companies, to governments?

41:37

Speaker B

There's an over-reliance in our society on people like me to work on this. Why don't we have the smartest people in politics, history, human psychology, governance, and ethics working together to make sure this stuff stays aligned with human values? I want the system that we build in America to reflect American values: the values of freedom, freedom of speech, and freedom of association. All those things you learned in college and, excuse me, in elementary school and high school are still important to our nation. They're the enablers for the next generation of our children and grandchildren. I desperately want that. And I don't want America to ever get on the wrong side of that battle. There's lots of people working on this. Lots of people understand the technical details. I happen to run an informal group that discusses this every week, so it's possible. But it requires political will, and it requires an understanding that this can be done without screwing up the genius of America. Right? In other words, I'm not suggesting slowing anything down. And you said it so well: shaping it, making sure we don't cross lines. Like I already mentioned, the underage kids problem. That's a line we can't cross. We've got to solve that problem. There are others.

42:12