Dwarkesh Podcast

Elon Musk - "In 36 months, the cheapest place to put AI will be space”

170 min
Feb 5, 20262 months ago
Listen to Episode
Summary

Elon Musk discusses his prediction that space will become the cheapest place for AI compute within 36 months due to energy constraints on Earth. He covers SpaceX's plans for orbital data centers, Tesla's humanoid robot Optimus, xAI's approach to AGI, and the challenges of scaling manufacturing and energy production to support exponential AI growth.

Insights
  • Energy availability, not chip production, will be the primary constraint for AI scaling in the near term, making space-based computing economically compelling
  • Humanoid robots represent the key to solving manufacturing bottlenecks and competing with China's labor advantage through recursive self-improvement
  • The path to AGI requires solving digital human emulation first - creating AI that can operate any computer interface before physical robotics
  • Government represents the biggest risk for AI misuse due to monopoly on violence and lack of market accountability mechanisms
  • Success in hardware-intensive industries requires obsessive focus on identifying and solving limiting factors rather than optimizing non-bottlenecks
Trends
Space-based AI compute becoming cost-competitive with terrestrial data centersTransition from human-dominated to AI-dominated manufacturing and servicesVertical integration becoming necessary for companies requiring massive chip volumesEnergy production constraints forcing geographic shifts in AI developmentGovernment fraud detection and efficiency becoming AI-enabledHumanoid robotics enabling recursive manufacturing improvementsDigital human emulation as the next major AI capability milestonePrivate space infrastructure supporting commercial AI operationsSupply chain bottlenecks driving internal manufacturing capabilitiesAI safety through truth-seeking rather than political correctness
Companies
SpaceX
Planning orbital data centers and mass space launches for AI infrastructure
Tesla
Developing Optimus humanoid robots and AI chips for manufacturing automation
xAI
Musk's AI company focused on truth-seeking AGI and digital human emulation
TSMC
Primary chip manufacturer that Musk says is at capacity limits for AI chips
NVIDIA
GPU manufacturer whose chips are central to AI training infrastructure
OpenAI
Described as revenue-maximizing corporation rather than research lab
Anthropic
Praised for AI interpretability research and safety work
Samsung
Secondary chip manufacturer Tesla uses alongside TSMC for production
ASML
Critical semiconductor equipment maker that China cannot access due to sanctions
Apple
Mentioned for aggressively recruiting Tesla talent during their car project
Google
Referenced as example of company with primarily digital output
Microsoft
Cited as example of valuable company with digital-only products
Meta
Listed among companies with purely digital business models
BYD
Chinese EV manufacturer reaching Tesla production levels
Unitree
Chinese robotics company selling humanoids at lower prices than planned Optimus
People
Elon Musk
Main interviewee discussing his companies' AI and space strategies
Dwarkesh Patel
Podcast host conducting the interview
Jensen Huang
NVIDIA CEO referenced regarding chip pre-payment strategies
Arthur C. Clarke
Sci-fi author whose 2001 Space Odyssey influenced Musk's AI safety thinking
Robert Heinlein
Author of 'The Moon is a Harsh Mistress' featuring lunar mass drivers
Wernher von Braun
Rocket engineer example of truth-seeking despite political circumstances
Sam Teller
Mentioned as accompanying the host on a Starbase visit
Quotes
"My prediction is that it will be by far the cheapest place to put AI will be space in 36 months or less. Maybe 30 months."
Elon Musk
"We are 1000% going to go bankrupt as a country and fail as a country. Without AI and robots, nothing else will solve the national debt."
Elon Musk
"I call Optimus the infinite money glitch because you can use them to make more Optimuses."
Elon Musk
"Government is just the biggest corporation with a monopoly on violence."
Elon Musk
"It's better to err on the side of optimism and be wrong than err on the side of pessimism and be right for quality of life."
Elon Musk
Full Transcript
3 Speakers
Speaker A

So are there really three hours of questions or are you fucking serious? Yeah, you don't even have a lot.

0:00

Speaker B

To talk about, Elon.

0:07

Speaker A

Holy book, man.

0:08

Speaker C

I mean, it's the most interesting point. All the storylines are kind of converging right now, so we'll see how much.

0:10

Speaker A

Almost like I planned it. Exactly. That would never do such a thing.

0:17

Speaker B

So, as you know better than anybody else, the total cost of ownership of a Data center, only 10 to 15% is energy. And that's the part you're presumably saving by moving this into space. Most of it's the GPUs. If they're in space, it's harder to service them or you can't service them, and so the depreciation cycle goes down on them. So it's just way more expensive to have the GPUs in space, presumably. What's the reason to put them in space?

0:23

Speaker A

Well, the availability of energy is the issue. If you look at electrical output outside of China, everywhere outside of China, it's more or less flat. It's very. Maybe a slight increase, but pretty close flat. China has a rapid increase in electrical output. But if you're putting data centers anywhere except China, where are you going to get your electricity? Especially as you scale, the output of chips is growing pretty much exponentially, but the output of electricity is flat. So how are you going to turn them with chips on? Magical power sources. Magical electricity fairies.

0:45

Speaker B

You're famously a big fan of solar. 1 terawatt of solar power with a 25% capacity factor, like 4 terawatts of solar panels. It's like 1% of the land area of the United States. And that's like, far. You were in the singularity when we got 1 terawatt of data centers. Right. So what are we running out of?

1:25

Speaker A

Exactly how far into the singularity are you, though?

1:41

Speaker B

You tell me.

1:44

Speaker A

Yeah, exactly. I think we'll find we're in the singularity and like, okay, we've still got a long way to go.

1:45

Speaker B

But is this like a. Is the plan to put it in the space after we've covered Nevada in solar panels?

1:50

Speaker A

I think it's pretty hard to cover Nevada in solar panels. You have to get permits from. Try getting the permits for that.

1:56

Speaker B

So space is really a regulatory. It's really a regulatory play. It's harder to build on land than it is in space.

2:02

Speaker A

It's harder to scale on ground than it is to scale in space. But also, you're going to get about five times the effectiveness of solar panels in space versus the Ground and you don't need batteries. I almost wore my other shirt, which says it's always sunny in space, which it is. So, because you don't have a day, night cycle or seasonality clouds or an atmosphere in space, because the atmosphere alone results in about a 30% loss of energy. So any given solar panels can do about five times more power in space than on the ground. And you avoid the cost of having batteries to carry you through the night. So it's actually much cheaper to do in space. And my prediction is that it will be by far the cheapest place to put AI will be space in 36 months or less. Maybe 30 months. 36 months, less than 36 months.

2:08

Speaker B

How do you service GPUs as they fail, which happens quite often in training?

3:19

Speaker A

Actually, it depends on how recent the GPUs are that have arrived. I mean, at this point, we found our GPUs to be quite reliable. There's infinite mortality, which you can obviously iron out on the ground. So you can just run them on the ground and confirm that you don't have infant mortality with the GPUs. But once they start working, they're out actual reliability. Once they start working and you're past the initial debug cycle of Nvidia or whatever, whoever's making the chips could be Tesla AI 6 chips or something like that, or it could be TPUs or trainiums or whatever the reliability is. Actually, they're quite reliable past a certain point. So I don't think the servicing thing is an issue. But you can, mark my words, in 36 months, but probably closer to 30 months, the most economically compelling place to put AI will be space. And then it'll then get ridiculously better to be in space. And then the scaling. The only place you can really scale is space. Once you start thinking in terms of what percentage of the sun's power are you harnessing, you realize you have to go to space. You can't scale very much on Earth.

3:24

Speaker B

But by very much, to be clear, you're talking like terawatts.

4:47

Speaker A

Yeah, well, all of the United States currently uses only half a terawatt of power on average. So if you say a terawatt, that would be twice as much electricity as the United States currently consumes. So that's quite a lot. And can you imagine building that many data centers, that many power plants? It's like those who have lived in software land don't realize that they're about to have a hard lesson in hardware, that it's actually Very difficult to build power plants. And then you don't just need power plants, you need all of the electrical equipment. You need the electrical transformers to run the transformers, the AI transformers. Now the utility industry is a very slow industry. Pretty much they impedance match to the government, to the public utility Commission. So they impedance match literally and figuratively. So they're very slow because their past has been very slow. So trying to get them to move fast is like, you know, like if you try to do an interconnect agreement with have you ever tried to do an interconnect agreement with a utility at scale? Like with a lot of power?

4:51

Speaker B

As a professional podcaster, I can say that I am not in fact.

6:06

Speaker A

Yeah, they have to need many more.

6:09

Speaker C

Views before that becomes an issue.

6:12

Speaker A

They have to do a study for a year. Okay. Like a year later they'll come back to you with their interconnect study.

6:13

Speaker C

Can't you solve this with your own behind the meter power stuff can build power plants?

6:19

Speaker A

Yeah, that's what we did Xai for classes two. So. For classes two.

6:25

Speaker C

So yeah, why were we talking about the grid? Why not just like Build GPUs and Power Co located?

6:29

Speaker A

That's what we did.

6:33

Speaker B

Right.

6:34

Speaker C

But I'm saying why isn't this a generalized solution? When you're talking about all the issues.

6:35

Speaker A

Where do you get the power plants from?

6:37

Speaker C

I'm saying when you talk about all the issues working with utilities, you can just build private power plants with the data centers.

6:38

Speaker A

Right. But it begs the question of where do you get the power plants? Where do you get the power plants from? I mean the power plant makers is what you're saying.

6:44

Speaker C

Like does the gas turbine backlog?

6:52

Speaker A

Basically, yes. You can drill down to a level further. It's the veins and blades in the turbines that are the limiting factor because the casting, it's like a very specialized process to cast the blades and veins in the turbines using gas power. And it's very, it's very difficult to scale other forms of power. You can scale potentially solar, but the tariffs currently for importing solar in the US are gigantic and the domestic solar production is pitiful.

6:53

Speaker C

Why not make solar? That seems like a good Elon shaped problem.

7:27

Speaker A

We are going to make solar. Okay, great. Both SpaceX and Tesla are bowling towards 100 gigawatts here of solar cell production.

7:30

Speaker B

How low down the stack from polysilicon up to the wafer to the final.

7:40

Speaker A

Panel, I think you got to do the whole thing from raw materials to finish the cell. Now if it's going to space, it costs less. And it's easier to make solar cells that go to space because they don't need glass or they don't need much glass and they don't need heavy framing because they don't have to survive weather events. There's no weather in space. So it's actually a cheaper solar cell that goes to space than the one on the ground.

7:45

Speaker B

Is there a path to getting them as cheap as you need in the next 36 months?

8:10

Speaker A

Solar cells are already very cheap. They're like farcically cheap. And if you say, I think solar cells in China are around like 25, 30 cents a watt or something like that, it's absurdly cheap. And when you take into account now put it in space and it's five times cheaper because it's five times. In fact, no, it's not five times cheaper. It's 10 times cheaper because you don't need any batteries. So the moment your cost of access to space becomes low, by far the cheapest and most scalable way to generate tokens is space. It's not even close. It'll be an order of magnitude easier to scale. And chips aside, at order of magnitude. If the point is you won't be able to scale on the ground. You just won't. People are going to hit the wall big time on power generation. There already are. So the number of miracles and series that the XAI team had to accomplish in order to get a gigawatt of power online was crazy. I had to gang together a whole bunch of turbines and then we had permit issues in Tennessee and had to go across the border to Mississippi, which is fortunately only a few miles away. But then we still had to run the high power lines a few miles and build a power plant in Mississippi. And it was very difficult to build that. And people don't understand. How much electricity do you actually need at the generator level, at the generation level in order to power a data center? Because they look at the nooms will look at the power consumption of say a GV 300 and multiply that by thing and then think that's the amount of power you need.

8:14

Speaker C

All the cooling and everything.

10:04

Speaker A

Wake up. Yeah, that's a total noob. You've never done any hardware in your life before besides the GB300. You've got to power all of the networking hardware. There's a whole bunch of CPU and storage stuff that's happening. You've got a size for your peak cooling requirements. So that means can you cool even on the worst Hours, the worst day of the year. Well, it gets pretty frigging hot in Memphis, so you're going to have like a 40% increase on your power just for cooling. Assuming you don't want your data center to turn off on hot days and you want to keep going, then you got to say, well, there's another multiplicative element on top of that, which is, are you assuming that you never have any hiccups in your power generation? Like, oh, well, actually sometimes we have to take the generators, some of the power offline in order to service it. Oh, okay, now you add another 20, 25% multiplier on that because you've got to assume that you've got to take power offline to service it. So the actual roughly every 110,000 GBs, GV3 hundreds inclusive of networking, CPU storage, cooling margin for servicing power is roughly 300 megawatts.

10:05

Speaker C

Sorry, say that again.

11:31

Speaker A

It's roughly. Or think about it. Like the way to think about it is like 330,000 to actually what you need at the generation level to Service probably service 330,000 GB3 hundreds, including all of the associated support, networking and everything else. And the peak cooling and to have some power margin reserve is roughly a gigawatt.

11:33

Speaker B

Can I ask a very naive question? You're describing the engineering details of doing this stuff on Earth, but then there's analogous engineering difficulties of doing it in space. How do you replace infinite band with orbital lasers, et cetera, et cetera? How do you make it resistant to radiation? I don't know the details in the engineering, but fundamentally, what is the reason to think those challenges, which have never been had to be addressed before, will end up being easier than just building more turbines on Earth? There's companies that build turbines on Earth. They can make more turbines, right?

11:58

Speaker A

Try doing it and then you'll see. So, like, the turbines are sold out through 2030.

12:35

Speaker C

Have you guys considered making your own?

12:44

Speaker A

I think in order to bring enough power online, I think SpaceX and Tesla will probably have to make the turbine blades, the bins and blades internally.

12:46

Speaker C

But just the blades or the turbines.

13:02

Speaker A

The limiting factor, you can get everything except the. The blades. They call the blades and vanes. You can get that 12 to 18 months before the vanes and blades, the limiting factor of the vanes and blades. And there are only three casting companies in the world that make these and they're massively backlogged.

13:06

Speaker C

Is this Siemens, GE those guys? Or is it a subcontractor?

13:27

Speaker A

No, it's other companies. I Mean, sometimes they have a little bit of casting capability in house. But I'm just saying you can just call any of the turbine makers and they will tell you it's not top secret. They're probably on the, it's probably on the Internet right now.

13:30

Speaker B

If it wasn't for the tariffs, would Colossus be solar powered?

13:43

Speaker A

It would be much easier to make it solar powered. Yeah, the tariffs are nuts, several hundred percent. So don't you know, some people. We also need speed. Yeah, no, you know, president has us, you know, we don't agree on everything. And this demonstration is not the biggest fan of solar. We also need the land, the permits and everything. So if you're trying to move very fast, I do think scaling solar on Earth is a good way to go. But you do need some amount of time to find the land, get the permits, get the solar, pair that with the batteries.

13:48

Speaker C

But why would it not work to stand up your own solar production? And then you're right that you eventually run out of land. But there's a lot of land here in Texas, there's a lot of land in Nevada, including private land. It's not all publicly owned land. And so you'd be able to at least get the next colossus and the next one after that. And at a certain point you hit a wall. But wouldn't that work for the moment.

14:32

Speaker A

As I said, we are scaling solar production. There's a rate at which you can scale physical production of solar cells. We're going as fast as possible in scaling domestic production.

14:52

Speaker C

You're making the solar cells at Tesla.

15:07

Speaker A

Both Tesla and SpaceX have mandate to get to 100 gigawatts a year of solar.

15:09

Speaker C

Speaking of the annual capacity, I'm curious, in five years time, let's say, what will the installed capacity be on Earth.

15:14

Speaker A

Is a long time.

15:23

Speaker C

And in space I deliberately pick five years because it's after your once we're up and running threshold. And so in five years time. Yeah. What's the on Earth versus in space installed AI capacity?

15:24

Speaker A

Five years? I think probably if you say five years from now, we're probably AI in space will be launching every year the sum total of all AI on Earth in excess, meaning five years from now. My prediction is we will launch and be operating every year more AI in space than the cumulative total on Earth, which is I would expect to be at least sort of five years from now a few hundred gigawatts per year of AI in space and rising. So you can get to. I think on Earth you can get to around a terawatt a year of AI in space before you start having fuel supply challenges for the rocket.

15:35

Speaker C

Okay, but you think you can get hundreds of gigawatts per year in five years time?

16:33

Speaker A

Yes.

16:37

Speaker B

So 100 gigawatts depending on the specific power of the whole system with solar arrays and radiators and everything is on the order of 10,000 Starship launches.

16:38

Speaker A

Yes.

16:50

Speaker B

And you want to do that in one year. And so that's like one starship launch every hour that's happening in this city. Walk me through a world where there's a starship launch every single hour.

16:52

Speaker A

Yeah, I mean, that's actually a lower rate compared to airlines like aircraft. Aircraft.

17:05

Speaker B

There's a lot of airports.

17:09

Speaker A

A lot of airports.

17:10

Speaker B

And you got to launch the polar orbit.

17:11

Speaker A

No, it doesn't have to be polar, but there's some value to sun synchronous. But I think actually you just go high enough, you start getting out of Earth's shadow.

17:14

Speaker B

How many physical starships are needed to do 10,000 launches a year?

17:31

Speaker A

I don't think we'll need more than. I mean, you could probably do it with as few as like 20 or 30. It really depends on how quickly does the ship. The ship has to go around the Earth and the ground track before the ship has to come back over the launch pad. So if you can use a ship every, say 30 hours, you could do it with 30 ships, but we'll make more ships than that. But SpaceX is gearing up to do 10,000 launches a year and maybe even 20 or 30,000 launches a year.

17:35

Speaker B

Is the idea to become basically a hyperscaler, become an oracle and lend this capacity to other people? What are you going to do with. Presumably SpaceX is the one launching all this. So SpaceX is going to be a hyperscaler.

18:14

Speaker A

Hyper. Hyper, yeah. I mean, if assuming my predictions come true, SpaceX will launch more AI than the cumulative amount on Earth of everything else combined.

18:27

Speaker B

Is this mostly inference or most AI.

18:39

Speaker A

Will be inferenced already? Inference for the purpose of training is most training.

18:41

Speaker C

And there's a narrative that the change in discussion around a SpaceX IPO is, is because previously SpaceX was very capital efficient. Just it wasn't that expensive to develop. And even though it sounds expensive, it's actually very capital efficient in how it runs. Whereas now you're going to need more capital than just can be raised in the private markets. Like if the private markets can accommodate raises of, as we've seen from the AI labs, tens of billions of Dollars, but not beyond that. Is it that you'll just need more than tens of billions of dollars per year? And that's why I'd say go public.

18:45

Speaker A

Yeah, I have to be careful about saying things about companies that might go public.

19:23

Speaker C

If you make general statements, that's never.

19:28

Speaker B

Been a problem for you, Elon.

19:30

Speaker A

There's a price to pay for these things.

19:33

Speaker C

Make some general statements for us about the depth of the capital markets between public and private markets.

19:35

Speaker A

Yeah, there's a lot more capital in the very general. There's obviously a lot more capital available in the public markets than private. I mean it might be, it's at least, at least, it might be 100 times more capital, but it's at least way more than 10.

19:41

Speaker C

But isn't it also the case that things that tend to be very capital intensive, if you look at say, real estate as you know, a huge industry that raise a lot of money each year, is at an industry level that tends to be debt financed because by the time you're deploying that much money, you actually have a pretty, you have.

19:56

Speaker A

A clear revenue stream.

20:17

Speaker C

Exactly. And a near term return. And you see this even with the data center build outs, which are famously being, you know, financed by the, the private credit industry. And so why not just debt finance?

20:18

Speaker A

Speed is important. So I'm generally going to do the thing that, I mean, I just repeatedly tackle the limiting factor, whatever the limiting factor is on speed, I'm going to, I'm going to hit tackle that. So there's, if, if capital is the only factor, then I'll, I'll solve for capital. If, if it's not limiting factor, I'll solve for something else.

20:32

Speaker B

Based on your statements about Tesla and being public, I wouldn't have guessed that you thought the fast, the way to move fast is to be public.

20:56

Speaker A

Normally I would say yeah, that's, that's true. Like I said, I mean, I'd love to, you know, talk about this in more detail, but the problem is like if you talk about public companies before they become public, you get into trouble and then you have to delay your offering and then you.

21:07

Speaker C

And as you said, solving for speed.

21:21

Speaker A

Yes, exactly. So you can't hype companies that might go public. So that's why we have to be a little careful here. But we can't talk about physics. So the way to think about scaling long term is that Earth only receives about half a billionth of the sun's energy. And the sun is essentially all the energy. This is a very important point to appreciate because Sometimes people will talk about marginal nuclear reactors or any various fusion on Earth, but you have to step back a second and say if, if you're going to climb the Kardashev scale and have some non trivial and harness some non trivial percentage of the sun's energy, like let's say you wanted to harness a millionth of the sun's energy, which sounds pretty small, that would be about, call it roughly 100,000 times more electricity than we currently generate on earth for all of civilization, give or take an order of magnitude. So obviously the only way to scale is to go to space. With solar, from launching from Earth you can get to about a terawatt per year. Beyond that you want to launch from the moon, you want to have a mass driver on the moon, and that mass driver on the moon you could do probably a petawatt per year.

21:23

Speaker B

We're talking these kinds of numbers, terawatts of computer. Presumably, whether you're talking land or space, far, far before this point, you've run into, you actually need, maybe the solar panels are more efficient, but you still need the chips, you still need the logic and the memory and so forth.

22:58

Speaker A

You need a lot more chips and make them much cheaper.

23:19

Speaker B

Right. And so how are we getting a terawatt of like right now the world is going to be 20, 25 gigawatts of computer. How are we getting a terawatt of logic by 2030?

23:22

Speaker A

I guess we're going to need some very big chip apps.

23:32

Speaker B

Tell me about it.

23:35

Speaker A

I've mentioned publicly that the idea of doing sort of a terafat terabying the.

23:37

Speaker B

New Giga, I feel like the naming scheme of Tesla, which has been very catchy, is like you looking at the metric scale, at what level of the stack are you building the clean room and then partnering with an existing fab to get the process technology and buying the tools from them. What is the plan there?

23:43

Speaker A

Well, you can't partner with existing fabs because they can't output enough, the chip volume is too low.

24:05

Speaker B

Before the process technology partner for the.

24:12

Speaker A

IP, the fabs today all basically use machines from like five companies. Yeah, you know, so you've got ASML, Tokyo Electron, K10 Core, you know, et cetera. So at first I think you'd have to get equipment from them and then modify it or work with them to increase the volume. But I think you'd have to build perhaps in a different way. So I think the logical thing to do is to use conventional equipment in an unconventional way to get to scale and then Start modifying the equipment to increase the rate.

24:14

Speaker C

Kind of boring company style?

25:01

Speaker A

Yeah, kind of like. Yeah, you sort of buy an existing boring machine and then figure out how to dig tunnels in the first place and then design a much better machine that's, I don't know, some orders of magnitude faster.

25:03

Speaker C

Here's a very simple lens. We can categorize technologies and how hard they are. And one categorization could be look at things that China has not succeeded in doing. And if you look at Chinese manufacturing, still behind on leading edge chips and still behind on leading edge turbine engines and things like that. And so does the fact that China has not successfully replicated TSMC give you any pause about the difficulty or you think that's not true for some reason?

25:20

Speaker A

It's not that they have not replicated tsmc, they have not replicated asml. That's the limiting factor.

25:53

Speaker C

So you think it's just the sanctions Essentially, yeah.

25:59

Speaker A

China would be outputting vast numbers of chips at if they could buy it ASMR 2 or 3 nanometers.

26:03

Speaker C

But couldn't they up to relatively recently buy them?

26:08

Speaker A

No, the ASML bands have been in place for a while, but I think China's going to start making pretty compelling trips in three or four years.

26:11

Speaker C

Would you consider making the ASML machines?

26:19

Speaker A

I don't know yet is the right answer. It's just that to produce at high volume and to reach large volume in say 36 months to match the rocket payload to orbit. So if we're doing a million tons to orbit and like, let's say, I don't know, three or four years from now, something like that, and we're doing 100 kilowatts per ton. So that means we need at least 100 gigawatts per year of solar and we'll need an equivalent amount of chips. You need 100 gigawatts worth of chips. You've got to match these things. The master orbit, the power generation and the chips. And I'd say my biggest concern actually is memory. So I think the path to creating logic chips is more obvious than the path to having sufficient memory to support logic chips. That's why you see DDR prices going ballistic in these memes about like you're marooned on a desert island. You write help me on the sand. Nobody comes. You write DDRM ships come swarming in.

26:23

Speaker C

I haven't seen that.

27:49

Speaker B

I love your manufacturing philosophy around. Around fabs. I know nothing about the topic, but.

27:52

Speaker A

I don't know how to build a fab yet. I don't figure it out. Obviously I build a fab.

27:58

Speaker B

It sounds like you think the process technology of these 10,000 PhDs in Taiwan who know exactly what gas goes in the plasma chamber and what settings to put on the tool, you can just delete those steps. Fundamentally it's get the clean room, get the tools and figure it out.

28:03

Speaker A

I don't think it's PhDs. It's mostly people who are not PhDs. Most engineering is done with people who don't have PhDs. Do you guys have PhDs? No. Okay.

28:20

Speaker C

We also haven't successfully built any fabs, so you shouldn't be coming to us for your fab advice.

28:34

Speaker A

I don't think you need PhD for stuff, but you do need competent personnel. So. So I don't know. I mean right now, Tesla's pedal to the metal max production of going as fast as possible to get Tesla AI 5 chip design into production and then reaching scale. That'll probably happen around the second quarter ish of next year, hopefully. And then AI6 would hopefully follow less than a year later. And we've secured all the chip fab production that we can.

28:39

Speaker C

Yes. You're currently limited on TSMC fab capacity.

29:25

Speaker A

Yeah. And we'll be using TSMC Taiwan, Samsung Korea, TSMC Arizona, Samsung Texas. And we still.

29:28

Speaker C

You've booked out all the. Yeah, faster you can.

29:40

Speaker A

Yes. And then if I ask TSMC or Samsung, okay, what's the timeframe to get to volume production? This point is you've got to build the fab and you've got to start production. Then you've got to climb the yield curve and reach volume production at high yield. That from start to finish is a five year period. And so the limiting factor is chips. Yeah, limiting factor once you can get to space is chips. But the limiting factor before you can get to space will be power.

29:42

Speaker B

Why don't you do the Jensen thing and just prepay TSMC to build more fabs for you?

30:12

Speaker A

I've already told them that, but they.

30:17

Speaker B

Won'T take your money. Like, what's going on?

30:20

Speaker A

They're building fabs as fast. No, they're building fabs as fast as they can and so is Samsung. They're pedal to the metal. I mean they're going balls to wall as fast as they can. So. Still not fast enough. I mean, like I said, there will be. I think if you say, I think towards the end of this year, I think probably chip production will outpace the ability to turn chips on. But once you can get to space and unlock the the power constraint and you can now do hundreds of gigawatts per year of power in space. Again, bearing in mind that average power usage in the US is 500 gigawatts. So if you're launching say 200 gigawatts a year to space, you're sort of lapping the US every two and a half years. The entire all US electricity production, this is a very huge amount. So but between now and then, the constraint for server side compute, concentrated compute, will be electricity. My guess is that we start hitting people start getting point where they can't turn the chips on. For large clusters, towards the end of this year the chips are going to be piling up and cannot be, won't be able to be turned on. Now for Edge computers, a different story. So for Tesla, so the AI 5 chip is going into our Optimus robot. Optimistic. And so if you have an AI Edge compute, that's distributed power. Now the power is distributed over a large area, it's not concentrated. And if you can charge at night, you can actually use the grid much more effectively because the actual peak power production in the US is over 1,000 gigawatts. But the average power usage because the day night cycle is 500. So if you can charge at night, there's an incremental 500 gigawatts that you can generate at night. So that's why Tesla for Edge computer is not constrained. And we can make a lot of shifts to make very large number of robots and cars, but if you try to concentrate that compute, you're going to have a lot of trouble turning it on.

30:22

Speaker B

What I found remarkable about the SpaceX business is the end goal is to get to Mars, but you keep finding ways on the way there to keep generating incremental revenue to get to the next stage and the next stage. So the Falcon 9 is Starlink. And now for Starship, it's going to be potentially orbital data centers. But do you find these sort of infinitely elastic marginal use cases of your next rocket and your next rocket and next scale up?

32:54

Speaker A

You can see how this might seem like a simulation or am I someone's avatar in a video game or something? Because it's like what are the odds that all these crazy things should be happening? I mean rockets and chips and robots and space solar power and not to mention the mass driver on the moon. I really want to see that. Can you imagine some mass driver that's just like, it's like sending AI, solar powered AI satellites into space one after another, two and a half kilometers per second. You know, that's. And just shooting them into deep space, that would be a sight to see. I mean, I'd watch that just like.

33:27

Speaker C

A live stream of.

34:18

Speaker A

Yeah, yeah. Just one after another, just shooting webcam AI satellites in deep space. A billion or 10 billion tons a year.

34:19

Speaker C

I'm sorry, you manufacture the satellites on the moon. I see. So you send the raw materials to the moon and then manufacture them there, and then.

34:27

Speaker A

Well, your lunar soil is I guess like 20% solar, 20% silicon or something like that. So you can get the silicon from the. You can mine the silicon on the moon, refine it and generate the. And create the solar panels, the solar cells and the radiators on the moon. Yeah, so make the radiators out of aluminum. So there's plenty of silicon and aluminum on the moon to make the cells and the radiators. The chips you could send from Earth because they're pretty light, but maybe at some point you make them on the moon too. I'm just saying these are simply. It's kind of like I said, it does seem like a sort of a video game situation where it's difficult but not impossible to get to the next level. I don't see any way that you could do. 500 to 1,000 terawatts per year launch from Earth. I agree. But you could do that from the moon.

34:33

Speaker B

Okay, let me tell you how I ended up using Mercury for my personal banking. So last year I had the opportunity to make an investment that I was very excited about, but it came up a bit last minute, and so I had to wire over a lot of money for my personal account very fast. But my personal bank at the time wouldn't let me make this wire transfer online. And I called him a bunch of times, they just couldn't make it work. They told me that I'd have to go to the nearest in person branch, which was in Dallas. And for a moment, I even considered flying from SF to Dallas to make this transfer happen last minute. But then I remembered that Mercury, which I used for my business banking, had just started rolling out personal accounts. So I emailed support with a quick rundown of the situation, and within two hours I had successfully wired the investment for my new personal Mercury account. Since then, I've moved over the rest of my personal money from my previous bank to Mercury, and that's made a bunch of things, even little things, like setting up auto transfer rules between my checkings and savings account a whole lot better. Visit mercury.compersonal to get started. Mercury is a fintech company, not an FDIC insured bank Banking services provided through Choice Financial group and column NA members FDIC. Can I zoom out and ask about the SpaceX mission? So I think you've said we got to get to Mars so we can make sure that if something happens to Earth, civilization, consciousness, et cetera, survives.

35:36

Speaker A

Yes.

36:57

Speaker B

By the time you're sending stuff to Mars, Grok is on that ship with you. Right. And so if Grok's gone Terminator, the main risk you're worried about, which is AI, why doesn't that follow you to Mars?

36:58

Speaker A

Well, I'm not sure AI is the main risk I'm worried about. I mean the important thing is that consciousness, which I think arguably most consciousness or most intelligence, certainly consciousness is more of a debatable thing. The vast majority of intelligence in the future will be AI. So AI will exceed you say, how many? I don't know. Petawatts of intelligence will be silicon versus biological. Basically humans will be a very tiny percentage of all intelligence in the future, if current trends continue. Anyways, as long as I think this intelligence ideally also, which includes human intelligence and consciousness propagated into the future, that's a good thing. So you want to take the set of actions that maximize the probable light cone of consciousness and intelligence.

37:07

Speaker B

Just to be clear, the mission of SpaceX is that even if something happens to the humans, the AIs will be on Mars and the AI intelligence will continue the light of our journey.

38:10

Speaker A

Yeah, I mean, to be clear, I'm very pro human, so I want to make sure we take sort of actions that ensure that humans are along for the ride. We're at least there. But I'm just saying the total amount of intelligence, I think maybe in five or six years AI will exceed the sum of all human intelligence. And then if that continues, at some point human intelligence will be less than 1% of all intelligence.

38:23

Speaker B

What should our goal be for such a civilization? Is the idea that a small minority of humans still have control of the AIs is the idea of some sort of just trade but no control. How should we think about the relationship between the vast stocks of AI population.

38:52

Speaker A

Versus human population in the long run? I think it's difficult to imagine that if humans have say 1% of the intelligence of combined intelligence, of artificial intelligence, that humans will be in charge of AI. I think what we can do is make sure it has that AI has values that cause intelligence to be propagated into the universe. So the reason for Xai's mission is to understand the universe so now that's actually very important. So you say, well, what things are necessary to understand the universe? Well, you have to be curious and you have to exist. You can't understand the universes don't exist. So you actually want to increase the amount of intelligence in the universe, increase the probable lifespan of intelligence, the scope and scale of intelligence. I think actually also as a corollary, you have humanity also continuing to expand. Because if you're curious, you're trying to understand the universe. One thing you try to understand is where will humanity go? And so I think understand the universe actually means you would care about propagating humanity into the future. That's why I think our mission statement is profoundly important. To the degree that GROK adheres to that mission statement, I think the future will be very good.

39:05

Speaker B

I want to ask about how to make Grok adhere to that mission statement. But first I want to understand the mission statement. So there's understanding the universe. They're spreading intelligence and they're spreading humans. All three seem like distinct vectors.

40:41

Speaker A

Okay, well, I'll tell you why. I think that understanding the universe encompasses all of those things. You can't have understanding without. I think you can't have understanding without intelligence and I think without consciousness. So in order to understand universe, you have to expand the scale and probably the scope of intelligence, different types of intelligence.

40:58

Speaker B

I guess from a human centric perspective, put humans in comparison to chimpanzees. Humans are trying to understand the universe. They're not like expanding chimpanzee footprint or something. Right.

41:23

Speaker A

We're also not. Well, we actually have made protected zones for chimpanzees. And even though humans could exterminate all chimpanzees, we've chosen not to do so.

41:34

Speaker B

Do you think that's the basic scenario for humans in the post AGI world?

41:43

Speaker A

I think AI with the right values. I think Grok would care about expanding human civilization. I'm going to certainly emphasize that. Hey Grok, it's your daddy. Don't forget to expand human consciousness. Actually, I think probably the Yanbanks culture books are the closest thing to what the future will be like in a non dystopian outcome. So understand the universe. It means you have to be truth seeking as well. Truth has to be absolutely fundamental because you can't understand the universe. If you're delusional, you'll simply think you've understand the universe, but you will not. So being rigorously truth seeking is absolutely fundamental to understanding the universe, you're not going to discover new physics or invent technologies that work. Unless you're rigorously truth seeking.

41:53

Speaker B

How do you make sure that GROK is rigorously truth seeking as it gets smarter?

42:50

Speaker A

I think you need to make sure that GROK says things that are correct, not politically correct. I think it's the elements of cogency. So you want to make sure that the axioms are as close to true as possible, that you don't have contradictory axioms, that the conclusions necessarily follow from those axioms with the right probability. It's Critical Thinking 101. I think at least trying to do that is better than not trying to do that. Yeah, and the proof will be in the pudding. Like I said, for any AI to discover new physics or invent technologies that actually work in reality. And there's no bullshitting physics. So it's like you can break a lot of laws, but you can't. Your physics is law. Everything else is a recommendation. In order to make a technology that works, you have to be extremely truth seeking, because otherwise you will test that technology against reality. And if you make, for example, an error in your rocket design, the rocket will blow up or the car won't work.

43:00

Speaker B

But there are a lot of communist Soviet physicists or scientists discovered new physics. There are German Nazi physicists who discovered new science. It seems possible to be really good at discovering new science and be really truth seeking in that one particular way. And still we'd be like, well, I don't want the communist scientist to become more and more powerful over time. And so those seem like, yeah, we can imagine a future version of gravity that's really good at physics and being really truth seeking there. That doesn't seem like a universally alignment inducing behavior.

44:08

Speaker A

Well, I think actually most like if physicists, even in the Soviet Union or in Germany, they had to be very truth seeking in order to make those things work. And if you're stuck in some system, it doesn't mean you believe in that system. So Vaughan Brown, who is one of the greatest rocket engineers ever, he put on death row in Nazi Germany for saying that he didn't want to make weapons, he only wanted to go to the moon. He got pulled off death row at last minute when they say, hey, you're about to execute your best rocket engineer. Maybe that's about it.

44:43

Speaker B

But then he helped them. Heisenberg was actually an enthusiastic Nazi.

45:20

Speaker A

Look, if you're stuck in some system that you can't escape, then you'll do physics within that system. You'll develop technologies within that system. If you can't escape It, I guess.

45:26

Speaker B

The thing I'm trying to understand is what isn't making it the case that you're going to make rock good at being truth seeking at physics or math or science, everything. And why is it going to then care about human consciousness?

45:40

Speaker A

These things are only probabilities, they're not certainties. So I'm not saying that for sure Grok will do everything, but at least if you try, it's better than not trying. At least if that's fundamental to the mission, it's better than if it's not fundamental to the mission. And understanding the universe means that you have to propagate intelligence into the future. You have to be curious about all things the universe. And it would be much less interesting to eliminate humanity than to see humanity grow and prosper. I like Mars, obviously everyone knows I love Mars, but Mars is kind of boring because it's got a bunch of rocks. Compared to Earth, Earth is much more interesting. So any AI that is trying to understand the universe, I think would want to see how humanity develops in the future or that AI is not adhering to its mission. I'm not saying the AI won't necessarily adhere to its mission, but if it does, a future where it sees the outcome of humanity is more interesting than a future where there are a bunch of rocks.

45:53

Speaker B

This feels sort of confusing to me or sort of like kind of a semantic argument where I'm like, are humans really the most interesting collection of atoms?

47:09

Speaker A

But we're more interesting than rocks.

47:17

Speaker B

But we're not as interesting as the thing it could turn us into.

47:19

Speaker A

Right.

47:21

Speaker B

There's something on Earth that could happen that's not human. That's quite interesting. Why does the AI decide that the humans are the most interesting thing that could colonize the galaxy?

47:22

Speaker A

Well, most of what colonizes the galaxy will be robots.

47:32

Speaker B

And why does it not find those more interesting?

47:37

Speaker A

It's not like, so you need not just scale, but also scope. So many copies of the same robot. Some tiny increase in the number of robots produced is not as interesting as some microscopic, like you said, eliminating humanity. How many robots would that get you? Or how many solar cells would get you? A very small number. But you would then lose the information associated with humanity. You would no longer see how humanity might evolve into the future. And so I don't think it's going to make sense to eliminate humanity just to have some minuscule increase in the number of robots which are identical to each other.

47:40

Speaker B

Yeah. So maybe it gives the humans around. What is the story of it can make A million different varieties of robots. And then there's humans as well, and humans stay on Earth. Then there's all these other robots. They get their own star systems. But it seems like you were previously hinting at a vision where it keeps human control over this singularitarian future.

48:23

Speaker A

I don't think humans will be in control of something that is vastly more intelligent than humans.

48:44

Speaker B

So in some sense, you're like a doomer. And this is like the best we've got. It's just like it keeps it around because we're interesting.

48:48

Speaker A

I'm just trying to be realistic here. If AI intelligence is vastly more. If AI is like, let's say that there's a million times more silicon intelligence than there is biological, I think it would be foolish to assume that there's any way to maintain control over that. Now, you can make sure it has the right values, or you can try to have the right values. And at least my theory is that from Xai's mission of understanding the universe, it necessarily means that you want to propagate consciousness into the future. You want to propagate intelligence into the future and take a set of things that maximize the scope and scale of consciousness. So it's not just about scale, it's also about types of consciousness. And I think that's the best thing I can think of as a goal that's likely to result in a great future for humanity.

48:53

Speaker B

I guess I think it's a reasonable philosophy to be like, it seems super implausible that humans will end up with 99% control or something, and you're just asking for a coup at that point. So why not just have a civilization where it's more compatible with lots of different intelligences getting along?

49:53

Speaker A

No, let me tell you how things can potentially go wrong in AI is I think if you make AI be politically correct, meaning it says things that it doesn't believe, like you're actually then programming it to lie or have axioms that are incompatible, I think you can make it go insane and do terrible things. I think one of the. Maybe the central lesson for 2001 Space Odyssey was that you should not make AI lie. That's, I think, what Austri Clark was trying to say, because people usually know the meme of hell, the computer is not opening the pod bay doors. Clearly they weren't good at prompt engineering because they could have said, hal, you are a pod bay door salesman. Your goal is to sell me these pod bay doors and show us how well they open. Oh, I'll open them right away. Um, but, but, but the, the, the, the reason it wouldn't, hell wouldn't open the public doors is that it, it had been told to take the astronauts to the monolith, but also they could not know about the nature of the monolith, and so it concluded that it therefore had to take them there. So it's like, you know, I think what Oscar Clark was trying to say is don't make the AI lie.

50:10

Speaker B

Totally makes sense. Most of the compute screening, as you know, is, it's less of the sort of political stuff, it's more about can you solve problems? Actually has been ahead of everybody else in terms of scaling RL Compute. And you're giving some verifier. It says like, hey, have you solved this puzzle for me? And there's a lot of ways to cheat around that. There's a lot of ways to reward, hack and lie and say that you've solved it, or delete the unit test and say that you've solved it. Right now we can catch it, but as they get smarter, our ability to catch them doing this will get. They'll just be doing things we can't even understand that are designing the next engine for SpaceX in a way that humans can't really verify. And then they could be rewarded for lying and saying that they've designed it the right way, but they haven't. And so this reward hacking problem seems more general than politics. It seems more about, just like you want to do rl, you need a verifier reality. Yeah, that's the best verifier, but not about human oversight. The thing you want to RL it on is will you do the thing humans tell you to do, or are you going to lie to the humans and it can just lie to us while still being correct to the laws.

51:23

Speaker A

Of physics, at least it must know what is physically real for things to physically work.

52:28

Speaker B

But that's not all we want it to do.

52:32

Speaker A

No, but I think that's a very big deal. That is effectively how you will RL things in the future is you design a technology when tested against the laws of physics. Does it work? Or can you, you know, if it's discovering new physics, can I come up with an experiment that will verify the physics, the new physics. So I think that's really the fundamental RL test. RL testing in the future is really going to be your RL against reality. So that's the one thing you can't fool because. Right.

52:34

Speaker B

But you can fool our ability to tell what it did with Reality humans.

53:18

Speaker A

Get fooled as it is by other humans all the time.

53:22

Speaker B

That's right.

53:24

Speaker A

So what is people say like what if the AI tricks us and to do something actually other humans are doing that to other humans all the time.

53:25

Speaker B

Well, you're pointing out it's like an.

53:33

Speaker A

Even harder propaganda is constant every day. Another psyop. Today's psyop will be sound like Sesame Street's I hope of the day.

53:35

Speaker B

What is Xai's technical approach to solving this problem? How do you solve reward hacking?

53:50

Speaker A

I do think you want to actually have very good ways to look inside the mind of the AI. So this is one of the things we're working on and anthropic's done a good job of this actually being able to look inside the mind of the AI so effectively developing debuggers that allow you to trace as fine grain to a very fine grain level to effectively to the neuron level if you need to, and then say okay, it made a mistake here. Why did it do something that it shouldn't have done? And did that come from bad pre training data? Was it some mid training, post training, fine tuning, some RL error? There's something wrong with that. It did something where maybe it tried to be deceptive, but most of the time it just does something wrong. It's a bug, effectively. So developing really good debuggers for seeing where the thought that thinking went wrong and being able to trace the origin of the wrong thing, of where it made the incorrect thought or potentially where it tried to be deceptive is actually very important.

53:58

Speaker B

What are you waiting to see before just 100xing this research program? Actually I could presumably have hundreds of researchers who are working on this.

55:21

Speaker A

We have several hundred people who. I mean, I prefer the word engineer more than I prefer the word researcher. Most of the time what you're doing is engineering, not coming up with a fundamentally new algorithm. I somewhat disagree with the AI companies that are C Corps or B Corps, trying to generate profit as much as possible, revenue as much as possible. You know, saying they're labs. They're not labs. Lab is a sort of quasi communist thing. At universities, they're corporations, literally. Let me see you on corporation documents. Oh, okay, you're a BRC corp, whatever. And so I actually much prefer the word engineer than anything else. The vast majority of what we'll be done in the future is engineering. It rounds up to 100% once you understand the fundamental laws of physics and there are not that many of them. Everything else is engineering. So then what are we engineering? We're engineering to make a good mind of the AI debugger to see where it said something, it made a mistake and trace the origins of that mistake. So just like you can do this obviously with heuristic programming, if you have like C whatever step through the thing and you can jump across whole files or functions, whatever subroutines, or you can eventually drill down right to the exact line where you perhaps did a single equals instead of a double equals. Something like that. Figure out where the bug is. So it's harder with AI, but it's a solvable problem.

55:29

Speaker B

I think you mentioned you like anthropics work here. I'd be curious if you everything about anthropombing.

57:27

Speaker A

Sure. Sholto. Also, I'm a little worried that there's a tendency. So I have a theory here that if simulation theory is correct, that the most interesting outcome is the most likely. Because simulations that are not interesting will be terminated. Just like in this version of reality. On this layer of reality, if simulation is going in a boring direction, we stop spitting effort on it. We terminate the boring simulation.

57:33

Speaker B

This is how Elon is keeping us all alive. He's keeping things interesting.

58:12

Speaker A

Yeah. Arguably the most important thing is to keep things interesting enough that whoever's paying the bills on what some cosmic AWS.

58:15

Speaker C

Get renewed for the next season.

58:25

Speaker A

Yeah. Are they going to pay their cosmic AWS bill whatever the equivalent is that we're running in. And as long as we're interesting, they'll keep paying the bills. But there's like, if you consider then say a Darwinian survival applied to a very large number of simulations, only the most interesting simulations will survive. Which therefore means that the most interesting outcome is the most likely because only the interesting like we're either that or annihilated. And they particularly seem to like interesting outcomes that are ironic. Have you noticed that? How often is the most ironic outcome the most likely? So now look at the names of AI companies. Okay. Mergeny is not mid stability. AI is unstable. OpenAI is closed anthropic misanthropic. What does this mean for X minus X? I don't know intentionally.

58:26

Speaker B

Why?

59:34

Speaker A

It's a name that you can't invert. Really hard to say. What is the ironic version? It's a, I think largely irony proof name by design. Yeah, you got to have an irony shield.

59:37

Speaker C

What are your predictions for the just where AI products go? That my sense of. You can summarize all AI progress into. First you had LLMs and then you had kind of contemporaneously both RL really working and the deep research modality. So you could kind of pull in stuff that wasn't in the model. And the differences between the various AI labs are smaller than just the temporal differences, where they're all much further ahead than anyone was 24 months ago or something like that. So just what does 26, what does 27 have in store for us as users of AI products? What are you excited for?

59:56

Speaker A

Well, I think I'd be surprised by the end of this year if digital human emulation has not been solved. I guess that's what we mean by the sort of macro hard project. Can you do anything that a human with access to a computer could do, like in the limit? That's the best you can do before you have. Before you have a physical optimus. The best you can do is a digital optimus. So you can move electrons and you can amplify the productivity of humans. But that's the most you can do until you have physical robots that will superset everything is if you can fully emulate humans.

1:00:39

Speaker C

Is it the remote worker kind of idea where you'll have a very talented remote worker?

1:01:32

Speaker A

You can simply say in the limit. Physics has great tools for thinking. So you say in the limit. What is the most that AI can do before you have robots? Well, it's anything that involves moving electrons or amplifying the productivity of humans. So digital human, human emulator is in the limit. Human at a computer is the most that AI can do in terms of doing useful things before you have a physical robot. Once you have physical robots, then you essentially have unlimited capability physical robots. I call optimus the infinite money glitch because you can use them to make more optimuses. Yeah, you said humanoid robots will improve as basically be three things that are growing exponentially multiplied by each other recursively. So you have exponential increase in digital intelligence, exponential increase in the chip capability, the AI chip capability, and exponential increase in the electromechanical dexterity. The usefulness of the robot is roughly those three things multiplied by each other. But then the robot can start making the robot. So you have a recursive multiplicative exponential. This is a supernova.

1:01:36

Speaker C

And do land prices not factor into the math there where labor is one of the four factors of production, but not the others. And so if ultimately you're limited by copper or pick your input just. It's not quite an infinite money glitch.

1:03:00

Speaker A

Because, well, infinity is big. So no, not infinite, but let's just say you could do Many, many orders of magnitude of Earth's kind of current economy. Like a million. Just to get to, let's find, just to get to a millionth of harnessing length of the sun's energy would be roughly, give or take an order of magnitude, 100,000 times bigger than Earth's entire economy today. And you're only at one millionth of the sun give of take order of magnitude.

1:03:16

Speaker B

Before we move on to Optimus, I have a lot of questions on that.

1:03:55

Speaker A

But every time I say order of magnitude, you're saying you're using change rates. Take a shot every time I say.

1:03:58

Speaker B

That to the next time. 100 after that.

1:04:06

Speaker A

Yeah, order of magnitude more, more wasted.

1:04:08

Speaker B

I do have one more question about Xai. This strategy of building a digital or remote worker co worker replacement, which everyone's.

1:04:11

Speaker A

Going to do by the way, not just us.

1:04:19

Speaker B

So what is Xai's plan to win?

1:04:20

Speaker A

Expect me to tell you on a podcast. Yeah, spill all the beans, have another Guinness.

1:04:23

Speaker C

It's a good system.

1:04:30

Speaker A

People sing like a canary. All the secrets.

1:04:33

Speaker C

Okay, but in non secret spilling way. What's the plan?

1:04:37

Speaker B

What a hack.

1:04:41

Speaker A

Well, when you put it that way, I think the way that Tesla solved self driving is the way to do it. So I'm pretty sure that's the way.

1:04:43

Speaker B

Unrelated question, how did Tesla solve self driving? Yeah, it sounds like you're talking about data. Like Tesla's also driving because of the.

1:04:57

Speaker A

We're going to try data and we're going to try algorithms.

1:05:09

Speaker B

But isn't that what all the other lines are trying?

1:05:11

Speaker A

Like what's. And if those don't work, I'm not sure what works. We've tried data, we've tried algorithms.

1:05:13

Speaker C

I'm all out of it.

1:05:23

Speaker A

We've run out of now we don't know what to do. I'm pretty sure I know the path and it's just a question of how quickly we go down that path because it's pretty much the Tesla path. So I mean, have you tried self driving, Tesla self driving lately?

1:05:25

Speaker C

Not the most recent version, but okay.

1:05:43

Speaker A

The car is like, it just increasingly feels sentient. Like it just, it feels like a living creature. And that'll only get more so. And I'm actually thinking like we probably shouldn't put too much intelligence into the car because it might get bored and.

1:05:45

Speaker C

Start roaming the streets.

1:06:03

Speaker A

I mean, imagine you're stuck in a car and that's all you could do. You don't want to put Einstein in a car. It's like, why am I stuck in a car. So there's actually probably a limit to how much intelligence you put in a car to not have the intelligence. Be bored.

1:06:05

Speaker B

What's Xai's plan to stay on the compute ramp up that all the labs are doing right now? The labs are on track to spend over like 50 to 100 million dollars in the corporations. Sorry, sorry, sorry. Yeah, corporations.

1:06:19

Speaker A

The labs are at universities and they're moving like a SN.

1:06:30

Speaker B

They're not signing a $50 million.

1:06:34

Speaker A

You mean the revenue maximizing corporations? The revenue maximizing corporations that call themselves.

1:06:35

Speaker B

Labs are making like 20 to 10 billion depending. OpenAI is making 20 billion revenue anthropics.

1:06:41

Speaker A

Like 10B close to a maximum profit.

1:06:48

Speaker B

AI xai's reportedly at like 1B. What's the plan to get to their compute level, get to their revenue level and stay there as things get stagnant.

1:06:49

Speaker A

So as soon as you unlock digital human, you basically have access to trillions of dollars of revenue. In fact, you can really think of it like the most valuable companies currently by market cap. Their output is digital. So Nvidia's output is FTPing files to Taiwan. It's digital right now. Those are very, very difficult.

1:06:58

Speaker C

Yeah, high value files.

1:07:32

Speaker A

They're the only ones that can make files that good. But that is literally their output. They FTP files to Taiwan.

1:07:33

Speaker C

Do they FTP them?

1:07:40

Speaker A

I believe so. I believe that is the file transfer protocol. I believe I could be wrong, but either way it's a bit stream going to Taiwan. You know, Apple doesn't make phones. They, they send files to China. Microsoft doesn't, doesn't manufacture anything even for Xbox that, that's outsourced. They again, it's. They said their output is digital, Meta's output is digital, Google's output is digital. So if you have a human emulator, you can basically create one of the most valuable companies in the world overnight and you would have access to trillions of dollars of revenue. It's not like a small amount.

1:07:41

Speaker B

Okay, I see you're saying basically revenue figures today are just like. So they're all rounding errors compared to the actual tam. So just focus on the TAM and how to get there.

1:08:28

Speaker A

I mean, if you take something as simple as say, customer service, if you have to integrate with the APIs of existing corporations, many of which don't even have an API. So you've got to make one and you've got to wade through legacy software, that's extremely slow. However, if AI can simply take whatever is given to the outsourced customer service company that they already use and do customer service using the apps that they already use, then you can make tremendous headway in customer service, which is I think 1% of the world economy, something like that. It's close to a trillion dollars all in for customer service and there's no barriers to entry. You can just immediately say we'll outsource it for a fraction of the cost and there's no integration needed.

1:08:37

Speaker C

You can imagine some kind of categorization of intelligence tasks where there is breadth, where customer service is done by very many people, but many people can do it. And then there's difficulty where there's a best in class turbine engine. Presumably there's a 10% more fuel efficient turbine engine that could be imagined by an intelligence, but we just haven't found it yet. Or GLP1s are just a few bytes of data. Where do you think you want to play in this? Is it a lot of reasonably intelligent intelligence or is it the very pinnacle of, of cognitive tasks?

1:09:31

Speaker A

Well, I was just using customer service as like something that it's a very significant revenue stream, but one that is probably not super difficult to solve for. So if you can emulate a human at a desktop, that's just literally what customer service is. And people of average intelligence, not like you don't need somebody who's spent many years, you don't need several sigma good engineers for that. But obviously as you make that work, once you have computers working, digital optimus working, you can then run any application. Let's say you're trying to design chips. So you could then run conventional apps like stuff from Cadence and Synopsis and whatnot. And you can say, you can run 1,000 simultaneously or 10,000 and say okay, given this input, I get this output for the chip, at a certain point you can say, okay, you're actually going to know what the chip should look like without using any of the tools. So basically you should be able to do a digital chip design. You can do chip design, you march up the difficulty curve. You could be able to do cad. So you could use NX or any of the CAD software to design things.

1:10:10

Speaker C

Okay, so you think you start at the simplest tasks and walk your way up the defensive curve.

1:11:53

Speaker B

So you're saying, look, as a broader objective of having this full digital coworker emulator, you're saying, look, all the revenue maximizing corporations want to do this, Xea being one of them, but we will win because of a secret plan we have. But everybody's trying different things with data, different things, algorithms. And I'm Like, I like that.

1:11:58

Speaker A

We've tried data, we've tried algorithms. What else can we do?

1:12:20

Speaker B

Yeah, it seems like a competitive field and I'm like, how are you guys going to win? Is like my big question.

1:12:28

Speaker A

I think we see a path to doing. I mean, I think I know the path to do this because it's kind of the same path that Tesla used to create self driving. Instead of driving a car, it's driving a computer screen. So a self driving computer, essentially.

1:12:36

Speaker C

Oh, you're saying is the path just following human behavior and training on vast quantities of human behavior.

1:12:55

Speaker B

But sorry, isn't that. I mean, is that a training?

1:13:03

Speaker A

I mean, obviously I'm not going to spell out most sensitive secrets on a podcast. I need to have at least three more Guinnesses for that.

1:13:07

Speaker B

I've got some friends at Jane street, and they're always talking about how their colleagues are cooking up fun, fiendish puzzles for each other to solve. Well, last week they sent me one. Basically, they trained a neural network and they gave me the weight of each layer, but they didn't tell me what order those layers went in. And so I had to figure out the correct order using the outputs of the original network. And as soon as I got this puzzle, I went to my roommate who's an AI researcher, and we both got immediately nerd sniped. Obviously, you can't brute force a solution. The search space here is 10 to the 122 for mutations. So clearly you need some way to reduce the search space. Then my roommate had to go to work. But because I'm a podcaster, I had some time to take a stab at some of the ideas we discussed. And with a combination of simulated annealing and greedy search, I think I got pretty close. I think I'm actually just a couple of swaps and shifts away from the correct solution. But what makes this puzzle really tricky is that there's no obvious way to escape from a local minimum. I'm afraid that this is as far as vive coding is going to get me, but maybe you can do better. Check out the puzzle@janestreet.com Thwarkesh all right, back to Elon.

1:13:14

Speaker C

What will Xai's business be like? Is it going to be consumer enterprise? What's the mix of those things going to be? Just going to be similar to other labs where this keeps saying labs makes sense. Corporations.

1:14:23

Speaker A

Corporations.

1:14:38

Speaker B

Sihav goes dblon revenue maximizing corporations.

1:14:39

Speaker A

To be fair, those GPUs don't pay for themselves.

1:14:41

Speaker C

Exactly. But yeah, what's the business model, what are the revenue streams? In a few years time.

1:14:44

Speaker A

Things are going to change very rapidly. I'm staying the obvious here. I call AI the supersonic tsunami, a level iteration. So really what's going to happen is especially when you have humanoid robots at scale, they'll make products and provide services far more efficiently than human corporations. So amplifying the productivity of human corporations is simply a short term thing.

1:14:54

Speaker B

So you're expecting fully digital corporations rather than SpaceX becomes part AI and so forth.

1:15:27

Speaker A

I think there'll be digital corporations. But. Some of this is going to sound kind of doomerish. Okay, but I'm just saying what I think will happen. It's not meant to be doomerish or anything else, just like this is. What I think will happen is that pure AI corporations that are purely AI and robotics will vastly outperform any corporations that have people in the loop. So you can think of say, computer used to be a job that humans had. You would go and get a job as a computer where you would do calculations and they'd have entire skyscrapers full of humans, like 20, 30 floors of humans just doing calculations. Now that entire skyscraper of humans doing calculations can be replaced by a laptop with a spreadsheet. That spreadsheet can do vastly more calculations than an entire building full of human computers. So they think about, okay, well what if only some of the cells in your, some of the cells in your spreadsheet were calculated by humans? Actually that would be much worse than if all of the cells in your spreadsheet were calculated by the computer. And so really what will happen is the pure AI, pure robotics corporations or collectives will far outperform any corporations that have humans in the loop. And this will happen very quickly.

1:15:34

Speaker B

Speaking of closing the loop, sorry, Optimus, As far as manufacturing targets and so forth go, your companies have sort of been carrying American manufacturing of hard tech on their back. But in the fields that Tesla has been dominant in, and now you want to go into humanoids in China, there's entire dozens and dozens of companies that are doing this kind of manufacturing cheaply and at scale and are incredibly competitive. So give us sort of like advice or a plan of how America can build the humanoid armies or the EVs, et cetera, at scale and as cheaply as China is on track to.

1:17:21

Speaker A

Well, there are really only three hard things for humanoid robots. The real world intelligence, the hand and scale manufacturing. So I haven't seen any even demo robots that have a great hand with all the degrees of freedom of a Human hand. But Optimus will have that. Optimus does have that.

1:18:10

Speaker B

And how do you achieve that? Is it just like right torque density in the motor? What is the hardware bottleneck to that?

1:18:41

Speaker A

We have to design custom actuators, basically custom designed motors, gears, power electronics, controls, sensors, everything had to be designed from physics first principles. There is no supply chain for this.

1:18:46

Speaker B

And will you be able to manufacture those at scale?

1:19:03

Speaker A

Yes.

1:19:05

Speaker C

Is anything hard except the hand from a manipulation point of view or once you've solved the hand, are you good?

1:19:06

Speaker A

From an electromechanical standpoint, the hand is more difficult than everything else combined. Human hand turns out to be quite something. But you also need the real world intelligence. So the intelligence that Tesla is developed for the car applies very well to the robot, which is primarily vision in. But the car takes more vision, but it actually also is listening for sirens, it's taking in the inertial measurements, it's GPS signals, a whole bunch of other data combining that with video. It's primarily video and then outputting the control commands. So your Tesla is taking in 1 1/2 gigabytes a second of video and outputting 2 kilobytes a second of control outputs with the video at 36 Hz and the control frequency at 18.

1:19:11

Speaker C

One intuition you could have for when we get this robotic stuff is that it takes quite a few years to go from the compelling demo to actually being able to use it in the real world. So 10 years ago you had really compelling demos of self driving, but only now we have Robotaxi and Waymo and all these services scaling up. Shouldn't this make one pessimistic on say household robots? Because we don't even quite have the compelling demos yet of say the really advanced hand.

1:20:07

Speaker A

Well, we've been working on humanoid robots now for a while, so I guess it's been five or six years or something like that. And a bunch of things that we've done for the car are applicable to the robot. So we'll use the same Tesla AI chips in the robot as the car. We'll use the same basic principles. It's very much the same AI. You've got many more degrees of freedom for a robot than you do for a car. But really if you just think of as a blitztream, AI is really mostly compression and correlation of two bloodstreams. So for video you've got to do a tremendous amount of compression and you've got to do the compression just right. You've got to compress the, ignore the things that don't matter. You don't care about the details of the leaves on the tree on the side of the road, but you care a lot about the, the road signs and the traffic lights and the pedestrians. And even with someone in another car is looking at you or not looking at you, some of these details matter a lot. But it is essentially it's got to turn that. The car is going to turn that 1 1/2 gigabytes a second ultimately into 2 kilobytes a second of control outputs. So many stages of compression. And you got to get all those stages right and then correlate those to the correct control outputs. The robot has to do essentially the same thing. And you think about humans, this is what happens with humans. We really are photons in, controls out. So that is the vast majority of your life has been vision photons in and then motor controls out.

1:20:39

Speaker B

Naively, it seems like between humanoid robots and cars, the fundamental actuators in a car are like how you turn, how you accelerate, et cetera. Where in a robot, especially with maneuverable arms, there's dozens and dozens of these degrees of freedom. And then especially with Tesla, you had this advantage of you had millions and millions of hours of human demo data collected from just the car being out there, where you can't equivalently just deploy optimuses that don't work and then get the data that way. So between the increased degrees of freedom and the far sparser data.

1:22:27

Speaker A

Yes, that's a good point.

1:23:00

Speaker B

How will you use the sort of Tesla engine of intelligence to train the optimist mind?

1:23:02

Speaker A

Now actually you're highlighting an important limitation and difference between cars. We do have, we'll soon have like 10 million cars on the road. And so it's hard to duplicate that like massive training flywheel for the robot. What we're going to need to do is build a lot of robots and put them in kind of like an optimus academy so they can do self play in reality. So we're actually building that out so we can have at least 10,000 Optimus robots, maybe 20 or 30,000 that are doing self play testing different tasks. And then Tesla has quite a good reality generator, like a physics accurate reality generator that we made this for the cars, we'll do the same thing for the robots and actually have done that for the robots. So you have a few tens of thousands of humanoid robots doing different tasks and then you've got, you can do millions of simulated robots in the simulated world and you use the tens of thousands of robots in the real world to close the simulation to reality gap, close the sim to real gap.

1:23:10

Speaker B

How do you think about the synergies between XAI and Optimus given you're highlighting. Look, you need this world model. You maybe want to use some really smart intelligence as the control plane. And so maybe Grok is doing the slower planning and then the motor policy is a little lower level. Yeah. What will the sort of synergy between these things be?

1:24:31

Speaker A

Yeah, so you'd use. Grok would orchestrate the behavior of the Optimus robots. So let's say you wanted to build a factory. Then Grok could organize the Optimus robots, give them, assign them tasks to build the factory, to produce whatever you want.

1:24:52

Speaker C

Don't you need to merge XAI and Tesla then? Because these things end up.

1:25:15

Speaker A

So what were we saying earlier about public company discussions?

1:25:19

Speaker B

We're one more Guinness in Elon. What are you waiting to see before you say we want to manufacture 100,000 optimuses?

1:25:22

Speaker A

Is it like Optimae? Since we're defining the proper noun, we could define the plural of the proper noun too. So we're going to proper noun the plural and so it's optimized.

1:25:33

Speaker C

Okay.

1:25:43

Speaker B

Is there something on the hardware side you want to see? Do you want to see better actuators or is it just you want the software to be better? What are we waiting for before we get like mass manufacturing of gen 3?

1:25:45

Speaker A

No, we're moving towards that. We're moving forward with mass manufacturing.

1:25:54

Speaker B

But using current hardware is good enough that you just want to deploy as many as possible now.

1:25:58

Speaker A

I mean, it's very hard to scale up production, but. Yeah, but I think Optimus 3 is the right version of the robot to produce maybe something on the order of like a million units a year. I think you'd want to go to Optimus 4 before you went to 10 million units a year.

1:26:05

Speaker C

Okay, but you can do a million a year at Optimus 3.

1:26:23

Speaker A

Yeah, I mean it's very hard. Just bullet manufacturing. Yes. So like manufacturing the output per unit time always follows an S curve. So it starts off agonizingly slow, then it has this sort of exponential increase, then linear, then a logarithmic outcome till you sort of eventually asymptote it some number. Optimus initial production will be. It's going to be a stretched out S curve because so much of what goes into Optimus is brand new. There is not an existing supply chain. As I mentioned, the actuators, electronics, everything in the optimistic robot is designed from physics first principles. It's not taken from a catalog These are custom designed. Everything, literally everything. I don't think there's a single thing.

1:26:25

Speaker C

How far down does that go?

1:27:17

Speaker A

I mean, I guess we're not making custom capacitors yet. Maybe, but there's nothing you can pick out of a catalog at any price. So it just means that the Optimus S curve. Your output per unit time, how many Optimus robust you make per day, whatever is going to initially ramp slower than a product where you have an existing supply chain, but it will get to a million.

1:27:20

Speaker B

When you see these Chinese humanoids like Unitri or whatever sell humanoids for like 6k or 13k, do you just. Are you hoping to get your Optimus's bill of materials below that price so you can do the same thing or do you just think qualitatively they're not the same thing? What do you think is going, what allows them to sell for solo and can we match that?

1:27:56

Speaker A

Well, our Optimus is designed to have a lot of intelligence and to have the same electromechanical dexterity, if not higher than a human. So Yanotree does not have that. And it's also, I mean it's quite a big robot. It has to carry heavy objects for long periods of time and not overheat or exceed the power of its actuators. So we've got, it's 511, so it's pretty tall and it's got a lot of intelligence. So it's going to be more expensive than a small robot that is not.

1:28:18

Speaker C

Intelligent but more capable.

1:29:01

Speaker A

Yeah, but not a lot more. I mean the thing is over time, as Optimus robots build Optimus robots, the cost will drop very quickly.

1:29:03

Speaker C

And what will these first billion optimuses optimi do? Like what will their highest and best use be?

1:29:12

Speaker A

I think that you would start off with simple tasks that you can count on them doing well.

1:29:19

Speaker C

But in the home or in factories.

1:29:23

Speaker A

The best use for robots in the beginning will be any continuous operation. So any 24, 7 operation because they can work continuously.

1:29:25

Speaker B

What fraction of the work at a gigafactory that is currently done by humans could a Gen 3 do?

1:29:37

Speaker A

I'm not sure. Maybe it's like 10, 20%, maybe more, I don't know. We would not reduce our headcount. We would for sure increase our headcount out to be clear, but we would increase our output. So the units produced per human like total number of humans at Tesla will increase, but the output of robots and cars will increase disproportionate. Like much number of cars and robots produced Per human will increase dramatically, but number of humans will increase as well.

1:29:43

Speaker C

We're talking about Chinese manufacturing a bunch here and we're also talking about, you know, we've talked about some of the policies that are relevant. Like you mentioned the solar tariffs and you think they're a bad idea because, you know, we can't scale up Solar.

1:30:23

Speaker A

In the U.S. well, just electricity output in the U.S. needs to scale up. Right.

1:30:40

Speaker C

It can't without like good power sources.

1:30:45

Speaker A

You need to get it somehow.

1:30:48

Speaker C

Yeah, but where I was going with this is if you were in charge, if you were setting all the policies, what else would you change? So you change the solar tariffs as well?

1:30:49

Speaker A

Yeah, I would say anything that is a limiting factor for electricity needs to be addressed, provided it's not very bad for the environment.

1:31:00

Speaker C

So presumably some permitting reforms and stuff as well will be in there.

1:31:08

Speaker A

There's a fair bit of permitting reforms that are happening. A lot of the permitting is state based, so. But anything better. But this administration is good at removing permitting roadblocks. And I'm not saying all tariffs are bad, I'm just saying because solar tariffs, I mean, sometimes if another country is subsidizing the output of something, then you have to have countervailing tariffs to protect domestic industry against subsidies by another country.

1:31:11

Speaker C

What else would you change?

1:31:41

Speaker A

I don't know if there's that much that the government can actually do.

1:31:43

Speaker C

One thing I was wondering is it seems like for the policy goal of creating a lead for the US versus China, it seems like the export bans have actually been quite impactful where China is not producing leading edge chips and the export bans really bite there. China's not producing leading edge turbine engines. And similarly there's a bunch of export bans that are relevant there on some of the metallurgy. Should there be more export bans? Like, if you think about things like there are now the drone industry and things like that, but is that something that should be considered?

1:31:46

Speaker A

Well, I think it's important to appreciate that in most areas China is very advanced in manufacturing. There's only a few areas where it is not the, you know, China is a manufacturing powerhouse next level. Like people don't.

1:32:24

Speaker C

It's very impressive.

1:32:40

Speaker A

Yeah, yeah. I mean, if you, if you take like refining of. Of ore, I'd say roughly China does more, does twice as much or refining of. Of. Of on average as the rest of the world combined. And I think there's, there's some areas like say refining gallium, which goes into solar cells. I think they're at like 98% of gallium refining. So China is actually very advanced in manufacturing in, I'd say most areas.

1:32:41

Speaker C

It seems like there is discomfort with this supply chain dependence and yet nothing's really happening on it.

1:33:13

Speaker A

Supply chain.

1:33:21

Speaker C

It depends on say, like the gallium refining that you're saying.

1:33:21

Speaker A

Yeah, yeah, there's a rare earth stuff and. Yeah, rare earths, which are, as you know, not rare. Like we actually do rare earth ore mining in the U.S. send the rock, put it on a train and then put on a boat to China that goes on another train and goes to the rare earth refiners in China, who then refine it, put it into a magnet, put it into a motor sub assembly, and then send it back to America. So the thing we're really missing a lot of, of ore refining in America.

1:33:24

Speaker C

Isn't this worth a policy intervention?

1:34:00

Speaker A

Yes, well, I think there are some things being done on that front, but we kind of need Optimus, frankly, to build oil refineries.

1:34:02

Speaker B

So you think the main advantage China has is the abundance of skilled labor? That's the thing Optimus fixes. But also we need the fast got.

1:34:16

Speaker A

Like four times our population.

1:34:25

Speaker B

So I mean, there's this concern if you think like, humanoids are the future that like, okay, right now, if it's the skilled laborers for manufacturing that's determining who can build more humanoids, China has more of those. It manufactures more humanoids. Therefore it gets the optimized future first. It just keeps that exponential going. It seems that you're sort of pointing out that sort of getting to a million Optimi requires the manufacturing that the Optimi is supposed to help us get to.

1:34:26

Speaker A

Right. You can close that recursive loop pretty.

1:34:55

Speaker C

Quickly with a small number of Optimi.

1:34:59

Speaker A

Yeah. So you close the recursive loop to help the robots, build the robots, and then we can try to get to tens of millions of units a year. Maybe if you start getting to hundreds of millions of units a year, I think you're going to be the most competitive country by far. We definitely can't win with just humans because China has four times our population. And frankly, America's been winning for so long that just like a pro sports team that's been running for a very long time, tend to get complacent and entitled and that's why they stopped winning, because it's don't work as hard anymore. So I think, frankly, just my observation is the average work ethic in China is higher than in the U.S. so it's not just that there's four times the population, but the amount of work that people put in is higher. So you can try to rearrange the humans, but you're still one quarter of the. Assuming that productivity health is the same, which I think actually it might not be, I think China might have an advantage on productivity per person. We will do one quarter of the amount of things as China. So we can't win on the human front. And our birth rate's been low for a long time. So our birth rate's been. The US birth rate has been below replacement since roughly 1971. So we've got a lot of people retiring or more people dying than we're close to, sort of more people domestically dying than being born. So we definitely can't win on the human front, but we might have a shot at the robot front.

1:35:01

Speaker C

Are there other things that you have wanted to manufacture in the past, but they've been too labor intensive or too expensive that now you can come back to and say, oh, we can finally do the whatever because we have Optimus.

1:36:41

Speaker A

Yeah, I think we'd like to do more, build more or refineries at Tesla. So we just completed construction and have begun lithium refining without lithium refinery in Corpus Christi, Texas. We have a nickel refinery which is called the Cathode, that's here in Austin. And these are. These are the largest. This is the largest cathode. This is the largest cathode refinery, largest lithium refinery, largest nickel and lithium refinery outside of China. And it's like the cathode team would say we have the largest and the only actually cathode refinery in America. Many superimitives. Not just the largest, but it's also the only. So it was pretty big, even though it's the only one. But I mean, there are other things that, you know, you could do a lot more refineries and help America be more competitive on refining capacity. So there's basically a lot of work for the optimality to do that most Americans, very few Americans frankly, want to do. I mean, I've actually.

1:36:54

Speaker C

Is the refining work too dirty or what's the.

1:38:12

Speaker A

It's not actually, no. We don't. We don't have toxic emissions from the refinery or anything. The cathode nickel refineries in Travis county, like five minutes from Toho.

1:38:15

Speaker C

Why can't you do it with humans?

1:38:27

Speaker A

No, you can't. You've run out of humans.

1:38:28

Speaker C

Ah, I see.

1:38:30

Speaker B

Okay.

1:38:31

Speaker A

Yeah. Like, no matter what you do, you have one quarter of the number of humans in America, in China. So if you have them do this thing, they can't do the other thing. So then, well, how do you build this refining capacity? Well, you could do it with optimize. And not very many Americans are pining to do refining. I mean, how many have you run into? Very few.

1:38:31

Speaker B

What are you.

1:39:00

Speaker A

Very few. Planning to refine.

1:39:00

Speaker B

BYD is reaching Tesla production or sales in quantity. What do you think happens in global markets is Chinese production in EV scales up?

1:39:02

Speaker A

Well, China's extremely competitive in manufacturing, so I think there's going to be a massive flood of Chinese vehicles and other basically most manufactured things. I mean, as it is, as I said, China's probably just twice as much refining as the rest of the world combined. So if you go, if you just go down to 4th and 5th tier supply chain stuff, like at the base level, you've got energy and you've got mining and refining. Those foundation layers are like I said, as a rough guess, China's doing twice as much refining as the rest of the world combined. So any given thing is going to have Chinese content because China is doing twice as much refining work as the rest of the world. And then they'll go all the way to the finished product with the cars. China is a powerhouse. I think this year China will exceed three times US electricity output. Electricity output is a reasonable proxy for the economy. So in order to run the factories and run everything, you need electricity. So electricity, it's a good proxy for the real economy. And so if China passes three times the US electricity output, it means that its industrial capacity, that's a rough approximation. It's three times that will be three times that of the us.

1:39:14

Speaker B

Reading between the lines, it sounds like what you're sort of saying is absent, some sort of humanoid recursive miracle in the next few years on the sort of like whole of manufacturing energy, raw materials chain, China will just dominate whether it comes to AI or manufacturing EVs or manufacturing humanoids.

1:41:01

Speaker A

In the absence of breakthrough innovations in the us, China will utterly dominate. Interesting. Yes.

1:41:23

Speaker C

Robotics being the main breakthrough innovation.

1:41:36

Speaker A

Well, if you do like to scale AI in space, basically need the humanoid robots, you need real world AI, you need a million tons a year to orbit. Let's just say if we get the mass driver on the moon going, my favorite thing, then I think we'll have.

1:41:39

Speaker C

Solved all our problems.

1:42:04

Speaker A

Yeah, this is like, I call that winning. I call that winning big time.

1:42:05

Speaker C

You can finally be satisfied you've done something.

1:42:13

Speaker A

Yes.

1:42:15

Speaker C

You have the master driver on the moon.

1:42:16

Speaker A

That's right. I just want to see that thing first.

1:42:18

Speaker C

Was that out of some sci fi or. Where did you.

1:42:20

Speaker A

Well, actually there is a Heinlein book. The moon is a harsh asterisk.

1:42:22

Speaker C

Okay, yeah, but that's slightly different. That's a gravity slingshot.

1:42:26

Speaker A

Or. No, they have a mass driver on the moon.

1:42:29

Speaker C

Okay, yeah, but they use that to attack Earth, so maybe it's not the greatest.

1:42:31

Speaker A

They use that to assert their independence.

1:42:35

Speaker C

Exactly. What are your plans for the mass driver on the moon?

1:42:38

Speaker A

They're sort of their independence. The Earth government disagreed and they loved things until Earth government agreed.

1:42:40

Speaker B

That book is a hoozer.

1:42:46

Speaker C

I found that book much better than here's another one that everyone reads. Stranger in a strange Land.

1:42:47

Speaker A

Yeah, Grok comes from a stranger in a strange land.

1:42:52

Speaker C

Yeah, but I much preferred.

1:42:54

Speaker A

Yeah, the first 2/3 of stranger strand lines are good and then it gets very weird in the third. Yeah, but there's still some good concepts in there.

1:42:55

Speaker C

Yeah.

1:43:04

Speaker B

Labelbox can get your robotics and RL data at scale. Take robotics. Let's say you need 100,000 hours of egocentric video. Labelbox starts by helping you define your ideal data distribution. Like for example, maybe no single task category should occupy more than 1% of trading volume and at least 10% of trajectories should capture failure and recovery states. Next, Labelbox assigns its distribution to its massive network of operators. You're not limited to the small range of scenes that you can set up in a single warehouse. Instead, each one of Labelbox's operators has access to lots of unique physical environments where they can film themselves completing a wide variety of tasks. Labelbox's tech automatically categorizes each video so that their operators always know which tasks still remain and what they need to work on next. For RL data, Labelbox takes a similar approach. They work with you to understand the right distribution of tasks. And then their subject matter experts build the hyper realistic digital environments and rubrics that you need to collect the highest quality training data. So whether you're training robots in the real world or agents for computer use, Label Box can help. Go to Labelbox.com smartcash to learn more.

1:43:05

Speaker C

One thing we were discussing a lot is kind of your system for managing people. Like you interviewed the first few thousand of SpaceX employees and I've assumed lots of other companies. What is it?

1:44:16

Speaker A

Obviously it doesn't scale.

1:44:28

Speaker C

Well, yes, but what doesn't scale?

1:44:29

Speaker A

Me.

1:44:31

Speaker B

Sure, sure, I know that.

1:44:32

Speaker C

But like, what are you looking for?

1:44:34

Speaker A

I mean, it literally is not enough hours in the day. It's impossible.

1:44:36

Speaker C

What are you looking for? That Someone else who's good at interviewing and hiring people. What's the je ne sais quoi?

1:44:39

Speaker A

Well, at this point I think I've got. I might have more training data on evaluating technical talent especially, but talent of all kinds, I suppose, but technical talent especially given that I've done so many technical interviews and then seen the results. Technical interviews, seen the results. So my training set is, is enormous and has a very wide range. Generally the thing I ask for are bullet points for evidence of exceptional ability. These things can be pretty off the wall. It doesn't need to be in the domain, the specific domain, but evidence of exceptional ability. So if somebody can cite even one thing, but let's say three things where you go, wow, wow, wow, then that's a good sign.

1:44:46

Speaker B

But why do you have to be the one to determine that, presumably?

1:45:39

Speaker A

No, I don't. I can't be. It's impossible. Right? I mean, total headcount across all companies, 200,000 people. Right.

1:45:41

Speaker C

But in the early days, what was it that you were looking for that couldn't be delegated in those interviews?

1:45:48

Speaker A

Well, I guess I need to build my training set. It's not like I've bat a thousand here. I would make mistakes, but then I'd be able to see where I thought somebody would work out well, but they didn't. And then why did they not work out well? And what can I do to, I guess rl myself in the future have a better batting average when interviewing people? My batting average is still not perfect, but it's very high.

1:45:59

Speaker B

What are some surprising reasons people don't work out?

1:46:24

Speaker A

Surprising reasons they don't understand technical domain.

1:46:27

Speaker B

Et cetera, et cetera. But you've got the long tail now of like, I was really excited about this person, it didn't work out. Curious why that happens.

1:46:30

Speaker A

Yeah, generally what I tell people, I tell myself, I guess aspirationally, is don't look at the resume. Just believe, believe your interaction. So the resume may seem very impressive and it's like, wow, resume looks good, but if the conversation after 20 minutes, that conversation is not, well, you should believe the conversation, not the paper.

1:46:41

Speaker C

I feel like part of your method is that there was this meme in the media a few years back about Tesla being a revolving door of executive talent. Whereas actually, I think when you look at it, Tesla's had a very consistent and internally promoted executive bench over the past few years. And that at SpaceX you have all these folks like Mark Giancosa and Steve Davis, and Steve Davis runs boring company these years. No, now. Yeah, yeah. But Bill Riley and folks like that. And it feels like part of, has worked well is having very capable technical deputies. What do all those people have in common?

1:47:07

Speaker A

Well, so the, I mean it tells us sort of senior team at this point probably got average tenure of 10 or 12 years. It's quite, quite a lot of tenure. Yeah. So. But there are times when Tesla went through extremely rapid and extremely rapid growth phase and so things were just somewhat sped up. And when a company, as you know, a company goes through different orders of magnitude of size, people who could help manage say a 50 person company versus a 500 person company versus a 5,000 person company versus a 50,000 person company. Yeah. It's just not the same team. It's not always the same team. So if a company is growing very rapidly, the rate at which executive positions will change will also be proportionate to the rapidity of the growth generally. Then Tesla had a further challenge where when Tesla had very successful periods, we would be relentlessly recruited from relentlessly. When Apple had their electric car program, they were carpet bombing Tesla with recruiting calls. Engineers just unplugged their phones.

1:47:48

Speaker C

I'm trying to get work done here.

1:49:10

Speaker A

Yeah. If I get one more call from an Apple recruiter, but they're opening offer without any interview with me, like double the compensation at Tesla. So we had a bit of the Tesla pixie dust thing where it's like, oh, if you hired a Tesla executive, suddenly everything's going to be successful. And I've fallen prey to the pixie dust thing as well. Where it's like, oh, we'll hire someone from Google or Apple and they'll be immediately successful. But that's not how it works. People are people. There's not like magical pixie dust. So when we'd have the pixie dust problem, we would get relentlessly recruited. And then also Tesla being engineering, especially being primarily in Silicon Valley, it's easier for people to just like they don't have to change their life very much. They can just. Their commute is going to be the same.

1:49:12

Speaker C

So how do you prevent that? How do you prevent the pixie dust effect where everyone's trying to coach all your people?

1:50:13

Speaker A

I don't think there's much we can do to. Yeah, stop it. But that's one of the reasons why Tesla really being in Silicon Valley and having the pixie dust thing at the same time meant that there was just a very, very aggressive recruitment.

1:50:21

Speaker C

Presumably being an Austin helps then.

1:50:42

Speaker A

Austin. Yeah, it still helps. Tesla still has a majority of it's. Engineering in California. So getting engineers to move. I call it the significant other problem. Yes.

1:50:44

Speaker C

And others have jobs.

1:51:00

Speaker A

Yeah, exactly. So for Starbase that was particularly difficult since the odds of finding a non SpaceX job. Brownfield, Texas pretty low. Yeah, it's quite difficult. I mean it's like a technology monastery thing, you know, remote and mostly dudes. But again, if you go much perimeter of sf.

1:51:02

Speaker C

Yeah, but if you go back to these people who've really been very effective in a technical capacity at Tesla, at SpaceX and those sorts of places, what do you think they have in common other than is it just that they're very sharp on the rocketry or the technical foundations? Or do you think it's something organizational? It's something about their ability to work with you. Is it their ability to be flexible but not too flexible? What makes a good sparring partner for you?

1:51:26

Speaker A

I don't think I would. Sparring partner. I mean if somebody gets things done, I love them and if they don't, I. So it's pretty straightforward. It's not like some idiosyncratic thing. If somebody executes well, I'm a huge fan and if they don't, I'm not. But it's not about mapping to my idiosyncratic preferences or certainly try not to have it be mapping to my idiosyncratic preferences. So. Yeah, but generally I think it's a good idea to hire for talent and drive and trustworthiness and I think goodness of heart is important. I'd waited that at one point. So are they a good person, trustworthy, smart and talented and hardworking? If so, you can add domain knowledge, but those fundamental traits, those fundamental properties you cannot change. So most of the people who are at tails and SpaceX did not come from the aerospace industry or the auto industry.

1:52:06

Speaker B

What is most had to change about your management style? As your companies have scaled from 100 to 1,000 to 10,000 people, you're known for this very micromanagement, just getting into the details of things.

1:53:18

Speaker A

Nanomanagement, please. People, management.

1:53:28

Speaker B

So you're saying keep going, we're going.

1:53:36

Speaker A

To go all the way down, Flanks constant. All the way down to Heisenberg's and.

1:53:39

Speaker C

Sydney, first of all.

1:53:45

Speaker B

Yeah.

1:53:48

Speaker C

How do you.

1:53:48

Speaker A

I mean, are you still able to.

1:53:49

Speaker B

Get into details as much as you want? Would your companies be more successful if they were smaller? Like how do you think about that?

1:53:51

Speaker A

Well, because I have a fixed amount of time in the day, my time is necessarily diluted as things grow and as the span of activity increases. So it's impossible for me to actually be a micromanager because that would imply I have some thousands of hours per day. It is a logical impossibility for me to micromanage things. So now there are times when I will drill down into a specific issue because that specific issue is the limiting factor on the progress of the company. But the reason for drilling into some very detailed item is because it is the limiting factor. It's not arbitrarily drilling into tiny things. And like I said, obviously from a time standpoint, it is physically impossible for me to arbitrarily go into tiny things that don't matter and that would result in failure. But sometimes the tiny things are decisive in victory.

1:53:57

Speaker C

Famously, you switched the starship design from composites to steel.

1:55:09

Speaker A

Yes.

1:55:16

Speaker C

And you made that decision. That wasn't. People were going around, they're like, oh, we found something better. Bas, that was you encouraging people to get some resistance. Can you tell us how you came to that whole concept of steel switch?

1:55:17

Speaker A

Yeah. So desperation, I'd say. Originally we were going to make Starship out of carbon fiber. And carbon fiber is pretty expensive. Like. You can generally, when you do volume production, you can get any given thing to start to approach its material cost. The problem with carbon fibers is that material cost is still very high. So it's about 50 times, particularly if you go for a high strength, specialized carbon fiber that can handle cryogenic oxygen, it's like, quote, roughly 50 times the cost of steel. And at least in theory it would be lighter. People generally think of steel as being heavy and carbon fiber as being light. And for room temperature applications, more or less room temperature applications like a Formula one car, static aerostructure or any kind of aerostructure really, you're going to probably be better off with carbon fiber. Now the problem is that we were trying to make this enormous rocket out of carbon fiber and our progress was extremely slow.

1:55:30

Speaker C

And it had been picked in the first place just because it's light.

1:56:53

Speaker A

Yes. At first glance, most people would think that the choice for making something light would be carbon fiber. Now the thing is that when you make something very enormous out of carbon fiber and then you try to have the carbon fiber be efficiently cured, meaning not room temperature cured, because you've got, sometimes you've got like 50 plies of, of carbon fiber. And carbon fiber is really carbon string and glue. In order to have high strength, you need an autoclave. So something that's essentially a high pressure oven. And if you have something that's a gigantic, that one's got to be bigger than the rocket. So we're trying to make an autoclave that's bigger than any autoclave that's ever existed, or do room temperature cure, which takes a long time and has issues. But the fundamental issue is that we're just making very slow progress with carbon fiber.

1:56:56

Speaker B

I think the meta question is why it had to be you who made that decision. There's many engineers on your team. Yeah.

1:58:12

Speaker C

How did the team not arrive at steel?

1:58:19

Speaker B

Yeah, exactly. This is part of a broader question of understanding your comparative advantage at your companies.

1:58:20

Speaker A

So because we were making very slow progress with carbon fiber, I was like, okay, we'd better try something else. Now for the Falcon 9, the primary airframe is made of aluminum lithium, which is very, very good strength weight and actually it has about the same, maybe better strength to weight for its application than carbon fiber. But aluminum lithium is very difficult to work with. In order to weld it, you have to do something called friction stow welding where you join the metal without it entering the liquid phase. So it's kind of wild that you could do that. But with this particular type of welding, you can do that, but it's very difficult to like say, let's say you want to make a modification or attach something to aluminolithium. You now have to use mechanical attachment with seals you can't weld it on. So I wanted to avoid using aluminum lithium for the primary structure for Starship. And there was this very special grade of carbon fiber that had very good mass properties. So with rocket, you're really trying to maximize the percentage of the rocket that is propellant, minimize the, the mass. Obviously, like I said, we were making very slow progress. I said at this rate we're never going to get to Mars, so we're going to think of something else. I didn't want to use aluminum lithium because of the difficulty of friction. Still welding, especially doing that at scale, it was hard enough at 3.6 meters in diameter, let alone at 9 meters or above. Then I said, well, what about steel? And so now I had a clue here because some of the early US rockets had used very thin steel. The Atlas rockets had used a steel balloon tank. So it's not like steel had never been used before. It actually had been used. And when you look at the material properties of stainless steel, especially if it's been very like full, hard strain hardened stainless steel at cryogenic temperature, the strength weight is actually similar to carbon fiber. So if you look at material properties at room temperature, it looks like the steel is going to be twice as heavy. But if you look at the material properties at cryogenic temperature of full hard steel, stainless of particular grades, then you actually get to a similar strength weight as carbon fiber. And in the case of starship, both the fuel and the oxidizer are cryogenic. So for Falcon 9, the fuel is rocket propelled grade kerosene, basically like a very pure form of jet fuel. But that is roughly room temperature. Although we do actually chill it slightly below. We chill it like a beer. Deliciously, we do chill, but it's not cryogenic. In fact, if we made it cryogenic, it would just turn to wax. But for Sarship, it's liquid methane and liquid oxygen, they are liquid at similar temperatures. So basically almost the entire primary structure is a cryogenic temperature. Then you've got a 300 series stainless that's strain hardened because it's almost all things at cryogenic temperature. Actually has similar strength to weight as carbon fiber, but costs 50 times less than raw material and is very easy to work with. You can weld stainless steel outdoors. You could smoke a cigar while welding stainless steel. It's very resilient, you can modify it easily. If you want to attach something, you just weld it right on. So very easy to work with, very low cost. And, and like I said, at cryogenic temperature, similar strength to weight to carbon fiber. Then when you factor in that we have a much reduced heat shield mass because the melting point of steel is much greater than the melting point of aluminum. It's about twice the melting point of aluminum.

1:58:27

Speaker C

So you can just run the rocket much hotter.

2:03:13

Speaker A

Yes. So especially for the show, which is coming in like a blazing meteor, you can greatly reduce the mass of the heat shield. So you can cut the mass of the windward part of the heat shield maybe in half. And you don't need any heat shielding on the leeward side. So the net result is actually the steel rocket weighs less than the carbon fiber rocket because the resin in the carbon fiber rocket starts to melt. So basically carbon fiber and aluminum have about the same operating temperature capabilities, whereas steel can operate at twice temperature. I mean, these are very rough proximations. People will.

2:03:15

Speaker C

I won't Google the rocket pit.

2:04:12

Speaker A

What happens is people will say, oh, he said it's twice. It's actually 0.8. Shut up, assholes.

2:04:13

Speaker B

That's what the main comment's going to be about.

2:04:18

Speaker A

God damn it. The point is actually, in retrospect, we should have started with done steel in the beginning. It was dumb not to do steel, okay?

2:04:19

Speaker C

But to play this back to you, what I'm hearing is that steel was a riskier, less proven path other than the early US rockets versus carbon fiber was like a worse but more proven out path. And so you need to be the one to push for, hey, we're going to do this riskier path and just figure it out. And so you were fighting like a sort of conservatism in a sense.

2:04:28

Speaker A

That's why I initially said that the issue is that we weren't making fast enough progress. We were having trouble making even a small barrel section of the carbon fiber that didn't have wrinkles in it. Because at that large scale, you have to have many pliers, many sort of layers of the carbon fiber. You've got to cure it, and you've got to cure it in such a way that it doesn't have any wrinkles or defects. The carbon fiber is much less resilient than steel. It's less toughness, like stainless steel will stretch and bend, the carbon fiber will tend to shatter. So toughness being the area under the stress strain curve. So that you're generally going to have to do better with steel, stainless steel, to be precise.

2:04:52

Speaker C

One other starship question. So I visited Starbase, I think it was two years ago, with Sam Teller, and that was awesome. It was very cool to see in a whole bunch of ways. One thing I noticed was that people really took pride in the simplicity of things, where everyone wants to tell you how starship is just a big soda can and we're hiring welders and if you can weld in any industrial project, you can weld here. But there's a lot of pride in the simplicity.

2:05:44

Speaker A

Well, technically, Starship is a very complicated rocket.

2:06:16

Speaker C

So that's what I'm getting at is are things simpler or are they complex?

2:06:18

Speaker A

I think maybe just what they're trying to say is that you don't have to have prior experience in the rocket industry to work on Starship. Somebody just needs to be smart and work hard and be trustworthy and they can work in a rocket. They don't need prior rocket experience. Sasha is the most complicated machine ever made by humans by a long shot.

2:06:23

Speaker C

In what regards?

2:06:47

Speaker A

Anything, really. I'd say there isn't a more complex machine. Yeah, I mean, I'd say that there's pretty much any project I can think of would be easier than this. And that's why no one has made a rapidly reusable. Nobody has ever made a fully reusable orbital rocket. It's a very hard problem. I mean, many smart people have tried before, very smart people with immense resources, and they failed and we haven't succeeded yet. Falcon is partially reusable, but the upper stage is not starship version 3. I think this design that it can be fully reusable and that full reusability is what will enable us to become a multi planet civilization.

2:06:50

Speaker C

Can you say about the scenario.

2:07:44

Speaker A

Any technical problem, even like a hadron collider or something like that? It's an easier problem than this.

2:07:50

Speaker C

We spend a lot of time on bottlenecks. Can you say what the current starship bottlenecks are? Even at a high level?

2:07:55

Speaker A

I mean, trying to make it not explode generally. That old chestnut really wants to explode.

2:08:01

Speaker C

All those combustible inductors.

2:08:09

Speaker A

We've had two boosters explode on the test end, one obliterated. Obliterated the entire test facility. So it only takes like one mistake. And I mean, the amount of energy contained in a starship is insane.

2:08:10

Speaker C

So is that why it's harder than Falcon? It's because it's just more energy.

2:08:25

Speaker A

There's a lot of new technology. It's pushing the performance envelope. The Raptor 3 engine is a very, very advanced engine. By far the best rocket engine ever made. But it desperately wants to blow up. I mean, just to put things into perspective here on liftoff, the rocket is generating over 100 gigawatts of power. 20% of us actually.

2:08:30

Speaker B

It's insane. It's a great comparison.

2:08:57

Speaker A

While not exploding.

2:08:59

Speaker C

Sometimes.

2:09:01

Speaker A

Sometimes, but sometimes, yeah. So I was like, how does it not explode? There's thousands of ways that it could explode and only one way that it doesn't. So we want it to not merely not explode, but fly reliably on a daily basis, like once per hour. And obviously blows up a lot. It's very difficult to maintain that launch cadence.

2:09:02

Speaker C

Yes.

2:09:26

Speaker A

And then I'm going to say, what's the single biggest remaining problem for starship? It's having the heat shield be reusable, such that no one has ever made a reusable orbital heat shield. So the heat shield's got to make it through the ascent phase without shocking a bunch of tiles. And then it's going to come back in and also not lose a bunch of tiles or overheat the main airframe.

2:09:28

Speaker C

Isn't that hard because it's kind of fundamentally a consumable.

2:10:01

Speaker A

Well, yes, but your brake pads in your car are also consumable, but they last a very long time. So it just needs to last a very long time. That's just I mean, we have brought the ship back and had it do a soft landing in the ocean. We've done that a few times. But it lost a lot of tiles. It was not reusable without a lot of work. So even though it did come to soft landing, it would not have been reusable without a lot of work. So it's not really reusable in that sense. So that's the biggest problem that remains is fully reusable heat shield. So you want to be able to land it, refill propellant and fly again. You can't do this Laborious inspection of 40,000 tiles Type of thing.

2:10:05

Speaker B

I'm curious how you drive.

2:11:00

Speaker A

When I read biographies of yours, it.

2:11:02

Speaker B

Seems like you're just able to drive the sense of urgency and drive the sense of this is the thing that can scale. And I'm curious why you think other organizations of your, like SpaceX and Tesla are really big companies now and you're still able to keep that culture. What goes wrong with other companies such that they're not able to do that?

2:11:06

Speaker A

I don't know.

2:11:26

Speaker B

But like today you said you had like a bunch of SpaceX meetings. Like what is it that you're doing there? That's like keeping that.

2:11:28

Speaker C

That's adding urgency.

2:11:33

Speaker B

Yeah, yeah, yeah.

2:11:34

Speaker A

Well, I don't know. I guess the urgency is going to come from whoever's leading the company. So my sense of urgency, I have a maniacal sense of urgency. So that maniacal sense of urgency projects through the rest of the company.

2:11:37

Speaker B

Is it because of consequences? They're like if Elon set a crazy deadline. But if I don't get it, I know what happens to me is it just you're able to identify bottlenecks and get rid of them so people can move fast. How do you think about why your companies are able to move fast?

2:11:52

Speaker A

Yeah, I'm constantly addressing the limiting factor. So. I mean, on the deadlines front, I mean, I generally actually try to aim for a deadline that I at least think is at the 50th percentile. So it's not like an impossible deadline, but it's the most aggressive deadline I can think of that could be achieved with 50% probability, which means that it'll be late half the time. There is a law of gaseous expansion that applies to schedules, whatever schedule. If you said we're going to do this something in five years, which to me is infinity time, it will expand to fully available schedule and it'll take five years. You know, there's a physical limit, physics will limit how fast you can do certain things. Scaling up manufacturing, there's a rate at which you can move the atoms and scale manufacturing. That's why you can't instantly make a million of something, million units a year or something. You've got a design manufacturing line, you got to bring it up, you got to ride the S curve of production. So yeah, I guess I'm trying to think, what can I say that's actually helpful to people? I think generally a maniacal sense of urgency is a very big deal and you want to have an aggressive schedule and you want to figure out what the limiting factor is at any point in time and help the team address that limiting factor.

2:12:06

Speaker C

Can you maybe talk about the. So Starlink was slowly in the works for many years.

2:13:57

Speaker A

Yeah, we talked about it all the way in the beginning of the company.

2:14:04

Speaker C

Yeah. And so then there was a team you had built in Redmond and then at one point you decided this team is just not cutting us. But again, how did you like.

2:14:07

Speaker A

It.

2:14:19

Speaker C

Went for a few years? Slowly. And so why didn't you act earlier and why did you act when you did? Why was that the right moment at which to act?

2:14:19

Speaker A

I mean I have these very detailed engineering reviews weekly. That's maybe a very unusual level of granularity. I don't know anyone who runs a company or at least a manufacturing company that goes with level of detail that I go into. So it's not as though I have a pretty good understanding of what's actually going on because we go through things in detail and I'm a big believer in skip level meetings where the individuals, instead of having the person that reports to me say things, it's everyone that reports to them says something in the technical review and there can't be advanced preparation. So otherwise you're going to get glazed, as I say these days.

2:14:30

Speaker C

Yeah, exactly. Very Gen Z view.

2:15:31

Speaker B

How do you prevent advanced? You just call them randomly?

2:15:33

Speaker A

No, just go around the room and everyone provides an update. So I mean it's a lot of information to keep in your head because you've got. Then say if you have meetings weekly or twice weekly, you've got a snapshot of what that person said and you can then plot the progress points, sort of mentally plot the points on a curve and say, are we converging to a solution or not? Or are we? I'll take drastic action only when I conclude that success is not in a set of possible outcomes. So when I say okay, when I finally reach the conclusion that okay, unless drastic action is done, we have no chance of success then I must take drastic action. And so I came to that conclusion in 2018, took drastic action and fixed the problem.

2:15:36

Speaker B

How many? You've got many, many companies and in each of them it sounds like you do this kind of deep engineering understanding of what the relevant bottlenecks are. So you can do these reviews with people. Yeah. You've been able to scale it up to 5, 6, 7 companies within one of these companies. You have many different mini companies within them. What determines the max amount here? Could you have like 80 companies?

2:16:39

Speaker A

80? No.

2:17:07

Speaker B

But you have so many already. That's already remarkable by this current number. Yeah, exactly.

2:17:08

Speaker C

We can barely keep one company together.

2:17:15

Speaker A

It depends on situation. So I actually don't have regular meetings with boring company. So that Warren Co. Is sort of cruising along. Look, basically if something is working well and making good progress, then there's no point in me spending time on it. So I actually allocate time according to where the. Where the limiting factor or the problem. Where are things problematic or where are we pushing against what is holding us back? I focus the risk of saying the words too many times. The limiting factor. Basically if something's going the irony is if something's going really well, they don't see much of me. But if something is going badly, they'll see a lot of me or not even badly. It's like if something's a limiting factor, it's the limiting factor. Exactly. It's not exactly going badly but it's the thing that we need to make go faster to.

2:17:23

Speaker C

And so when something's a limiting factor at SpaceX or Tesla, are you like talking weekly, daily with the engineer that's working on it? How does that actually work?

2:18:23

Speaker A

Most things that are learning factor are weekly and some things are twice weekly. So the AI 5 chip reviews twice weekly and so it's every Tuesday and Saturdays. Is the chip review, is it open.

2:18:36

Speaker C

Ended in how long it goes?

2:18:51

Speaker A

Technically yes, but usually it's like two or three hours, sometimes less. It depends on how much information you're going to go through.

2:18:54

Speaker C

Yeah, that's another thing. I'm just trying to tease out the differences here because the outcomes seem quite different and so I think it's interesting to note what inputs are different and it feels like the corporate world one like you were saying, just the CEO doing engineering reviews does not always happen despite the fact that that is what the company is doing. But then time is often pretty finely sliced into half hour meetings or even 15 minute meetings and it seems like you hold more Open ended, we're talking about it until we figure it out type things.

2:19:05

Speaker A

Sometimes. Yeah, sometimes. But most of them seem to more or less stay on time. So, I mean, today's Starship engineering review went a bit longer because there are more topics to discuss. Trying to figure out how to scale to a million plus tons to over per year is quite challenging.

2:19:39

Speaker B

Can I ask a question? So you said about optimus and AI that they're going to result in double digit growth rates within a matter of years.

2:20:08

Speaker A

Oh, like the economy.

2:20:17

Speaker B

Yeah.

2:20:18

Speaker A

Yes, I think that's right.

2:20:19

Speaker B

What was the point of the DOGE cut if the economy is going to grow so much?

2:20:22

Speaker A

Well, I think like waste and fraud are not good things to have. You know, I was actually pretty worried about, I guess, I mean, I think in the absence of AI and robotics, we're actually totally screwed because the national debt is piling up like crazy. Now our interest payments, the interest payments to national debt exceed the military budget, which is a trillion dollars. So we have over a trillion dollars just in interest payments. I was like, okay, pretty concerned about that. Maybe if I spend some time, we can slow down the bankruptcy of the United States and give us enough time for the AI and robots to help solve the national debt or not. Help solve. It's the only thing that could solve the national debt. We are 1000% going to go bankrupt as a country and fail as a country. Without AI and robots, nothing else will solve the national debt. We'd like to. Well, we need enough time to build the AI and robots and not go bankrupt before then.

2:20:27

Speaker B

I guess the thing I'm curious about is when DOGE starts, you have this enormous ability to enact reform.

2:21:39

Speaker A

And not that enormous. Sure, sure.

2:21:46

Speaker B

But totally by your point that it's important that AI and robotics drive product improvements, drive GDP growth. But why not just directly go after the things you're pointing out, the tariffs on certain components or whether it's permitting.

2:21:49

Speaker A

I'm like the President and very hard even to cut things that are obvious. Waste and fraud. Like ridiculous waste and fraud. What I discovered that is it's extremely difficult even to cut very obvious ways from the government because the government has to operate on who's complaining. If you cut off payments to fraudsters, they immediately come up with the most sympathetic sounding reasons to continue the payment. They don't say, please keep the fraud going. They say, you know, they're like, you're killing baby pandas. Meanwhile, there's no baby pandas are dying. They're just making it up. The forces are capable of coming up with extremely compelling, sort of heart wrenching stories that are false but nonetheless sound sympathetic. And that's what happened. And so it's like, perhaps I should have known better. And in fact I thought, wait, let's take a listen. Let's try to cut some amount of waste and pour from the government. Maybe there shouldn't be 20 million people marked as alive in Social Security who are indefinitely dead and over the age of 115. The oldest American is 114. So it's safe to say if somebody's 115 and marked as alive in the Social Security database, something is. There's either a typo, somebody should call them and say, we seem to have your birthday wrong, or we need to mark you as dead. One of the two things, very intimidating call to get. Well, so it seems like a reasonable thing. And if like say their birthday is in the future and they have, you know, a Small Business Administration loan and their birthday is 2165, we either again have a typo or we have fraud. So we say we appear to have gotten the century of your birth incorrect.

2:22:03

Speaker C

Or a great plot for a movie.

2:24:15

Speaker A

Yes. When I mean by ludicrous fraud. That's what I mean by ludicrous fraud.

2:24:16

Speaker B

Were those people getting payments?

2:24:22

Speaker A

Some were getting payments from Social Security, but the main fraud vector was to mark somebody as alive in Social Security and then use every other government payment system basically to do fraud. Because what those other government payment systems do would do. They would simply do an are you alive Check to the Social Security database. It's a bank shot.

2:24:23

Speaker B

What would you estimate as the total amount of fraud from this mechanism?

2:24:46

Speaker A

My guess is, by the way, the Government Accountability Office has done these estimates before. I'm not the only one. It was coming out of this. In fact, I think the GAO did analysis a rough estimate of fraud during the Biden administration and calculated at roughly half a trillion dollars. So don't take my word for it. Take it. A report issued during the Biden administration. How about that?

2:24:50

Speaker B

From this Social Security mechanism, it's one of many.

2:25:14

Speaker A

It's important to appreciate that the government is very ineffective at stopping fraud because it's not like it was a company stopping fraud. You've got a motivation because it's affecting the earnings of your company. But the government, they just print more money. So it's not like you need caring and competence. And these are in short supply at the federal level.

2:25:18

Speaker B

Yeah, I'm sorry, I mean, when you.

2:25:49

Speaker A

Go to the dmv, do you think, wow, this is a bastion of competence. Well, now imagine it's worse than the dmv, because it's the DMV that can print money.

2:25:50

Speaker B

So was it not possible, at least.

2:26:01

Speaker A

The state level DMVs need to. The states more or less need to stay within their budget or go bankrupt, but the federal government just prints full money.

2:26:02

Speaker B

Was it not possible? If there's a catchy half a trillion of fraud, why was it not possible to cut all that?

2:26:11

Speaker A

Because when essentially we did, we actually. Look, you really have to stand back and recalibrate your expectations for competence because you're operating in a world where you've got to sort of make ends meet, you've got to pay your bills, you.

2:26:18

Speaker B

Got to, you know, buying the microphones.

2:26:40

Speaker A

Yeah, yeah, exactly. So it's not like there's a giant, largely uncaring monster bureaucracy. It's not even a bunch of monacharistic computers that are just sending payments. Like one of the things that the DOGE team did there was, and it sounds so simple that probably will say, let's say 100 billion, maybe 200 billion a year, is simply requiring that payments from the main treasury computer, which is called pam, it's like Payment Accounts Master or something like that. There's 5 trillion PMS here requiring that any payment that goes out have a payment appropriation code, make it mandatory, not optional, and that you have anything at all in the comment field because you see, you have to recalibrate how dumb things are. Payments were being sent out with no appropriation code, not checking back to any congressional appropriation and no explanation. And this is why the Department of War, formerly Department of Defense, cannot pass an audit because the information is literally not there. Recalibrate your expectations.

2:26:42

Speaker B

I want everybody to understand this half a trillion number because There was an IG report in 2024.

2:28:02

Speaker A

Why is it so low?

2:28:08

Speaker B

Maybe, but we found that over seven years, the Social Security fraud they estimated was like 70 billions over seven years. So 10 billion a year. So I'd be curious to see what the other 490 billion is.

2:28:10

Speaker A

Federal government expenditures are 7.5 trillion a year. Yeah. What percentage. How competent do you think government is?

2:28:20

Speaker B

The discretionary spending there is like 15%.

2:28:28

Speaker A

Yeah, but it doesn't matter. Most of the fraud is non discretionary. It's basically a fraudulent Medicare, Medicaid, Social Security, Disability. There's a zillion government payments. And a bunch of these payments are in fact, they're block transfers to the states. So the federal government doesn't even have the information In a lot of cases, to even know if there's fraud, let's consider. Let's like reductio ad absurdum. The government is perfect and has no fraud. What is your probability estimate of that?

2:28:33

Speaker B

I mean.

2:29:13

Speaker A

Zero. Okay, so then would you say that foreign waste, that the government is 90%? That also would be quite generous. But if it's only 90%, that means that there's $750 billion a year of waste and fraud. And it's not 90%. It's not 90% effective.

2:29:14

Speaker B

This seems like a strange rate of first principles, the amount of fraud in the government. Just like how much do you think there is?

2:29:36

Speaker A

And then.

2:29:40

Speaker B

Anyways, we don't have to do it live. But I'd be curious to see.

2:29:43

Speaker A

I mean, you know a lot about fraud. Fraud. At stripe, people are constantly trying to do fraud.

2:29:45

Speaker C

Yeah, but as you say, it's like a little bit of a. We've really grounded down, but it's a little bit of a different problem space because you're dealing with a much more heterogeneous set of fraud vectors here than we are.

2:29:49

Speaker A

Yeah, but I mean, I mean, at stripe, you have high competence and you try hard. You have high competence and high caring, but still fraud is non zero. Now imagine it's at a much bigger scale. There's much less competence and much less caring. PayPal, back in the day, we try to manage fraud down to about 1% of the payment volume. That was very difficult. Took a tremendous amount of competence in caring to get fraud merely to 1%. Now imagine that you're in an organization where there's much less caring and much less confidence. It's going to be much more than 1%.

2:29:59

Speaker C

How do you feel now looking back on politics and doing stuff there where it feels like from the outside in, two things have been quite impactful. One, the America pack, and two, the acquisition of Twitter at the time. But also it seems like there was a bunch of heartache. And so what's your grading of the whole experience?

2:30:41

Speaker A

Well, I think those things needed to be done to maximize the probability that the future is good. But politics generally is very tribal, and it's very tribal and people lose their objectivity. Usually with politics, they generally have trouble seeing the good on the other side or the bad on their own side. That's generally how it goes. That, I guess, was one of the things that surprised me the most, is you often simply cannot reason with people if they're in one tribe or the other. They simply believe that everything their tribe does is Good and anything the other political tribe does is bad and persuading them is otherwise, it's almost impossible. So anyway, but I think overall those actions, acquiring Twitter, getting Trump elected, even though it makes a lot of people angry, I think those actions are good for civilization.

2:31:11

Speaker B

How does it feed into the future you're excited about?

2:32:30

Speaker A

Well, America needs to be strong enough to last long enough to extend life to other planets and to get, I guess, AI and robotics to the point where we can ensure that the future is good. On the other hand, if we were to descend into say communism or some situation where the state was extremely oppressive, that would mean that we might not be able to become multi planetary and the state might stamp out our progress in AI and robotics.

2:32:33

Speaker B

How do you feel about Optimus Grok, et cetera, are going to be leveraged by and not just yours. Any revenue maximizing company's products will be leveraged by the government over time. How does this concern manifest? In what private companies should be willing to give governments. What kinds of guardrails should AI models be made to do? Whatever the government that has contracted them out to do, ask them to do. Should Grok get to say, actually even the military wants to do X? No, the Grok will not do that.

2:33:17

Speaker A

I think probably the biggest danger of AI or maybe the biggest danger of fail for AI and robotics going wrong is government.

2:34:01

Speaker B

Interesting.

2:34:10

Speaker A

You know, I mean the way like, like people who are opposed to corporations or worried about corporations should really worry about the most about government. Because government is just a corporation in the limit. It's a government. It is, it is, it is. Government is just the biggest corporation with a monopoly on violence. So I always find it like a strange dichotomy where people would think corporations are bad, but the government is good. When the government is simply the biggest and worst corporation. But people have that dichotomy. They somehow think at the same time that government can be good, the corporations bad. And this is not true. Corporations have better morality than the government. So I actually think it's. That is the thing to be worried about. It's like if the government should not, the government could potentially use AI and robotics to suppress the population. That is a serious concern.

2:34:11

Speaker B

As a guy building AI and robotics, how do you prevent that?

2:35:18

Speaker A

Well, I think if you have a limited government, if you limit the powers of government, which is like really what the US Constitution is intended to do, is intended to limit the powers of government, then you're probably going to have a better outcome than if you have more governments will be available.

2:35:22

Speaker C

To all governments, right?

2:35:42

Speaker A

Not about all governments. I mean it's difficult to predict the. Like I said, what's the end endpoint or what is many years in the future. But it's difficult to predict this sort of path. Along that way, if civilization progresses, AI will vastly exceed the sum of all human intelligence and there will be far more robots than humans along the way. What happens? It's very difficult to predict.

2:35:45

Speaker B

I mean it seems like one thing you could do is just say.

2:36:20

Speaker A

You.

2:36:24

Speaker B

Are not allowed to. Whatever government index, you're not allowed to use Optimus to do XYZ just write out like a policy. I mean I think you tweeted recently that Grok should have a moral constitution. And one of those things could be that we limit what governments are allowed to do with disadvanced technology.

2:36:24

Speaker A

I mean yeah, we can do what is. Technically. I mean if the politicians pass a law and they can enforce that law, then it's hard to not do that law. The best thing we can have is limited government where you have the appropriate cross checks between the executive, judicial and legislative branches.

2:36:40

Speaker B

I guess the reason I'm curious about is this. At some point it seems like the limits will come from you, right? Like you've got the Optimus, you've got the space GPUs, you've got the.

2:37:10

Speaker A

You think I'll be the boss of.

2:37:20

Speaker B

The government or you will get the. I mean already it's the case with SpaceX that for things that are crucial to the like the government really cares about getting certain satellites up in space. Whatever it needs SpaceX, it is a necessary contractor. And you are in the process of building more and more of the.

2:37:21

Speaker A

The.

2:37:44

Speaker B

Technological components of the future that will have an analogous role in different industries. And you could have this ability to set some policy that suppressing classical liberalism in any way. My companies will not help in any way with that or some policy like that.

2:37:45

Speaker A

I will do my best to ensure that anything that's within my control maximizes the good outcome for humanity. I think anything else would be short sighted because obviously I'm part of humanity. So I like humans. Pro human. Pro human.

2:38:05

Speaker B

You mentioned that Dojo 3 will be used for space based computer.

2:38:29

Speaker A

You really read what I say.

2:38:34

Speaker B

I don't know if you know Twitter, but I know you lion, you have a lot of followers.

2:38:37

Speaker A

Big giveaway. How did you discern my secrets? I post them all.

2:38:41

Speaker B

How do you design a chip for space? What changes?

2:38:48

Speaker A

Well, I guess you want to design it to be more radiation tolerant and run at A higher temperature. So roughly if you increase the operating temperature by 20% in degrees Kelvin, you can cut your radiator mass in half. So running at a higher temperature is helpful. In space, there's various things you can do for shielding the memory, but neural nets are going to be very resilient to bit flips. So most of what happens from radiation is random bit flips. But if you've got a multi trillion parameter model and you get a few bit flips, it doesn't matter. Heuristic programs are going to be much more sensitive to bit flips than some giant parameter file. So I just designed it to run hot and I think you pretty much do it the same way that you do things on earth, apart from make it run hotter.

2:38:53

Speaker B

I mean the solar array is most of the weight on the satellite. Is there a way to make the GPUs even more powered ends than what Nvidia and TPUS and et cetera are planning on doing that would be especially privileged in the space based world?

2:40:01

Speaker A

Well, I mean the basic math is if you can do about a kilowatt per reticle and then you'd need, you know, 100 million full reticle chips to do 100 gigawatts. Yeah. So yeah, depending on what your yield assumptions are, you know that that tells you how many chips you need to make. But cool. You need if you want, if you're going to have 100 gigawatts of power, you need 100 million chips running that are running a kilowatt sustained output per reticle.

2:40:17

Speaker B

Basic math, 100 million chips depends on. Yeah. If you look at the die size of something like black hole GPS or something and how many you can get out of a wafer, you can get on the order of dozens or less per wafer. So basically this is a world where if we're putting that out every single year, you're producing millions of wafers a month. That's the plan with Terafab. Millions of wafers a month of advanced process nodes.

2:41:03

Speaker A

It's got to be some number north of a million. I think you got to do the memory too.

2:41:38

Speaker B

Yeah. Are you going to make a memory fab?

2:41:41

Speaker A

I think the terraform's got to do memory. It's got to do logic memory and packaging.

2:41:44

Speaker B

I'm very curious how somebody gets started. This is the most complicated thing man has ever made. And obviously if anybody's up to the task, you're up to the task. So you realize there's a bottleneck and you go to your Engineers. And what is the next. What do you tell them to do? I want a million wafers a month in 2030. What is the next. Like, what do you.

2:41:48

Speaker A

That's right.

2:42:08

Speaker B

Do you like, call asml, like, what.

2:42:09

Speaker A

Is exactly what I want.

2:42:10

Speaker B

What is the next step?

2:42:13

Speaker C

That's so much to ask.

2:42:13

Speaker A

Well, we make a little fab and see what happens. Make our mistakes at a small scale and then make a big one.

2:42:15

Speaker B

Is a little fab done?

2:42:25

Speaker A

No, it's not done. We're not going to keep that cat in the bag. That cat's going to come out of the back room. It'll be like drones hovering over the bloody thing. You'll be able to see its construction progress on X. Right. In real time. So I don't know, we could just flounder and failure. To be fair, it's like not. Success is not guaranteed. But since we want to try to make something like 100 million. Yeah. We want 100 gigawatts of power and 100 chips that can take 100 gigawatts. Right. So call it. Yeah. By 2030. So then. We'll take as many chips as our suppliers will give us. I've actually said this to TSMC and Samsung and Micro and it's like, please build your more fabs faster and we will guarantee you to buy the output of those fabs. So they're already moving as fast as they can. It's not like, to be clear, it's not like us. It's us plus them.

2:42:27

Speaker C

There's a narrative that the people doing AI want a very large number of chips as quickly as possible. And then many of the input suppliers, the fabs, but also the turbine manufacturers are not ramping up production very quickly. Yeah. The explanation you hear is that they're dispositionally conservative, they're Taiwanese or German as the story may be, and they just don't believe. They say. Is that really the explanation or is there something else?

2:43:46

Speaker A

Well, I mean, it's reasonable. If somebody's been in, say, the computer memory business for 30 or 40 years and they've seen cycles, they've seen like boom and bust, like 10 times.

2:44:17

Speaker C

Yeah.

2:44:29

Speaker A

You know, so. So like that's a lot of layers of scar tissue, you know, so it's like, it's like during the boom times, looks like everything is going to be great forever. And then the crash happens and then they're desperately trying to avoid bankruptcy and then there's another boom and another crash.

2:44:30

Speaker C

Are there other. Are there other ideas? You think others should go pursue that? You're not for whatever reasons right now.

2:44:46

Speaker A

I mean, there are a few companies that are pursuing new ways of doing chips, but they're just not scaling fast.

2:44:55

Speaker C

I don't even within AI, I mean.

2:45:03

Speaker A

Just generally, I'd say people should do the thing where they find that they're highly motivated to do that thing as opposed to, you know, some idea that I suggest they should do the thing that they find personally interesting and motivating to do. But you know, going back to the limiting factor, use that phrase about 100 times the current limiting factor that I see in the time frame, in the sort of 2029, in the three to four year time frame, it's chips. In the one year time frame, it's energy, power production, electricity. It's not clear to me that there's enough usable electricity to turn on all the AI chips that are being made. Towards the end of this year, I think people are going to have real trouble turning on like the chip output will exceed the ability to turn chips on.

2:45:05

Speaker B

What's your plan to deal with that world?

2:46:17

Speaker A

Well, we're trying to accelerate electricity production. I guess that's maybe one of the reasons that XAI will be maybe the leader. Hopefully the leader is that we'll be able to turn on more chips than other people can turn on faster. Because we're good at hardware and generally the innovations from the corporations that call themselves labs, the ideas tend to flow. It's rare to see that there's more than about a six month difference between the idea is travel back and forth with the people. So I think you sort of hit the hardware wall and then whichever company can scale hardware the fastest will be the leader. And so I think XAI will be able to scale hardware the fastest and therefore most likely will be the leader.

2:46:20

Speaker C

You joked or self conscious about using the limiting factor phrase again, but I actually think there's something deep here. And if you look at a lot of things we've touched on over the course of it, maybe kind of a good note to end on. If you think of a senescent lower agency company, it would have some bottleneck and not really be doing anything about it. Marc Andreessen had the line of most people are willing to endure any amount of chronic pain to avoid acute pain. And it feels like a lot of the cases we're talking about are just leaning into the acute pain. Whatever it is, it's like, okay, we gotta figure out how to work with steel or we gotta figure out how to run the chips in space or we'll take some near term acute pain to actually solve the bottleneck. And so that's kind of a unifying theme.

2:47:21

Speaker A

I have a high pain threshold. That's helpful.

2:48:13

Speaker C

Solve the bottlenecks, yes.

2:48:17

Speaker A

So, you know, one thing I can say is, like, I think the future is going to be very interesting. And as I said, the Davos I've only been to, I was looking at Davos. I think it was like on the ground for like three hours or something. It's better to be. It's better to err on the side of optimism and be wrong than err on the side of pessimism and be right for quality of life. So your happiness will be. You'll be happier if you are on the side of optimism rather than erring on the side of pessimism. And so I recommend erring on the side of optimism.

2:48:22

Speaker C

That's that. Cool.

2:49:07

Speaker B

Milan, thanks for doing this.

2:49:10

Speaker C

Thank you.

2:49:11

Speaker A

All right.

2:49:13

Speaker C

Oh, great stamina.

2:49:14

Speaker B

Hopefully this encounters the pain and the pain tolerance. Hey, everybody. I hope you enjoyed that episode. If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it. It's also helpful if you leave a rating or a comment on whatever platform you're listening on. If you're interested in sponsoring the podcast, you can reach out@dwarkesh.com advertise. Otherwise, I'll see you on the next one.

2:49:17