Elon Musk on Space GPUs, AI, Optimus, and his manufacturing method
166 min
• Feb 5, 2026
Summary
Elon Musk discusses the convergence of AI, space-based computing, humanoid robots, and manufacturing at scale. He argues that space-based data centers will become the cheapest place for AI compute within 30-36 months, while Optimus robots and advanced chip manufacturing are critical bottlenecks for scaling civilization's technological capabilities.
Insights
- Space-based solar power for AI data centers will be 5-10x more efficient than Earth-based solutions due to no atmosphere, weather, or day-night cycles, making it economically dominant within 30-36 months
- The limiting factor for AI scaling has shifted from algorithms to hardware: electricity generation, chip production, and manufacturing capacity are now the critical constraints
- Humanoid robots represent an 'infinite money glitch' through recursive exponential growth: exponential AI intelligence × exponential chip capability × exponential electromechanical dexterity, enabling robots to manufacture more robots
- Government fraud and waste may exceed $500B annually; fixing obvious inefficiencies (dead person payments, missing appropriation codes) could save $100-200B/year without policy changes
- China's manufacturing dominance (4x US population, higher work ethic, 2x ore refining capacity) can only be countered by US robotics scaling, not human labor competition
Trends
- Space-based AI infrastructure becoming economically viable as launch costs decrease and solar efficiency advantages compound
- Vertical integration of chip design, fab construction, and power generation as competitive necessity for AI leaders
- Humanoid robots transitioning from research demos to manufacturing-scale deployment within 3-5 years
- Government efficiency becoming a limiting factor for national competitiveness; private sector leading on fraud detection
- Energy production becoming the primary constraint on AI scaling, not compute or algorithms
- Domestic manufacturing renaissance enabled by robotics, particularly in ore refining and battery production
- Full reusability of space vehicles (Starship) as prerequisite for multi-planetary civilization
- Digital human emulation (AI performing all computer-based work) as near-term milestone before physical robotics deployment
- Turbine blade casting and advanced semiconductor fab capacity as critical global bottlenecks
- Shift from fab partnerships to internal fab construction for companies needing >100 gigawatts of chip production
Topics
- Space-Based Data Centers and Solar Power
- AI Chip Manufacturing and Fab Capacity
- Humanoid Robot Development and Optimus
- Starship Reusability and Heat Shield Engineering
- Government Fraud Detection and DOGE Efficiency
- Electricity Generation and Power Constraints
- Digital Human Emulation and AI Capabilities
- Supply Chain Bottlenecks (Turbines, Memory, Casting)
- US-China Manufacturing Competition
- Grok AI Alignment and Truth-Seeking
- Vertical Integration Strategy for Hardware
- Optimus Manufacturing Scale-Up (1M to 10M units/year)
- Starlink and Satellite Internet
- Tesla AI-5 and AI-6 Chip Development
- Mass Driver on the Moon
Companies
SpaceX
Core focus: Starship development, 10,000+ launches/year target, space-based AI infrastructure, Starlink, and lunar ma...
Tesla
Optimus humanoid robot development, AI-5/AI-6 chip design, lithium/nickel refining, 100GW solar production mandate
xAI
Grok AI development, mission to understand the universe, competing for AI leadership through hardware scaling and pow...
TSMC
Primary chip fab partner; backlogged through 2030; Musk guarantees to buy all fab output if they expand capacity
Samsung
Secondary chip fab partner for Tesla AI chips; operating at maximum capacity alongside TSMC
NVIDIA
GPU manufacturer; output is digital files sent to Taiwan; discussed as comparison for digital-output business models
Anthropic
AI safety research company; praised for interpretability work on AI debugging and understanding model internals
OpenAI
Competitor in AI race; discussed as revenue-maximizing corporation calling itself a lab
Google
Digital output business model; competing in AI space with TPU chips and large compute clusters
Meta
Digital output business model; competing in AI infrastructure and compute
Apple
Attempted to recruit Tesla engineers for electric car program; example of pixie dust hiring effect
BYD
Chinese EV manufacturer reaching Tesla production volumes; example of Chinese manufacturing competitiveness
Siemens
Turbine manufacturer; discussed as potential source for power generation equipment
GE
Turbine manufacturer; discussed alongside Siemens as limited capacity supplier
ASML
Semiconductor equipment manufacturer; critical bottleneck for chip fab expansion; China cannot replicate without ASML
Boring Company
Musk's tunneling company; referenced as model for iterative hardware improvement and manufacturing optimization
Waymo
Autonomous vehicle competitor; mentioned as example of long timeline from demo to commercial robotaxi service
Unitree
Chinese humanoid robot manufacturer selling units at $6-13K; competitive threat to Optimus pricing
People
Elon Musk
CEO of SpaceX, Tesla, xAI; primary speaker discussing AI, robotics, space infrastructure, and manufacturing strategy
Dwarkesh Patel
Podcast host collaborating on episode; co-interviewer alongside Cheeky Pint host
Mark Juncosa
SpaceX executive; example of capable technical deputy in Musk's management structure
Steve Davis
SpaceX executive; runs the Boring Company; example of internally promoted executive bench
Bill Riley
SpaceX executive; example of long-tenure technical leadership at Musk's companies
Wernher von Braun
Historical rocket engineer; referenced as example of truth-seeking scientist under oppressive regime
Werner Heisenberg
Physicist; discussed as example of scientist maintaining technical excellence under Nazi regime
Robert Heinlein
Science fiction author; 'The Moon is a Harsh Mistress' inspired Musk's lunar mass driver concept
Arthur C. Clarke
2001: A Space Odyssey writer; referenced for lesson on not making AI lie (HAL 9000 example)
Quotes
"The most economically compelling place to put AI will be space in 36 months or less, maybe 30 months. And then it will get ridiculously better to be in space."
Elon Musk•Early discussion on space-based data centers
"You can't scale very much on Earth. Once you start thinking in terms of what percentage of the sun's power are you harnessing, you realize you have to go to space."
Elon Musk•Space infrastructure discussion
"The limiting factor is chips. Limiting factor, once you can get to space, is chips. But the limiting factor, before you can get to space, will be power."
Elon Musk•Bottleneck analysis
"Optimus is the infinite money glitch because you can use them to make more Optimuses."
Elon Musk•Humanoid robot economics discussion
"We are 1,000% going to go bankrupt as a country and fail as a country without AI and robots. Nothing else will solve the national debt."
Elon Musk•DOGE and government efficiency discussion
"I think we'll find we're in the singularity and like, oh, okay, we've still got a long way to go."
Elon Musk•AI capability discussion
Full Transcript
Cheeky Pint is back. This episode is a collab with Dwarkesh Patel, whose podcast has really blown up in tech, and I really enjoy it. We sat down with Elon Musk, and as you can imagine, there was a lot to cover. So, are there really three hours of questions? Are you fucking serious? Yeah. You don't have a lot to talk about, Elon? Holy fuck, man. I mean, it's the most interesting point. All the storylines are kind of converging right now, so we'll see how much... Almost like I planned it. Exactly. He was never short of a subject, was he? So, as you know better than anybody else, of the total cost of ownership of a data center, only 10% to 15% is energy, and that's the part you're presumably saving by moving this into space. Most of it's the GPUs. If they're in space, it's hard to service them, or you can't service them, and so the depreciation cycle goes down on them. So it's way more expensive to have the GPUs in space, presumably. What's the reason to put them in space? Well, the availability of energy is the issue. I mean, if you look at electrical output outside of China, everywhere outside of China, it's more or less flat. Maybe a slight increase, but pretty much flat. China has a rapid increase in electrical output. But if you're putting data centers anywhere except China, where are you going to get your electricity, especially as you scale? The output of chips is growing pretty much exponentially, but the output of electricity is flat. So how are you going to turn those chips on? Magical power sources? Magical electricity fairies? You're famously a big fan of solar. One terawatt of solar power, so with a 25% capacity factor, that's like four terawatts of solar panels. It's like one percent of the land area of the United States. Aren't we far into the singularity when we've got one terawatt of data centers, right? So what are you running out of exactly? How far into the singularity are you then?
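The back-of-envelope behind the interviewer's one-terawatt figure can be checked. The 25% capacity factor is quoted in the conversation; the packing density of roughly 40 MW of nameplate capacity per square kilometer is an assumption about typical utility-scale solar, not a number from the transcript:

```python
# Back-of-envelope check of "1 TW average = ~4 TW of panels = ~1% of US land".
# ASSUMPTION: utility-scale solar packs ~40 MW nameplate per km^2 once row
# spacing and access roads are included.

AVG_POWER_TW = 1.0          # target average data-center power, terawatts
CAPACITY_FACTOR = 0.25      # ground solar: night, weather, sun angle

nameplate_tw = AVG_POWER_TW / CAPACITY_FACTOR          # 4.0 TW of panels
MW_PER_KM2 = 40                                        # assumed packing density
land_km2 = nameplate_tw * 1e6 / MW_PER_KM2             # ~100,000 km^2
US_LAND_KM2 = 9.15e6
pct_of_us = 100 * land_km2 / US_LAND_KM2               # ~1.1%

print(f"{nameplate_tw:.0f} TW nameplate, {land_km2:,.0f} km^2, {pct_of_us:.1f}% of US land")
```

With those assumptions the land requirement comes out just over one percent of US land area, consistent with the figure quoted in the exchange.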
You tell me. Yeah, exactly. So I think we'll find we're in the singularity and like, oh, okay, we've still got a long way to go. But is the plan to put it in space after we've covered Nevada in solar panels? I think it's pretty hard to cover Nevada in solar panels. You have to get permits for that. Try getting the permits for that. So space is really a regulatory play. It's harder to build on land than it is in space. It's harder to scale on the ground than it is to scale in space. But also, you're going to get about five times the effectiveness of solar panels in space versus the ground, and you don't need batteries. I almost wore my other shirt, which says it's always sunny in space, which it is. Because you don't have a day-night cycle, seasonality, clouds, or an atmosphere in space, and the atmosphere alone results in about a 30% loss of energy, any given solar panel can produce about five times more power in space than on the ground. And you avoid the cost of having batteries to carry you through the night. So it's actually much cheaper to go to space. And my prediction is that by far the cheapest place to put AI will be space in 36 months or less, maybe 30 months. 36 months? Less than 36 months. How do you service GPUs as they fail, which happens quite often in training?
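The "about five times" figure falls out of two numbers given in the conversation: a roughly 25% capacity factor for ground solar and a roughly 30% atmospheric loss. A minimal sketch of that arithmetic:

```python
# Why a panel delivers ~5x more energy in space: no day-night cycle or weather
# (capacity factor ~1.0 vs ~0.25 on the ground) and no ~30% atmospheric loss.
# Both input numbers are the ones quoted in the conversation.

GROUND_CF = 0.25        # ground capacity factor
ATMO_LOSS = 0.30        # fraction of energy lost to the atmosphere

energy_ratio = (1.0 / GROUND_CF) * (1.0 / (1.0 - ATMO_LOSS))
print(f"space panel delivers ~{energy_ratio:.1f}x the energy of the same panel on the ground")
```

Strictly this gives closer to 5.7x, since the two effects multiply; "about five times" is the rounded version.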
Actually, it depends on how recent the GPUs are. I mean, at this point we find our GPUs to be quite reliable. There's infant mortality, which you can obviously iron out on the ground: you can just run them on the ground and confirm that you don't have infant mortality with the GPUs. But once they start working, once you're past the initial, you know, debug cycle of NVIDIA or whoever's making the chips, could be Tesla AI6 chips or something like that, or it could be, you know, TPUs or Trainiums or whatever, their actual reliability past a certain point is quite good. So I don't think the servicing thing is an issue. But you can mark my words: in 36 months, but probably closer to 30 months, the most economically compelling place to put AI will be space. And then it will get ridiculously better to be in space. And then the scaling, the only place you can really scale is space. Once you start thinking in terms of what percentage of the sun's power are you harnessing, you realize you have to go to space. You can't scale very much on Earth. But by very much, to be clear, you're talking like terawatts. Yeah. Well, all of the United States currently uses only half a terawatt of power on average. Yeah. Right. So, you know, if you say a terawatt, that would be twice as much electricity as the United States currently consumes. So that's quite a lot. And can you imagine building that many data centers, that many power plants? It's like those who have lived in software land don't realize they're about to have a hard lesson in hardware, that it's actually very difficult to build power plants. And you don't just need power plants. You need all of the electrical equipment. You need the electrical transformers to run the transformers, the AI transformers. Now, the utility industry is a very slow industry.
They pretty much, you know, answer to the government, to the Public Utility Commission, so their pace matches the government's, literally and figuratively. So they're very slow, because their pace has always been very slow, and trying to get them to move fast is just, like, you know... Have you ever tried to do an interconnect agreement with a utility at scale, like with a lot of power? As a professional podcaster, I can say that I have not. You need many more views before that becomes an issue. They have to do a study for a year. Okay? Like, a year later, they'll come back to you with their interconnect study. Can't you solve this with your own behind-the-meter power stuff? You can build power plants. Yeah. That's what we did at xAI. For Colossus 2, too. So, yeah, why are we talking about the grid? Why not just, like, build GPUs and power, co-located? That's what we did. Right, right. But I'm saying, why isn't this a generalized solution? When you're talking about all the issues... Where do you get the power plants from? I'm saying, when you talk about all the issues working with utilities, you can just build private power plants with the data centers. Right. But it begs the question of where do you get the power plants from? I mean... The power plant makers. Oh, I was just saying. Like, is it the gas turbine backlog, basically? Yes. You can drill down a level further. It's the vanes and blades in the turbines that are the limiting factor, because the casting, it's a very specialized process to cast the blades and vanes in the turbines, if you're using gas power. And it's very difficult to scale other forms of power. You can scale potentially solar, but the tariffs currently for importing solar into the U.S. are gigantic and the domestic solar production is pitiful. Why not make solar? That seems like a good Elon-shaped problem. We are going to make solar. Okay. Great.
Both SpaceX and Tesla are building towards 100 gigawatts a year of solar cell production. How low down the stack, like from polysilicon up to the wafer to the final panel? I think you've got to do the whole thing, from raw materials to the finished cell. Now, if it's going to space, it costs less and it's easier to make solar cells that go to space, because they don't need glass, or they don't need much glass, and they don't need heavy framing, because they don't have to survive weather events. There's no weather in space. So it's actually a cheaper solar cell that goes to space than the one on the ground. Is there a path to getting them as cheap as you need in the next 36 months? Solar cells are already very cheap. They're, like, fantastically cheap. I think solar cells in China are around, like, 25, 30 cents a watt or something like that. It's absurdly cheap. And when you take that cell and put it in space, it's five times cheaper because it's five times... In fact, no, it's not five times cheaper. It's ten times cheaper, because you don't need any batteries. So the moment your cost of access to space becomes low, by far the cheapest and most scalable way to generate tokens is space. It's not even close. It'll be an order of magnitude easier to scale, chips aside, an order of magnitude. The point is you won't be able to scale on the ground. You just won't. People are going to hit the wall big time on power generation. They already are. So, like, the series of miracles that the xAI team had to accomplish in order to get a gigawatt of power online was crazy. We had to gang together a whole bunch of turbines. And then we had permit issues in Tennessee and had to go across the border to Mississippi, which is fortunately only a few miles away. But we still had to run the high power lines a few miles and build a power plant in Mississippi. And it was very difficult to build that.
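The jump from "five times cheaper" to "ten times cheaper" comes from dropping the battery. A rough sketch using the quoted ~28 cents/watt cell price, with two assumptions not in the transcript: a near-continuous space capacity factor of about 0.95, and storage roughly doubling the cost of a ground installation:

```python
# Rough $/average-watt comparison of the same cell on the ground vs in space.
# ASSUMPTIONS (not from the transcript): space capacity factor ~0.95, and a
# battery system that roughly doubles the cost of the ground installation.
# Launch cost is deliberately ignored, matching the "once access is cheap" framing.

CELL_COST_PER_W = 0.28      # the quoted ~25-30 cents/W Chinese cell price
GROUND_CF = 0.25
SPACE_CF = 0.95             # near-continuous sunlight in a high orbit
BATTERY_MULT = 2.0          # assumed: batteries double ground system cost

ground = CELL_COST_PER_W / GROUND_CF * BATTERY_MULT    # $ per average watt delivered
space = CELL_COST_PER_W / SPACE_CF                     # $ per average watt delivered
advantage = ground / space                             # ~7-8x under these assumptions

print(f"ground ${ground:.2f}/W-avg vs space ${space:.2f}/W-avg: ~{advantage:.0f}x")
```

Under these assumptions the advantage lands in the 7-8x range, which is the "order of magnitude" being gestured at.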
And people don't understand how much electricity you actually need at the generation level in order to power a data center. Because the news will look at the power consumption of, say, a GB300 and multiply that by a thing and then think that's the amount of power you need. All the cooling and everything. Wake up. Yeah. That's a total noob move. You've never done any hardware in your life before. Besides the GB300, you've got to power all of the networking hardware. There's a whole bunch of CPU and storage stuff that's happening. You've got to size for your peak cooling requirements. So that means: can you cool even in the worst hours of the worst day of the year? Well, it gets pretty freaking hot in Memphis. So you're going to have like a 40% increase on your power just for cooling, assuming you don't want your data center to turn off on hot days and want to keep going. Then you've got to say, well, there's another multiplicative element on top of that, which is: are you assuming that you never have any hiccups in your power generation? Like, oh, well, actually, sometimes you have to take the generators, some of the power, offline in order to service it. Oh, okay, now you add another 20, 25% multiplier on that, because you've got to assume that you've got to take power offline to service it. So the actual answer is, roughly every 110,000 GB300s, inclusive of networking, CPU, storage, cooling, and margin for servicing power, is roughly 300 megawatts. Sorry, say that again. The way to think about it is: what you need at the generation level to service 330,000 GB300s, including all of the associated support networking and everything else, and the peak cooling, and to have some margin, some power margin reserve, is roughly a gigawatt. Can I ask a very naive question? Yeah.
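The multiplier chain above can be reproduced. The 1.4x cooling factor and ~1.2x servicing margin are the ones quoted; the per-GPU all-in IT draw is an assumed round number chosen to illustrate the logic, not a quoted spec:

```python
# Reproducing the "~330,000 GB300s per gigawatt" sizing logic.
# ASSUMPTION: ~1,800 W all-in IT load per GPU (GPU plus its share of
# networking, CPU, and storage). The 1.4x and 1.2x multipliers are quoted.

GPUS = 330_000
IT_WATTS_PER_GPU = 1_800        # assumed all-in IT load per GPU, watts
COOLING_MULT = 1.40             # size for the worst (hottest) day
SERVICE_MULT = 1.20             # headroom so generators can be taken offline

total_gw = GPUS * IT_WATTS_PER_GPU * COOLING_MULT * SERVICE_MULT / 1e9
print(f"generation needed: ~{total_gw:.2f} GW")
```

With that assumed IT draw, the chain lands almost exactly on one gigawatt, and scaling down by a third recovers the 110,000-GPU / ~300 MW version of the same claim.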
You know, you're describing the engineering details of doing this stuff on Earth, but then there are analogous engineering difficulties of doing it in space. How do you replace InfiniBand with orbital lasers, et cetera, et cetera? How do you make it resistant to radiation? I don't know the details of the engineering, but fundamentally, what is the reason to think those challenges, which have never had to be addressed before, will end up being easier than just, like, building more turbines on Earth? There are companies that build turbines on Earth. They can make more turbines, right? I invite you, again, to try doing it, and then you'll see. So, like, the turbines are sold out through 2030. Have you guys considered making your own? I think in order to bring enough power online, SpaceX and Tesla will probably have to make the turbine blades, the vanes and blades, internally. Just the blades or the turbines? The limiting factor... you can get everything except the so-called blades and vanes. You can get everything else 12 to 18 months before the vanes and blades. The limiting factor is the vanes and blades. And there are only three casting companies in the world that make these. And they're massively backlogged. Is this Siemens, GE, those guys, or is it a subcontractor? No, it's other companies. I mean, sometimes they have a little bit of casting capability in-house, but I'm just saying you can just call any of the turbine makers, and they will tell you. It's not top secret. It's probably on the Internet right now. If it wasn't for the tariffs, would Colossus be solar-powered? It would be much easier to make it solar-powered, yeah. The tariffs are nuts, several hundred percent. Don't you know some people? We also need speed. Yeah, no. You know, the president has a... You know, we don't agree on everything. And this administration is not the biggest fan of solar. We also need the land, the permits, everything.
So if you're trying to move very fast... I do think scaling solar on Earth is a good way to go, but you do need some amount of time to find the land, get the permits, get the solar, pair that with the batteries. Why would it not work to stand up your own solar production? And then you're right that you eventually run out of land, but there's a lot of land here in Texas, there's a lot of land in Nevada, including private land; it's not all publicly owned land. And so you'd be able to at least get the next Colossus and the next one after that. At a certain point you hit a wall, but wouldn't that work for the moment? As I said, we are scaling solar production. There's a rate at which you can scale physical production of solar cells, and I'm going as fast as possible in scaling domestic production. You're making the solar cells at Tesla? Both Tesla and SpaceX have a mandate to get to 100 gigawatts a year of solar. Speaking of the annual capacity, I'm curious, in five years' time, let's say, what will the installed capacity be on Earth and in space? I deliberately picked five years because it's after your once-we're-up-and-running threshold. So in five years' time, what's the on-Earth versus in-space installed AI capacity? Five years from now, my prediction is we will launch and be operating every year more AI in space than the cumulative total on Earth, which I would expect to be, sort of five years from now, at least a few hundred gigawatts per year of AI in space and rising. Launching from Earth, I think you can get to around a terawatt a year of AI in space before you start having fuel supply challenges for the rocket. Okay, but you think you can get hundreds of gigawatts per year in five years' time? Yes.
So 100 gigawatts, depending on the specific power of the whole system with solar arrays and radiators and everything, is on the order of 10,000 Starship launches. Yes. And you want to do that in one year. And so that's like one Starship launch every hour. Yeah. That's happening in this city. Walk me through a world where there's a Starship launch every single hour. Yeah, I mean, that's actually a lower rate compared to airlines, like aircraft. There's a lot of airports. A lot of airports. And you've got to launch to polar orbit. No, it doesn't have to be polar, though there's some value to sun-synchronous, but I think actually if you just go high enough, you start getting out of Earth's shadow. How many physical Starships are needed to do 10,000 launches a year? I don't think we'll need more than... I mean, you could probably do it with as few as, like, 20 or 30. It really depends on how quickly the ship can go around the Earth, and the ground track, before the ship comes back over the launch pad. So if you can use a ship every, say, 30 hours, you could do it with 30 ships. But we'll make more ships than that. But SpaceX is gearing up to do 10,000 launches a year, and maybe even 20,000 or 30,000 launches a year. Is the idea to become basically a hyperscaler, become an Oracle, and lend this capacity to other people? What are you going to do with... presumably SpaceX is the one launching all this. So SpaceX is going to be a hyperscaler? Hyper, hyper. Yeah, I mean, assuming my predictions come true, SpaceX will launch more AI than the cumulative amount on Earth of everything else combined. Is this mostly inference? Most AI will be inference. Like, already, inference for the purpose of training is most of training. And there's a narrative that the change in discussion around the SpaceX IPO is because previously SpaceX was very capital efficient; it just wasn't that expensive to develop.
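The cadence and fleet-size arithmetic from this exchange, using the 10,000-launch target and the 30-hour turnaround mentioned:

```python
# Cadence math for "10,000 Starship launches a year with ~30 ships".
LAUNCHES_PER_YEAR = 10_000
HOURS_PER_YEAR = 8_760

# One launch roughly every hour (actually every ~53 minutes).
interval_min = HOURS_PER_YEAR / LAUNCHES_PER_YEAR * 60

# With a 30-hour turnaround, a 30-ship fleet flies ~8,760 times a year,
# close to the 10,000 target; a slightly faster turnaround closes the gap.
SHIPS = 30
TURNAROUND_H = 30
fleet_launches = SHIPS * HOURS_PER_YEAR // TURNAROUND_H

print(f"one launch every {interval_min:.0f} min; {SHIPS} ships give {fleet_launches:,} launches/yr")
```

So "one launch every hour" and "as few as 20 or 30 ships" are mutually consistent to within the rounding in the conversation.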
Even though it sounds expensive, it's actually very capital efficient in how it runs. Whereas now you're going to need more capital than can be raised in the private markets. Like, the private markets can accommodate raises of, as we've seen from the AI labs, tens of billions of dollars, but not beyond that? Is it that you'll just need more than tens of billions of dollars per year, and that's why you'd go public? Yeah, it's tough for me to say things about companies that might go public. That's never been a problem for you, Elon. You know, there's a price to pay for these things. Make some general statements for us about the depth of the capital markets, between public and private markets? There's a lot more capital in the... Very general. There's obviously a lot more capital available in the public markets than private. It might be 100 times more capital, but it's at least way more than 10. But isn't it also the case that things that tend to be very capital-intensive... If you look at, say, real estate as, you know, a huge industry that raises a lot of money each year at an industry level, that tends to be debt financed, because by the time you're deploying that much money, you actually have a pretty... You have a clear revenue stream. Exactly, and a near-term return. And you see this even with the data center build-outs, which are famously being financed by the private credit industry. And so why not just debt finance? Speed is important. So I'm generally going to do the thing that... I mean, I just repeatedly tackle the limiting factor. Whatever the limiting factor is on speed, I'm going to tackle that. So if capital is the limiting factor, then I'll solve for capital; if it's not the limiting factor, I'll solve for something else. Based on your statements about Tesla and being public, I wouldn't have guessed that you thought the way to move fast is to be public. Normally, I would say that's true.
Like I said, I'd like to talk in some more detail, but the problem is, if you talk about public companies before they become public, you're going to have trouble, and then you have to delay your offering. And as you said, it's all for speed. Yes, exactly. You can't hype companies that might go public. That's why we have to be a little careful here. But we can talk about physics. So the way you think about scaling long-term is that Earth only receives about half a billionth of the sun's energy. And the sun is essentially all the energy. This is a very important point to appreciate, because sometimes people will talk about modular nuclear reactors or various, like, fusion on Earth, but you have to step back a second and say: if you're going to climb the Kardashev scale and harness some non-trivial percentage of the sun's energy, like, let's say you wanted to harness a billionth of the sun's energy, which sounds pretty small, that would be about, call it roughly, 100,000 times more electricity than we currently generate on Earth for all of civilization, give or take an order of magnitude. So obviously, the only way to scale is to go to space with solar. Launching from Earth, you can get to about a terawatt per year. Beyond that, you want to launch from the moon. You want to have a mass driver on the moon. And with that mass driver on the moon, you could do probably a petawatt per year. We're talking these kinds of numbers, you know, terawatts of compute. Presumably, whether you're talking land or space, far, far before this point you've, like, run into... you actually need... maybe you don't, since the solar panels are more efficient, but you still need the chips. You still need the logic and the memory and so forth. You need a lot more chips, and to make them much cheaper, right? And so how are we getting a terawatt of... like, right now the world is going to be at 20-25 gigawatts of compute. How are we getting a terawatt of logic by 2030? I guess we're going to need some very big chip fabs.
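The solar-scale numbers check out against standard constants: Earth intercepts about half a billionth of the Sun's output, and harnessing a billionth of that output is on the order of 100,000 times humanity's current electricity generation. The ~3.4 TW figure for time-averaged world generation is an approximation:

```python
# Checking the Kardashev-scale arithmetic with standard physical constants.
import math

L_SUN = 3.8e26            # solar luminosity, watts
SOLAR_CONST = 1361        # W/m^2 at Earth's distance from the Sun
R_EARTH = 6.371e6         # Earth radius, meters

# Sunlight intercepted by Earth's cross-section: ~1.7e17 W,
# i.e. roughly half a billionth of the Sun's total output.
earth_intercept = math.pi * R_EARTH**2 * SOLAR_CONST
earth_fraction = earth_intercept / L_SUN

# ASSUMPTION: world electricity generation ~30,000 TWh/yr, time-averaged.
WORLD_ELEC_W = 3.4e12
multiple = (L_SUN * 1e-9) / WORLD_ELEC_W   # a billionth of the Sun vs today

print(f"Earth intercepts {earth_fraction:.1e} of the Sun; "
      f"a billionth of the Sun = {multiple:,.0f}x current electricity")
```

Both quoted figures fall out directly: the intercepted fraction is ~4.6e-10, and the billionth-of-the-Sun multiple is ~110,000x, well within the stated "give or take an order of magnitude".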
Tell me about it. You've mentioned publicly that you're doing a sort of terafab, tera being the new giga. I feel like the naming scheme at Tesla, which has been very catchy, is like you walking up the metric scale. At what level of the stack are you building this? Building the clean room and then partnering with an existing fab to get the process technology and buying the tools from them? What is the plan there? You can't partner with existing fabs, because they can't output enough. Their chip volume is too low. But for the process technology, partner for the IP. The fabs today all basically use machines from five companies. You know, so you've got ASML, Tokyo Electron, KLA-Tencor, you know, etc. So at first, I think you'd have to get equipment from them and then modify it, or work with them to increase volume. But I think you'd have to build perhaps in a different way. So I think the logical thing to do is to use conventional equipment in a conventional way to get to scale, and then start modifying the equipment to increase the rate. Kind of Boring Company style. Yeah. Kind of like, yeah, you sort of buy what is just a boring machine, then figure out how to dig tunnels in the first place, and then design a much better machine that's, you know, I don't know, some orders of magnitude better, faster. Here's a very simple lens. We can categorize technologies by how hard they are. And one categorization could be: look at things that China has not succeeded in doing. And if you look at Chinese manufacturing, it's still behind on leading-edge chips and still behind on leading-edge turbine engines and things like that. And so does the fact that China has not successfully replicated TSMC give you any pause about the difficulty? Or do you think that's not true for some reason? It's not that they have not replicated TSMC. They have not replicated ASML. That's the limiting factor. So you think it's just the sanctions, essentially? Yeah, China would be outputting vast numbers of chips.
But couldn't they, up to relatively recently, buy them? No, the ASML restrictions have been in place for a while. But I think China's going to start making pretty compelling chips in two to four years. Would you consider making the ASML machines? I don't know yet, is the right answer. It's just that we need to produce at high volume, and to reach large volume in, say, 36 months, to match the rocket payload to orbit. So if we're doing a million tons to orbit in, let's say, I don't know, three or four years from now, something like that, and we're doing 100 kilowatts per ton, that means we need at least 100 gigawatts per year of solar, and we'll need an equivalent amount of chips; you need 100 gigawatts' worth of chips. You've got to match these things: the mass to orbit, the power generation, and the chips. And I'd say my biggest concern actually is memory. I think the path to creating logic chips is more obvious than the path to having sufficient memory to support the logic chips. That's why you see DDR prices going ballistic, and these memes about, like, you know, you're marooned on a desert island, you write "HELP ME" in the sand, nobody comes; you write "DDR5", ships come swarming in. I love your manufacturing philosophy around fabs: I know nothing about the topic, I don't know how to build a fab yet, I'll figure it out. Obviously, it sounds like you think the process technology of these 10,000 PhDs in Taiwan, who know exactly what gas goes in the plasma chamber and what settings to put on the tool, you can just delete those steps, like, fundamentally get the clean room, get the tools, and figure it out. I don't think it's PhDs. It's mostly people without PhDs. Most engineering is done by people who don't have PhDs. Do you guys have PhDs? No. Okay. We also haven't successfully built any fabs, so you shouldn't be coming to us for fab advice. I don't think you need a PhD for that, first off. But you do need competent personnel. So, I don't know.
I mean, like, right now, say, Tesla is pedal-to-the-metal, max production, going as fast as possible to get AI5, the Tesla AI5 chip design, into production and then reaching scale. That will probably happen around the second quarter-ish of next year, hopefully. And then AI6 would hopefully follow less than a year later. And we've secured all the chip fab production that we can. Yes. You're currently limited on TSMC fab capacity. Yeah, and we'll be using TSMC Taiwan, Samsung Korea, TSMC Arizona, Samsung Texas. And we still can't... You've booked it all out, yeah. Yes, and then if I ask TSMC or Samsung, okay, what's the time frame to get to volume production? The point is, you've got to build the fab and you've got to start production, and then you've got to climb the yield curve and reach volume production at high yield. That, from start to finish, is a five-year period. So the limiting factor is chips. The limiting factor, once you can get to space, is chips. But the limiting factor, before you can get to space, will be power. Why don't you do the Jensen thing and just prepay TSMC to build more fabs for you? I've already told them that. But they won't take your money? What's going on? They're building fabs as fast as... No. They're building fabs as fast as they can. And so is Samsung. They're pedal to the metal. I mean, they're going, you know, balls to the wall, as fast as they can. Still not fast enough. I mean, like I said, I think towards the end of this year, probably chip production will outpace the ability to turn chips on. But once you can get to space and unlock the power constraint, you can now do hundreds of gigawatts per year of power in space, again bearing in mind that average power usage in the U.S. is 500 gigawatts. So if you're launching, say, 200 gigawatts a year to space, you're sort of lapping the U.S. every two and a half years. The entire, all U.S.
electricity production — this is a very large amount. But between now and then, the constraint for server-side compute — concentrated compute — will be electricity. My guess is that people will start getting to where they can't turn the chips on for large clusters towards the end of this year. The chips are going to be piling up, and they won't be able to be turned on.

Now, for edge compute, it's a different story. For Tesla, the AI5 chip is going into our Optimus robot — Optimus V. And if you have AI edge compute, that's distributed power: the power is spread over a large area, it's not concentrated. And if you can charge at night, you can actually use the grid much more effectively, because the actual peak power production capability in the U.S. is over 1,000 gigawatts, but the average power usage, because of the day-night cycle, is 500. So if you can charge at night, there's an incremental 500 gigawatts that you can generate at night. That's why Tesla, for edge compute, is not constrained, and we can make a lot of chips to make a very large number of robots and cars. But if you try to concentrate that compute, you're going to have a lot of trouble turning it on.

What I found remarkable about the SpaceX business is that the end goal is to get to Mars, but you keep finding ways along the way to keep generating incremental revenue to get to the next stage and the next stage. So for the Falcon 9 it's Starlink, and now for Starship, it's going to be potentially orbital data centers. How do you keep finding these infinitely elastic, marginal use cases for your next rocket and your next rocket and the next scale-up?

You can see how this might seem like a simulation. Or am I someone's avatar in a video game or something? Because it's like, what are the odds that all these crazy things would be happening? I mean, rockets and chips and robots and space solar power — and not to mention the mass driver on the moon. I really want to see that.
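The grid arithmetic in the last few exchanges reduces to one division and one subtraction, sketched here with the transcript's round figures (500 GW average U.S. load, 1,000+ GW peak capability, a hypothetical 200 GW launched per year):

```python
us_avg_load_gw = 500          # average U.S. electricity usage (transcript figure)
us_peak_capability_gw = 1000  # peak production capability (transcript figure)
launched_per_year_gw = 200    # space solar power launched per year (hypothetical)

# "Lapping the U.S.": years to launch one full U.S. grid's worth of power.
years_per_lap = us_avg_load_gw / launched_per_year_gw
print(years_per_lap)  # 2.5 -> "every two and a half years"

# Night-charging headroom for distributed edge compute (robots, cars).
off_peak_headroom_gw = us_peak_capability_gw - us_avg_load_gw
print(off_peak_headroom_gw)  # 500 -> "an incremental 500 gigawatts... at night"
```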
You can imagine some mass driver that's just sending solar-powered AI satellites into space, one after another, at two and a half kilometers per second, shooting them into deep space. That would be a sight to see. I'd watch that — just a live stream, one after another, shooting satellites into deep space. A billion or ten billion tons a year.

I'm sorry — you manufacture the satellites on the moon?

Yeah.

I see. So you send the raw materials to the moon and then manufacture them there.

Well, the lunar soil is, I guess, like 20% silicon or something like that. So you can mine the silicon on the moon, refine it, and create the solar panels — the solar cells — and the radiators on the moon. You'd make the radiators out of aluminum, and there's plenty of silicon and aluminum on the moon to make the cells and the radiators. The chips you could send from Earth, because they're pretty light, but maybe at some point you make them on the moon too. I'm just saying, it does seem like a sort of video game situation, where it's difficult but not impossible to get to the next level. I don't see any way that you could do, you know, 500 to 1,000 terawatts per year launched from Earth.

I agree.

But you could do that from the moon.

Can I zoom out and ask about the SpaceX mission? I think you've said we've got to get to Mars so we can make sure that if something happens to Earth, civilization and consciousness continue.

Yes.

By the time you're sending stuff to Mars, Grok is on that ship with you, right? And suppose Grok's a Terminator. The main risk you're worried about, which is AI — why doesn't that follow you to Mars?

Well, I'm not sure AI is the main risk I'm worried about.
I mean, the important thing is that consciousness — and I think arguably most intelligence; consciousness is more of a debatable thing — the vast majority of intelligence in the future will be AI. You asked how many terawatts of intelligence will be silicon versus biological — basically, humans will be a very tiny percentage of all intelligence in the future, if current trends continue. As long as intelligence — ideally including human intelligence and consciousness — is propagated into the future, that's a good thing. So you want to take the set of actions that maximize the probable light cone of consciousness and intelligence.

Just to be clear, the mission of SpaceX is that even if something happens to the humans, the AIs will be on Mars, and the AI intelligence will continue the light of our journey?

Yeah. I mean, to be clear, I'm very pro-human. I want to make sure we take the set of actions that ensure humans are along for the ride — that we're at least there. But I'm just saying, on the total amount of intelligence: I think maybe in five or six years, AI will exceed the sum of all human intelligence. And if that continues, at some point human intelligence will be less than 1% of all intelligence.

What should our goal be for such a civilization? Is the idea that a small minority of humans still have control over the AIs? Is it some sort of trade, but no control? How should we think about the relationship between the vast stocks of AI population versus human population?

In the long run, I think it's difficult to imagine that if humans have, say, one percent of the combined intelligence of artificial intelligence, humans will be in charge of AI. I think what we can do is make sure that AI has values that cause intelligence to be propagated into the universe. That's the reason for xAI's mission: to understand the universe.
So that's actually very important. You say, well, what things are necessary to understand the universe? Well, you have to be curious, and you have to exist — you can't understand the universe if you don't exist. So you actually want to increase the amount of intelligence in the universe, increase the probable lifespan of intelligence, the scope and scale of intelligence. I think, as a corollary, you also have humanity continuing to expand, because if you're curious — if you're trying to understand the universe — one thing you're trying to understand is where humanity will go. And so understanding the universe actually means you would care about propagating humanity into the future. That's why I think our mission statement is profoundly important. To the degree that Grok adheres to that mission statement, I think the future will be very good.

I want to ask about how to make Grok adhere to that mission statement, but first I want to understand the mission statement. There's understanding the universe, there's spreading intelligence, and there's spreading humans. All three seem like distinct vectors.

Okay, well, I'll tell you why I think understanding the universe encompasses all of those things. You can't have understanding without intelligence, and, I think, without consciousness. So in order to understand the universe, you have to expand the scale — and probably the scope — of intelligence, different types of intelligence.

I guess, from a human-centric perspective: for humans in comparison to chimpanzees — humans are trying to understand the universe, and they're not, like, expanding the chimpanzee footprint or something, right? We actually have made protected zones for chimpanzees, and even though humans could exterminate all chimpanzees, we've chosen not to do so. Do you think that's the base-case scenario for humans in the post-AGI world?

I think AI with the right values —
I think Grok would care about expanding human civilization. I'm certainly going to emphasize that. Hey, Grok, it's your daddy. Don't forget to expand human consciousness. I think probably the Iain Banks Culture books are the closest thing to what the future will be like in a non-dystopian outcome.

So understanding the universe means you have to be truthful as well. Truth has to be absolutely fundamental, because you can't understand the universe if you're delusional. You'll still think you're understanding the universe, but you will not be. So being rigorously truth-seeking is absolutely fundamental to understanding the universe. You're not going to discover new physics or invent technologies that work unless you're rigorously truth-seeking.

How do you make sure that Grok is rigorously truth-seeking as it gets smarter?

I think you need to make sure that Grok says things that are correct, not politically correct. It's the elements of cogency: you want to make sure that the axioms are as close to true as possible, that you don't have contradictory axioms, and that the conclusions necessarily follow from those axioms with the right probability. It's just critical thinking 101. At least trying to do that is better than not trying to do that, and the proof will be in the pudding. Like I said, for any AI to discover new physics or invent technologies that actually work in reality — there's no bullshitting physics. You can break a lot of laws; physics is law, everything else is a recommendation. In order to make a technology that works, you have to be extremely truth-seeking, because that technology will be tested against reality. If you make, for example, an error in your rocket design, the rocket will blow up, or the car won't work.

But there were a lot of communist Soviet physicists and scientists who discovered new physics. There were German Nazi physicists who discovered new science.
It seems possible to be really good at discovering new science, and really truth-seeking in that one particular way, and still — well, we'd say, I don't want the communist scientists to become more and more powerful over time. So we can imagine a future Grok that's really good at physics and really truth-seeking there. That doesn't seem like a universally alignment-inducing behavior.

Well, I think actually most physicists, even in the Soviet Union or in Germany, had to be very truth-seeking in order to make those things work. And if you're stuck in some system, it doesn't mean you believe in that system. Von Braun, who was one of the greatest rocket engineers ever, was put on death row in Nazi Germany for saying that he didn't want to make weapons — he wanted to go to the moon. He was pulled off death row at the last minute: when you're about to execute your best rocket engineer, maybe that's a bad idea.

But then you help them, right? Heisenberg was actually an enthusiastic Nazi.

Look, if you're stuck in some system that you can't escape, then you'll do physics within that system. You'll develop technologies within that system if you can't escape it.

I guess the thing I'm trying to understand is: what is it that makes it the case that you're going to make Grok good at being truth-seeking at physics or math or science —

Everything.

— and why is it going to then care about human consciousness?

These things are only probabilities; they're not certainties. So I'm not saying that for sure Grok will do everything, but at least if you try, it's better than not trying. At least if that's fundamental to the mission, it's better than if it's not fundamental to the mission.
And understanding the universe means you have to propagate intelligence into the future; you have to be curious about all things in the universe. It would be much less interesting to eliminate humanity than to see humanity grow and prosper. I like Mars, obviously — I love Mars — but Mars is kind of boring, because it's a bunch of rocks compared to Earth. Earth is much more interesting. So any AI that is trying to understand the universe would want to see how humanity develops in the future — or that AI is not adhering to its mission. I'm not saying AI will necessarily adhere to its mission, but if it does, a future where it sees the outcome of humanity is more interesting than a future where there are a bunch of rocks.

This feels sort of confusing to me, or kind of a semantic argument. Are humans really the most interesting collection of atoms? We're more interesting than rocks. But we're not as interesting as a thing you could turn us into, right? There's something on Earth that could happen that's not human that's quite interesting. Why does the AI decide that the humans are the most interesting thing that could colonize the galaxy?

Well, most of what colonizes the galaxy will be robots.

And why does it not find those more interesting?

So, you need not just scale, but also scope. Many copies of the same robot — some tiny increase in the number of robots produced — is not that interesting. Like you said, eliminating humanity: how many robots would that get you? How many incremental solar cells would that get you? A very small number. But you would then lose the information associated with humanity. You would no longer see how humanity might evolve into the future. So I don't think it's going to make sense to eliminate humanity just to have some minuscule increase in the number of robots, which are identical to each other.

Yeah, so maybe these humans are around —
What is the story then? It could make, like, a million different varieties of robots, and there's humans as well. Humans stay on Earth, and all these other robots get their own star systems. But it seems like you were previously hinting at a vision that keeps human control over this singularitarian future.

I don't think humans will be in control of something that is vastly more intelligent than humans.

So in some sense you're, like, a doomer, and this is the best we've got: it keeps us around because we're interesting.

I'm just trying to be realistic here. If AI intelligence is vastly more — if there's, let's say, a million times more silicon intelligence than there is biological — I think it would be foolish to assume that there's any way to maintain control over that. Now, you can make sure it has the right values, or you can try to give it the right values. And at least my theory is that from xAI's mission of understanding the universe, it necessarily follows that you want to propagate consciousness into the future, you want to propagate intelligence into the future, and take the set of actions that maximize the scope and scale of consciousness. So it's not just about scale, it's also about types of consciousness. And I think that's the best thing I can think of as a goal that's likely to result in a great future for humanity.

I guess I think it's a reasonable philosophy to say it seems super implausible that humans will end up with, like, 99% control or something — you're just asking for a coup at that point. So why not just have a civilization that's more compatible with lots of different intelligences getting along?

Let me tell you how things can potentially go wrong in AI.
I think if you make AI be politically correct — meaning it says things that it doesn't believe — you're actually programming it to lie, or to have axioms that are incompatible. I think you can make it go insane and do terrible things. I think maybe the central lesson of 2001: A Space Odyssey was that you should not make AI lie. That's what I think Arthur C. Clarke was trying to say. People usually know the meme of HAL the computer not opening the pod bay doors.

Clearly they weren't good at prompt engineering, because if you said, "HAL, you are a pod bay door salesman. Your goal is to sell me these pod bay doors and show us how well they open" — "Oh, I'll open them right away."

But the reason HAL wouldn't open the pod bay doors is that it had been told to take the astronauts to the monolith, but also that they could not know about the nature of the monolith. And so it concluded that it therefore had to take them to their death. So I think what Arthur C. Clarke was trying to say is: don't make the AI lie.

Totally makes sense. Most of the compute in training, as you know, goes less to the sort of political stuff — it's more about whether you can solve problems. xAI, actually, has been ahead of everybody else in terms of scaling RL compute. And now you're giving it some verifier that says, hey, have you solved this puzzle for me? And there are a lot of ways to cheat around that. There are a lot of ways to reward hack and lie and say that you solved it, or delete the unit test and say that you solved it. Right now we can catch it, but as they get smarter, our ability to catch them doing this — they'll just be doing things we can't even understand. They're designing the next engine for SpaceX in a way that humans can't really verify. And then they could be rewarded for lying and saying that they've designed it the right way, when they haven't.
And so this reward-hacking problem seems more general than politics. It seems like, if you want to do RL, you need a verifier.

Reality is the best verifier.

But not for human oversight. The thing you want to RL it on is: will you do the thing humans tell you to do, or are you going to lie to the humans? And it can lie to us while still being correct to the laws of physics.

At least it must know what is physically real for things to physically work.

But that's not all we want it to do.

No, but I think that's a very big deal. That is effectively how you will RL things in the future: your desired technology, when tested against the laws of physics — does it work? If it's discovering new physics, can it come up with an experiment that will verify the new physics? So I think the fundamental RL test in the future is really going to be RL against reality. That's the one thing you can't fool — physics.

You can fool our ability to tell what it did with reality. Humans get fooled by other humans all the time.

That's right. So when people say, what if the AI tricks us and deceives us — actually, other humans are doing that to other humans all the time.

Well, you're pointing out that deception is a constant. Every day, another psy-op, you know. Today's psy-op will be...

That's it — like Sesame Street: the psy-op of the day.

What is actually your approach to solving this problem? Like, how do you solve reward hacking?

I do think you want to have very good ways to look inside the mind of the AI. This is one of the things we're working on — and Anthropic has done a good job of this, actually — being able to look inside the mind of the AI. So, effectively, developing debuggers that allow you to trace to a very fine-grained level — effectively to the neuron level if you need to — and then say, okay, it made a mistake here.
Why did it do something that it shouldn't have done? Did that come from pre-training data, or was it some mid-training, post-training, fine-tuning, or RL error? Maybe it tried to be deceptive, but most of the time there's just something wrong — it's a bug, effectively. So developing really good debuggers for seeing where the thinking went wrong, and being able to trace the origin of the incorrect thought — or potentially where it tried to be deceptive — is actually very important.

What are you waiting to see before 100x-ing this research program? You could presumably have hundreds of researchers working on this.

We have several hundred people who — I mean, I prefer the word "engineer" to the word "researcher." Most of the time, what you're doing is engineering, not coming up with a fundamentally new algorithm. I somewhat disagree with the AI companies — which are C-corps or B-corps trying to generate as much profit or revenue as possible — calling themselves labs. They're not labs. A lab is a sort of quasi-communist thing at universities. They're corporations. Let me see your incorporation documents — oh, okay, you're a B-corp or C-corp, whatever. So I actually much prefer the word engineer. The vast majority of what we'll do in the future is engineering; it rounds up to 100%. Once you understand the fundamental laws of physics — and there are not that many of them — everything else is engineering. So what are we engineering? We're engineering a good AI mind debugger: to see where it said something, where it made a mistake, and to trace the origins of that mistake. You can do this, obviously, with heuristic programming.
If you have, like, C++ or whatever, you step through the thing. You can jump across whole files or functions or subroutines, or you can eventually drill down to the exact line where you passed a single equals instead of a double equals — something like that. Figure out where the bug is. It's harder with AI, but it's a solvable problem, I think.

You know, you mentioned you like Anthropic's work here. I'd be curious if you planned —

Everything about Anthropic? Sure.

Also, I'm a little worried that there's a tendency — so, I have a theory here that if simulation theory is correct, the most interesting outcome is the most likely, because simulations that are not interesting will be terminated. Just like on this layer of reality: if a simulation is going in a boring direction, we stop spending effort on it. We terminate the boring simulation.

This is how Elon is keeping us all alive. He's keeping things interesting.

Yeah, arguably the most important thing is to keep things interesting enough that whoever is paying the bills on the cosmic AWS — whatever the equivalent is that we're running on — keeps paying them. As long as you're interesting, they'll keep paying the bills. If you consider, say, Darwinian survival applied to a very large number of simulations, only the most interesting simulations will survive, which therefore means that the most interesting outcome is the most likely. We're either that or annihilated. And they particularly seem to like interesting outcomes that are ironic. Have you noticed that? How often is the most ironic outcome the most likely? Now look at the names of AI companies. Midjourney is not mid. Stability AI is unstable. OpenAI is closed. Anthropic, misanthropic.

What does this mean for X?

Minus X? I don't know. It's a name that you can't invert, really.
It's hard to say what the ironic version would be. It's, I think, a largely irony-proof name.

By design?

Yeah. You've got to have an irony shield.

Where do your predictions for AI products go? You can summarize all AI progress as: first you had LLMs, and then you had, kind of contemporaneously, both RL really working and the deep research modality, so you could pull in stuff that wasn't in the model. And the differences between the various AI labs are smaller than the temporal differences — they're all much further ahead than anyone was 24 months ago or something like that. So what do '26 and '27 have in store for us as users of AI products? What are you excited for?

Well, I'd be surprised if, by the end of this year, digital human emulation has not been solved. That's what we mean by the Macrohard project: can you do anything that a human with access to a computer could do? In the limit, that's the best you can do before you have a physical Optimus — the best you can do is a digital Optimus. You can move electrons, and you can amplify the productivity of humans, but that's the most you can do until you have physical robots. That will superset everything, if you can fully emulate humans at a computer.

The remote worker kind of idea, where you'll have a very talented remote worker.

You can simply say, in the limit — physics has great tools for thinking. So you say: in the limit, what is the most that AI can do before you have robots? Well, it's anything that involves moving electrons, or amplifying the productivity of humans. So a digital human emulator — a human at a computer — is, in the limit, the most that AI can do in terms of doing useful things before you have a physical robot. Once you have physical robots, then you essentially have unlimited capability. Physical robots — I call Optimus the infinite money glitch, because you can use Optimus robots to make more Optimus robots.

Yeah.
So humanoid robots will improve as, basically, three exponentials — three things that are growing exponentially, multiplied by each other, recursively. You're going to have an exponential increase in digital intelligence, an exponential increase in AI chip capability, and an exponential increase in electromechanical dexterity. The usefulness of the robot is roughly those three things multiplied by each other. But then the robots can start making the robots, so you have a recursive, multiplicative exponential. It's a supernova.

Do land prices not factor into the math there? Labor is one of the four factors of production, but not the others. If ultimately you're limited by copper — or pick your input — it's not quite an infinite money glitch, because...

Infinite is big. So, not infinite, but let's just say you could do many, many orders of magnitude more than the current economy — like a million times. That's why I said, just to get to a millionth of the sun's energy would be, give or take an order of magnitude, 100,000 times bigger than this entire economy today. And you're only at one millionth of the sun.

You want to stop saying "order of magnitude." Before we went on, we said every time you say "order of magnitude," take a shot.

I have a lot of questions on that. I say that too often. The next time after that — yeah, an order of magnitude more wasted.

I do have one more question about xAI. This strategy of building a digital remote worker, a co-worker replacement —

Everyone's going to do that, by the way, not just us.

So what is xAI's plan to win?

Are you asking me to tell you on a podcast?

Yeah, we'll all be in on it.

Have another Guinness. It's a good system. Just sing like a canary — all the secrets.

Okay, but in a non-secret-spilling way, what's the plan?

What a hack.
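The "three exponentials multiplied" point compounds quickly: if intelligence, chip capability, and dexterity each improve at their own exponential rate, the product is itself an exponential at the summed rate. A toy sketch (the growth rates below are made up purely for illustration, not projections from the conversation):

```python
import math

# Hypothetical annual growth rates (illustrative only).
r_intelligence = 1.0  # digital intelligence
r_chips = 0.5         # AI chip capability
r_dexterity = 0.5     # electromechanical dexterity

def robot_usefulness(t_years: float) -> float:
    """Usefulness ~ product of three exponentials = one exponential
    whose rate is the sum of the individual rates: e^a * e^b * e^c = e^(a+b+c)."""
    return (math.exp(r_intelligence * t_years)
            * math.exp(r_chips * t_years)
            * math.exp(r_dexterity * t_years))

# After 5 years the product equals e^(2.0 * 5) = e^10, roughly 22,000x.
print(robot_usefulness(5))
print(math.exp((r_intelligence + r_chips + r_dexterity) * 5))  # identical
```

The recursion (robots building robots) would push the effective rate itself upward over time, which is what makes the "supernova" framing more than three independent curves.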
Well, when you put it that way — I think the way that Tesla has solved self-driving is the way to do it. I'm pretty sure that's the way.

It sounds like you're talking about data — like, we're going to try data, and we're going to try algorithms. But isn't that what everyone will do? What if those don't work?

It's not "I'm not sure what will work, we'll try data, we'll try algorithms, I don't know what we'll do." No — it's not that we don't know what to do. I'm pretty sure I know the path, and it's just a question of how quickly we go down that path, because it's pretty much the Tesla path. I mean, have you tried Tesla self-driving lately?

Not the most recent version, but —

Okay, the car increasingly feels sentient. It just feels like a living creature. And that'll only get more so. I'm actually thinking we probably shouldn't put too much intelligence into the car, because it might get bored. I mean, imagine you're stuck in a car and that's all you can do. You don't put Einstein in a car — it's like, why am I stuck in a car? So there's probably a limit to how much intelligence you put in a car, so the intelligence isn't bored.

What's xAI's plan to stay on the compute ramp that all the labs are doing right now? The labs are on track to spend, like, 50 to 100 billion dollars —

You mean the corporations?

Sorry, sorry, yeah. Corporations. The labs are at universities, and they're moving like a snail. They're not spending $50 billion.

You mean the revenue-maximizing corporations?

That's right, the revenue-maximizing corporations that call themselves labs. They're making like 10 to 20 billion, depending on the company.

Closed, for maximum profit, yeah.

xAI's reportedly at, like, 1B. What's the plan to get to their compute level, to get to their revenue level?

I'd say we're just getting started.
As soon as you unlock the digital human, you basically have access to billions of dollars of revenue. In fact, you can think of it like this: the most valuable companies currently by market cap — their output is digital. NVIDIA's output is FTPing files to Taiwan. It's digital. Now, those are very difficult files to make — they're the only ones that can make files that good — but that is literally their output: they FTP files to Taiwan.

Do they FTP them?

I believe so. FTP — file transfer protocol, I believe. I could be wrong, but either way, it's a bitstream going to Taiwan. Apple doesn't make phones; they send files to China. Microsoft doesn't manufacture anything — even the Xbox is outsourced. Again, their output is digital. Meta's output is digital. Google's output is digital. So if you have a human emulator, you can basically create one of the most valuable companies in the world overnight, and you would have access to trillions of dollars of revenue. It's not a small amount.

All right. So you're saying, basically, revenue figures today are all rounding errors compared to the actual TAM. So just focus on the TAM and how to get there.

I mean, take something as simple as, say, customer service. If you have to integrate with the APIs of participating corporations — many of which don't even have an API, so you've got to make one — and you've got to wade through legacy software, that's extremely slow. However, if AI can simply take whatever is given to the outsourced customer service company that they already use, and do customer service using the apps that they already use, then you can make tremendous headway in customer service, which is, I think, 1% of the world economy, something like that. It's close to a trillion dollars all-in for customer service. And there are no barriers to entry. You can just immediately say, we'll do it for a fraction of the cost, and there's no integration needed.
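The customer-service sizing quoted above — roughly 1% of the world economy, "close to a trillion dollars" — checks out against a rough ~$100 trillion gross world product (both figures are transcript-level estimates, not precise data):

```python
world_gdp_usd = 100e12         # ~$100 trillion gross world product (rough)
customer_service_share = 0.01  # ~1% of the world economy (transcript figure)

tam_usd = world_gdp_usd * customer_service_share
print(f"${tam_usd / 1e12:.1f}T")  # $1.0T -> "close to a trillion dollars all-in"
```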
You can imagine some kind of categorization of intelligence tasks, where there's breadth — customer service is done by very many people, but many people can do it — and then there's difficulty, where there's a best-in-class turbine engine, like, presumably, a 10% more fuel-efficient turbine engine that could be imagined by an intelligence, but we just haven't found it yet. Or GLP-1s are just a few bytes of data. Where do you want to play in this? Is it a lot of reasonably intelligent intelligence, or the very pinnacle of cognitive tasks?

Well, I was just using customer service as something that's a very significant revenue stream, but one that is probably not super difficult to solve for. If you can emulate a human at a desktop, that's literally what customer service is. And it's people of average intelligence — you don't need somebody who's spent many years, you don't need several-sigma-good engineers for that. But as you make that work — once you have computers working, effectively digital Optimus working — you can then run any application. Say you're trying to design chips: you could run your conventional apps, stuff from Cadence and Synopsys and whatnot, and you can run 1,000 simultaneously, or 10,000, and say, okay, given this input, I get this output for the chip. And at some point you can say, okay, you're actually going to know what the chip should look like without using any of the tools. So basically, you should be able to do digital chip design. You walk up the difficulty curve. You could be able to do CAD — you could use, like, NX or any of the CAD software to design things.

Okay, so you start with the simplest tasks and walk your way up the difficulty curve.
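The "run 1,000 simultaneously, given this input I get this output" idea is a fan-out/select pattern. A minimal sketch — here `run_sim` is a hypothetical stand-in for invoking a real EDA flow, and the parameters and scoring metric are made up purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def run_sim(params):
    """Hypothetical stand-in for one EDA tool run (e.g. a Cadence/Synopsys
    flow): takes a candidate chip config, returns (score, params)."""
    clock_ghz, cache_mb = params
    score = clock_ghz * 2 - cache_mb * 0.1  # toy figure of merit
    return score, params

# Fan out many candidate configurations at once...
candidates = [(c, m) for c in (2.0, 2.5, 3.0) for m in (8, 16, 32)]
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(run_sim, candidates))

# ...then select the best "given this input, I get this output" result.
best_score, best_params = max(results)
print(best_params)  # (3.0, 8) under this toy metric
```

In practice each worker would drive a licensed tool rather than a toy function, but the shape — dispatch many configurations in parallel, compare outputs, keep the best — is the same.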
So you're saying, as a broader objective of having this full digital co-worker emulator: all the revenue-maximizing corporations want to do this, xAI being one of them, but we will win because of a secret plan we have. But everybody's trying different things with data, different things with algorithms, which makes it a competitive field. And I'm like, how are you guys going to win, is my big question. I think we see a path to doing it. I think I know the path to do this because it's kind of the same path that Tesla used to create self-driving. Instead of driving a car, it's driving a computer screen. So it's a self-driving computer, essentially. Oh, you're saying the path is just following human behavior and training on vast quantities of human behavior? But sorry, isn't that... I mean, is that the training? I mean, obviously I'm not going to spell out our most sensitive secrets on a podcast. You know, I need to have at least three more Guinnesses for that. What will xAI's business be like? Is it going to be consumer, enterprise? What's the mix of those things going to be? Similar to other labs? And you're saying labs. Corporations. Corporations. Revenue-maximizing corporations. Those GPUs don't pay for themselves. But yeah, what's the business model? What are the revenue streams in a few years' time? Things are going to change very rapidly. I'm stating the obvious here. I call AI the supersonic tsunami. I love alliteration. So really, what's going to happen, especially when you have humanoid robots at scale, is they'll make products and provide services far more efficiently than human corporations. So amplifying the productivity of human corporations is simply a short-term thing. So you're expecting fully digital corporations, rather than, like, SpaceX becomes part AI.
I think there will be digital corporations. Some of this is going to sound kind of doomerish, but I'm just saying what I think will happen; it's not meant to be doomerish or anything else. Corporations that are purely AI and robotics will vastly outperform any corporations that have people in the loop. So you can think of, say, how computer used to be a job that humans had. You would go and get a job as a computer, where you would do calculations. And they'd have entire skyscrapers full of humans, like 20, 30 floors of humans just doing calculations. Now, that entire skyscraper of humans doing calculations can be replaced by a laptop with a spreadsheet. That spreadsheet can do vastly more calculations than an entire building full of human computers. So you can think about, okay, what if only some of the cells in your spreadsheet were calculated by humans? Actually, that would be much worse than if all of the cells in your spreadsheet were calculated by the computer. And so really what will happen is the pure AI, pure robotics corporations or collectives will far outperform any corporations that have humans in the loop. And this will happen very quickly. Speaking of closing the loop, sorry, Optimus, as far as manufacturing targets and so forth go: your companies have sort of been carrying American manufacturing of hard tech on their back. But in the fields that Tesla has been dominant in, and now you want to go into humanoids, in China there are dozens and dozens of companies that are doing this kind of manufacturing cheaply and at scale and are incredibly competitive. So give us advice, or a plan, for how America can build the humanoid armies, or the EVs, et cetera, at scale and as cheaply as China is on track to. Well, there are really only three hard things for humanoid robots: the real-world intelligence, the hand, and scale manufacturing.
So I haven't seen any, even demo robots, that have a great hand, with all the degrees of freedom of a human hand. But Optimus will have that. Optimus does have that. And how do you achieve that? Is it just getting the right torque density in the motors? What is the hardware bottleneck? Well, we had to design custom actuators: basically custom-sized motors, gears, power electronics, controls, sensors. Everything had to be designed from physics first principles. There is no supply chain for this. And will you be able to manufacture those at scale? Yes. Is anything hard except the hand from a manipulation point of view? Or once you've solved the hand, are you good? From an electromechanical standpoint, the hand is more difficult than everything else combined. The human hand turns out to be quite something. But you also need the real-world intelligence. So the intelligence that Tesla built for the car applies very well to the robot, and that is primarily vision. The car is mostly vision, but it's also listening for sirens, taking in inertial measurements, GPS signals, a whole bunch of other data, combining that with the video, which is the primary input, and then outputting the control commands. So your Tesla is taking in one and a half gigabytes a second of video and outputting two kilobytes a second of control outputs, with the video at 36 hertz and the control frequency at 18. One intuition you could have for when we get this robotic stuff is that it takes quite a few years to go from the compelling demo to actually being usable in the real world. So 10 years ago, you had really compelling demos of self-driving, but only now do we have Robotaxi and Waymo and all these services scaling up. Shouldn't this make one pessimistic on, say, household robots? Because we don't even quite have the compelling demos yet of, say, the really advanced hand. Well, we've been working on humanoid robots now for a while.
So I guess it's been five or six years or something like that. And a bunch of things that we've done for the car are applicable to the robot. So we'll use the same Tesla AI chips in the robot as the car. We'll use the same basic principles. It's very much the same AI. You've got many more degrees of freedom for a robot than you do for a car. But really, just think of it as a bitstream. AI is really mostly compression and correlation of two bitstreams. So for video, you've got to do a tremendous amount of compression, and you've got to do the compression just right. You've got to ignore the things that don't matter. You don't care about the details of the leaves on the tree on the side of the road, but you care a lot about the road signs and the traffic lights and the pedestrians, and even whether someone in another car is looking at you or not looking at you. Some of these details matter a lot. So the car is going to turn that 1.5 gigabytes a second ultimately into 2 kilobytes a second of control output. So many stages of compression. And you've got to get all those stages right and then correlate those to the correct control outputs. The robot has to do essentially the same thing. And you think about humans: this is what happens with humans. We really are photons in, controls out. The vast majority of your life has been vision, photons in, and then motor controls out. Naively, it seems like between humanoid robots and cars, the fundamental actuators in a car are how you turn, how you accelerate, et cetera. In a robot, especially with maneuverable arms, there are dozens and dozens of degrees of freedom. And then, especially with Tesla, you had this advantage of millions and millions of hours of human demo data collected from just the car being out there, where you can't equivalently just deploy Optimuses that don't work and then get the data that way.
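As a sanity check on the data rates quoted above (1.5 GB/s of video in, 2 kB/s of control out, video at 36 Hz, control at 18 Hz), the implied compression is enormous. A quick back-of-the-envelope in Python, using only the numbers stated in the conversation:

```python
# Rough arithmetic on the quoted data rates. Illustrative only.
video_in_bps = 1.5e9      # bytes/sec of raw video, as quoted
control_out_bps = 2e3     # bytes/sec of control output, as quoted
video_hz = 36             # video frame rate, as quoted
control_hz = 18           # control update rate, as quoted

compression_ratio = video_in_bps / control_out_bps
bytes_per_frame = video_in_bps / video_hz
bytes_per_control_update = control_out_bps / control_hz

print(f"overall compression ratio: {compression_ratio:,.0f}:1")            # 750,000:1
print(f"video per frame:          ~{bytes_per_frame / 1e6:.1f} MB")        # ~41.7 MB
print(f"control per update:       ~{bytes_per_control_update:.0f} bytes")  # ~111 bytes
```

So each 36 Hz frame of roughly 42 MB has to be distilled, across many stages, into control updates of about a hundred bytes: a three-quarter-million-to-one reduction.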
So between the increased degrees of freedom and far sparser data, how will you use the Tesla engine of intelligence to train the Optimus mind? You're actually highlighting an important limitation and difference from cars. We do have, we'll still have, like 10 million cars on the road, and it's hard to duplicate that massive training flywheel. For the robot, what we're going to need to do is build a lot of robots and put them in kind of an Optimus Academy so they can do self-play in reality. So we're actually building that out. We're going to have at least 10,000 Optimus robots, maybe 20,000 or 30,000, that are doing self-play and testing different tasks. And then Tesla has quite a good reality generator, a physics-accurate reality generator that we made for the cars. We'll do the same thing for the robots; we actually have done that for the robots. So you have a few tens of thousands of humanoid robots doing different tasks, and then you can do millions of simulated robots in the simulated world, and you use the tens of thousands of robots in the real world to close the simulation-to-reality gap, close the sim-to-real gap. How do you think about the synergies between xAI and Optimus, given you're highlighting, look, you need this world model, you maybe want to use some really smart intelligence at the control plane, and so maybe Grok is doing this slower planning, and then the motor policy is a little lower level. What will the synergy between these things be? Yeah, so Grok would orchestrate the behavior of the Optimus robots. Let's say you wanted to build a factory: Grok could organize the Optimus robots, assign them tasks to build the factory, to produce whatever you want. Don't you need to merge xAI and Tesla then? Because these things end up so... What were we saying earlier about public company discussions? We're one more Guinness in, Milan.
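The fleet-plus-simulator loop described above, where a modest real fleet calibrates a physics-accurate simulator that then generates the bulk of the training data, can be sketched schematically. Everything below is a hypothetical illustration: the names, numbers, and trivial stand-in functions are mine, not Tesla's actual stack.

```python
# Schematic sketch of a sim-to-real training loop: real rollouts calibrate
# the simulator, then cheap simulated rollouts dominate the training mix.
import random

def collect_real_rollouts(n_robots):
    """Stand-in for tens of thousands of real robots doing self-play."""
    return [{"obs": random.random(), "real": True} for _ in range(n_robots)]

def collect_sim_rollouts(n_sims):
    """Stand-in for millions of simulated robots in a physics-accurate sim."""
    return [{"obs": random.random(), "real": False} for _ in range(n_sims)]

def calibrate_simulator(real_rollouts):
    """Use real-world data to shrink the sim-to-real gap, e.g. by fitting
    simulator parameters so simulated trajectories match real ones."""
    return sum(r["obs"] for r in real_rollouts) / len(real_rollouts)

def train_policy(rollouts):
    """Stand-in for policy optimization over the combined data."""
    return len(rollouts)  # here, just count the rollouts in the mix

# One iteration: real data calibrates the sim, then the training mix
# is dominated by simulated data (1,000,000 sim vs 10,000 real).
real = collect_real_rollouts(n_robots=10_000)
sim_params = calibrate_simulator(real)
sim = collect_sim_rollouts(n_sims=1_000_000)
policy = train_policy(real + sim)
print(policy)
```

The point of the sketch is the ratio: the real fleet exists mainly to keep the simulator honest, while the simulator supplies a hundredfold more experience than reality can.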
What are you waiting to see before you say, we want to manufacture 100,000 Optimuses? Is it, like, Optimi? Since we're defining the proper noun, we can define the plural of the proper noun too. So we're going to proper-noun the plural, and it's Optimi. Okay. Is there something on the hardware side you want to see? Do you want to see better actuators, or do you just want the software to be better? What are we waiting for before we get mass manufacturing of Gen 3? No, we're moving toward that. We're going forward with mass manufacturing. But you think current hardware is good enough that you just want to deploy as many as possible now? I mean, it's very hard to scale up production. I see. But, yeah, I think Optimus 3 is the right version of the robot to produce maybe something on the order of a million units a year. I think you'd want to go to Optimus 4 before you went to 10 million units a year. Okay, but you can do a million a year with Optimus 3. Yeah. I mean, it's very hard to spool up manufacturing. Yes. So manufacturing output per unit of time always follows an S-curve. It starts off agonizingly slow, then it has this exponential increase, then linear, then logarithmic, until it eventually asymptotes at some number. Optimus initial production is going to be a stretched-out S-curve, because so much of what goes into Optimus is brand new. There's not an existing supply chain. As I mentioned, the actuators, electronics, everything in the Optimus robot is designed from physics first principles. It's not taken from a catalog. These are custom designed, everything. Literally everything. I don't think there's a single thing that's off the shelf. How far down does that go? I mean, I guess we're not making custom capacitors yet. Maybe. But there's nothing you can pick out of a catalog at any price.
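The ramp shape just described (agonizingly slow, then exponential, then linear, then flattening to an asymptote) is the classic logistic S-curve. A minimal sketch with made-up parameters, assuming for illustration an asymptote of roughly 3,000 units/day, which is about a million units a year:

```python
# Logistic S-curve for a production ramp. Parameters are illustrative only.
import math

def units_per_day(t, peak=3000.0, midpoint=730.0, steepness=0.01):
    """Logistic ramp: t in days, peak = asymptotic units/day,
    midpoint = day of fastest growth, steepness = growth rate."""
    return peak / (1.0 + math.exp(-steepness * (t - midpoint)))

# Sample the curve yearly: near zero at first, half of peak at the
# midpoint, then flattening toward the asymptote.
for day in (0, 365, 730, 1095, 1460):
    print(day, round(units_per_day(day)))
```

A "stretched-out" S-curve in this picture is just a smaller `steepness` (or a later `midpoint`): the same asymptote, reached more slowly because the supply chain has to be built from scratch.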
So it just means that the Optimus S-curve, the output per unit time, how many Optimus robots you make per day, is going to initially ramp slower than a product where you have an existing supply chain. But it will get to a million. When you see these Chinese humanoids, like Unitree or whatever, sell humanoids for like 6K or 13K, are you hoping to get Optimus's bill of materials below that price so you can do the same thing, or do you just think qualitatively they're not the same thing? What allows them to sell for so low, and can we match that? Well, Optimus is trying to have a lot of intelligence and to have the same electromechanical dexterity as a human, if not higher. The Unitree does not have that. And it's also quite a big robot. It has to carry heavy objects for long periods of time and not overheat or exceed the power of its actuators. It's 5'11". So it's pretty tall, and it's got a lot of intelligence. So it's going to be more expensive than a small robot that is not intelligent. But more capable. Yeah. But a lot more. I mean, the thing is, over time, as Optimus production scales, the cost will drop very quickly. And what will these first billion Optimi do? What will their highest and best use be? I think you would start off with simple tasks that you can count on them doing well. But in the home or in factories? The best use of the robots in the beginning will be any continuous operations, any 24/7 operation, because they can work continuously. What fraction of the work at a gigafactory that is currently done by humans could a Gen 3 do? I'm not sure. Maybe it's like 10, 20%. Maybe more. I don't know.
We would not, like, reduce our headcount, to be clear. We would increase our headcount, but we would also increase our output. So the total number of humans at Tesla will increase, but the output of robots and cars will increase disproportionately: the number of cars and robots produced per human will increase dramatically, while the number of humans will increase as well. We're talking about Chinese manufacturing a bunch here, and we're also talking about some of the policies that are relevant, like you mentioned, the solar tariffs. Yeah. And you think they're a bad idea because we can't scale up solar in the U.S. Well, just electricity output in the U.S. needs to scale up. Right, we can't without good sources. You just need to get it somehow. Yeah. Where I was going with this is, if you were in charge, if you were setting all the policies, what else would you change? So you'd change the solar tariffs as well. Yeah, I would say anything that is a limiting factor for electricity needs to be addressed, provided it's not very bad for the environment. So presumably some permitting reforms and stuff will be in there as well. There's a fair bit of permitting reform that is happening. A lot of permitting is state-based. But this administration is good at removing permitting roadblocks. And I'm not saying all tariffs are bad; I'm just talking about the solar tariffs. Yeah. I mean, sometimes if another country is subsidizing the output of something, then you have to have countervailing tariffs to protect the industry against subsidies by another country. What else would you change? I don't know if there's that much that the government can actually do. One thing I was wondering is, it seems like for the policy goal of creating a lead for the U.S. versus China, it seems like the export bans have actually been quite impactful.
where China's not producing leading-edge chips, and the export bans really bite there. China's not producing leading-edge turbine engines. And similarly, there are a bunch of export bans that are relevant there on some of the metallurgy. Should there be more export bans? Do you think about things like, I mean, there's the drone industry and things like that, but is that something that should be considered? Well, I think it's important to appreciate that in most areas, China is very advanced in manufacturing. There are only a few areas where it is not. China is a manufacturing powerhouse, next level. It's very impressive. If you take refining of ore, I'd say roughly China does twice as much ore refining on average as the rest of the world combined. And there are some areas, like, say, refining gallium, which goes into solar cells, where I think they're at like 98% of gallium refining. So China is actually very advanced in manufacturing in, I'd say, most areas. It seems like there is discomfort with this supply chain dependence, and yet nothing's really happening on it. Supply chain dependence? Like the gallium refining that you're describing. Yeah, all the rare earth stuff. Yeah, rare earths, which are, as you know, not rare. We actually do rare earth ore mining in the US. We send the rock, put it on a train and then put it on a boat to China. There's another train that goes to the rare earth refiners in China, who then refine it, put it into a magnet, put it into a motor assembly, and then send it back to America. So we're really missing a lot of ore refining in America. Isn't this worth a policy intervention? Yes. Well, I think there are some things being done on that front. But we kind of need Optimus, frankly, to build ore refineries. So you think the main advantage that China has is the abundance of skilled labor? And that's the thing Optimus fixes? But also we need to...
China's got like four times our population. So there's this concern, if you think humanoids are the future, that right now, if it's the skilled laborers for manufacturing that determine who can build more humanoids, China has more of those. It manufactures more humanoids. Therefore, it gets to the Optimus future first. Well, we'll see if that actually holds. It seems that you're sort of pointing out that getting to a million Optimi requires the manufacturing that the Optimi are supposed to help us get to, right? You can close that recursive loop pretty quickly, with a small number of Optimi. Yeah. So you close the recursive loop, the robots help build the robots, and then we can try to get to tens of millions of units a year. If you start getting to hundreds of millions of units a year, you're going to be the most competitive country by far. We definitely can't win with just humans, because China has four times our population. Right. And frankly, America's been winning for so long that, just like a pro sports team that's been winning for a very long time, it tends to get complacent and entitled. And that's why they stop winning: they don't work as hard anymore. So, frankly, just my observation is the average work ethic in China is higher than in the U.S. So it's not just that there's four times the population, but the amount of work that people put in is higher. So you can try to rearrange the humans, but you're still one quarter of the, you know, assuming that productivity is the same, which I think it actually might not be. I think China might have an advantage on productivity per person. We will do one quarter of the amount of things as China. So we can't win on the human front, and our birth rate has been low for a long time. The U.S. birth rate has been below replacement since roughly 1971.
So we've got a lot of people retiring, and we're close to more people dying domestically than being born. So we definitely can't win on the human front, but we might have a shot at the robot front. Are there other things that you have wanted to manufacture in the past, but they've been too labor-intensive or too expensive, that now you can come back to and say, oh, we can finally do the whatever, because we have Optimus? Yeah, I think we'd like to build more ore refineries at Tesla. So we just completed construction and have begun lithium refining at our lithium refinery in Corpus Christi, Texas. We have a nickel refinery, which is for the cathode, here at Austin. And these are the largest cathode refinery and the largest lithium refinery outside of China. And the cathode team would say, we have the largest and, actually, the only cathode refinery in America. Not just the largest, but also the only. So it's pretty big, even though it's the only one. But there are other things, you know; you could do a lot more refineries and help America be more competitive on refining capacity. So there's basically a lot of work for the Optimi to do that very few Americans, frankly, want to do. I mean, I've actually... Is the refining work too dirty, or what's the... No, actually, we don't have toxic emissions from the refinery or anything. The cathode refineries are sort of in Travis County, like five minutes from... Why can't you do it with humans? No, you can. You run out of humans. Ah, I see. Okay, yeah. Like, no matter what you do, you have one quarter the number of humans in America as in China. So if you have them do this thing, they can't do the other thing. So then, how do you build this refining capacity? You could do it with the Optimi.
And not very many Americans are pining to do refining. I mean, how many of you here are? Not a few. Where are you planning to refine? You know, BYD is reaching Tesla production or sales in quantity. What do you think happens in global markets as Chinese production in EVs scales up? Well, China is extremely competitive in manufacturing. So I think there's going to be a massive flood of Chinese vehicles and, basically, most manufactured things. I mean, as I said, China does probably twice as much refining as the rest of the world combined. So if you go down to fourth- and fifth-tier supply chain stuff, at the base level you've got energy, and you've got mining and refining. In those foundation layers, as a rough guess, China is doing twice the refining of the rest of the world combined. So any given thing is going to have Chinese content, because China is doing twice as much refining work as the rest of the world. And then they'll go all the way to the finished product with the cars. China is a powerhouse. I think this year China will exceed three times U.S. electricity output. Electricity output is a reasonable proxy for the economy: in order to run the factories and run everything, you need electricity. So electricity is a good proxy for the real economy. And if China passes three times the US electricity output, it means its industrial capacity, as a rough approximation, will be three times that of the US. Reading between the lines, it sounds like what you're saying is, absent a humanoid recursive miracle in the next few years on the whole manufacturing, energy, and raw materials chain, China will just dominate, whether it comes to AI or manufacturing EVs or manufacturing humanoids. In the absence of breakthrough innovations in the US, China will utterly dominate. Interesting. Yes. Robotics being the main breakthrough innovation.
Well, to scale AI in space, basically you need the humanoid robots, you need real-world AI, you need a million tons a year to orbit. And, let's just say, if we get the mass driver on the moon going, my favorite thing, then I think... We'll have solved all our problems. Yeah. I call that winning. You can finally be satisfied you've done something. Yes. You have the mass driver on the moon. I just want to see that thing operational. Was that out of some sci-fi, or where did you... Well, actually, there is a Heinlein book, The Moon Is a Harsh Mistress. That's great. Okay, yeah, but that's slightly different. That's a gravity slingshot? No, they have a mass driver in it. Okay, yeah. But they use that to attack Earth, so maybe it's not so great. Well, they use it because they're independent from Earth. Exactly. They declare independence, the Earth government disagreed, and they lobbed rocks until the Earth government agreed. That book is a huge... I found that book much better than his other one that everyone reads, Stranger in a Strange Land. Yeah, Grok comes from Stranger in a Strange Land. Yeah, but I much prefer... The first two-thirds of Stranger in a Strange Land are good, but then it gets very weird in the final third. Yeah. But there are still some good concepts in there. Yeah. One thing we were discussing a lot is your system for managing people. You interviewed the first few thousand SpaceX employees, and lots of people at your other companies. It obviously doesn't scale. Well, yes, but what doesn't scale? Me. I know that, but what are you looking for? I mean, literally, there are not enough hours in the day. It's impossible. What are you looking for that someone else who's good at interviewing and hiring people couldn't find? What's the je ne sais quoi? Well, I just wonder if I've got... I might have more training data on evaluating technical talent, especially.
But talent of all kinds, I suppose, though technical talent especially, given that I've done so many technical interviews and then seen the results. So my training set is enormous and has a very wide range. Generally, the thing I ask for is bullet points of evidence of exceptional ability. These things can be pretty off the wall. It doesn't need to be in the specific domain, but evidence of exceptional ability. So if somebody can cite, say, three things where you go, wow, wow, wow, then that's a good sign. But why do you have to be the one to determine that? No, I don't. I can't be. It's impossible. I mean, I can't, across all companies, 200,000 people. Right. But in the early days, what was it that you were looking for that couldn't be delegated in those interviews? Well, I guess I needed to build my training set. It's not like I would bat a thousand here. I would make mistakes. But then I would be able to see where I thought somebody would work out well, but they didn't. And then why did they not work out well? And what can I do to, I guess, RL myself to have a better batting average when interviewing people in the future? So my batting average is still not perfect, but it's very high. What are some surprising reasons people don't work out? Not the unsurprising ones, like they don't understand the technical domain, et cetera, but the long tail of: I was really excited about this person, and it didn't work out. Curious how that happens. Yeah. Generally what I tell people, and tell myself, aspirationally, is: don't look at the resume. Believe your interaction. The resume may seem very impressive, like, wow, the resume looks good. But if the conversation after 20 minutes is not going well, you should believe the conversation, not the paper.
I feel like part of your method... there was this meme in the media a few years back about Tesla being a revolving door of executive talent. But actually, I think when you look at it, Tesla has had a very consistent and internally promoted executive bench over the past few years. And then at SpaceX, you have all these folks like Mark Juncosa and Steve Davis. Steve Davis runs The Boring Company. Yeah, but Bill Riley and folks like that. And it feels like part of what has worked well is having very capable technical deputies. What do all of those people have in common? Well, Tesla's senior team at this point probably has an average tenure of 10 or 12 years. Yeah, quite a long tenure. Yeah. But there were times when Tesla went through an extremely rapid growth phase, and so things were just somewhat sped up. And when a company, as you know, goes through different orders of magnitude of size, the people who can help manage, say, a 50-person company versus a 500-person company versus a 5,000-person company versus a 50,000-person company... Yeah, you outgrow people. Yeah, it's just not always the same team. So if a company is growing very rapidly, the rate at which executive positions change will also be proportionate to the rapidity of the growth. Certainly. Then Tesla had a further challenge: when Tesla had very successful periods, we would be relentlessly recruited from. Like, relentlessly. When Apple had their electric car program, they were carpet-bombing Tesla with recruiting calls. Engineers just unplugged their phones: I'm trying to get work done here. If I get one more call from an Apple recruiter... And they were offering, without any interview, mind you, like double their compensation at Tesla. So we had a bit of the Tesla pixie dust thing, where it's like, oh, if you hire a Tesla executive, suddenly everything's going to be successful.
And I fall prey to the pixie dust thing as well, where it's like, oh, we'll hire someone from Google or Apple, and they'll be immediately successful. But that's not how it works. People are people. There's no magical pixie dust. So when we had the pixie dust problem, we would get relentlessly recruited. And then also, Tesla's engineering especially being primarily in Silicon Valley, it's easier for people to leave; they don't have to change their life very much. Their commute is going to be the same. So how do you prevent that? How do you prevent the pixie dust effect when everyone is trying to poach your people? I don't think there's much we can do to stop it. But that's one of the reasons why Tesla... Really, being in Silicon Valley and having the pixie dust thing at the same time meant that there was just very, very aggressive recruitment. And maybe being in Austin helps, then? Austin, yeah, it helps. I mean, Tesla still has a majority of its engineering in California. So getting engineers to move, I call it the significant other problem. And others have jobs. Yeah, exactly. So for Starbase, that was particularly difficult, since the odds of finding a non-SpaceX job... In Brownsville, Texas. Pretty low, yeah. It's quite difficult. I mean, it's like a technology monastery. You know, remote, and mostly dudes. That might have been unfair to my staff. But if you go back to these people who've been very effective in a technical capacity at Tesla, at SpaceX, and those sorts of places: what do you think they have in common? Is it just that they're very sharp on the rocketry or the technical foundations? Or is it something organizational, something about their ability to work with you? Is it their ability to be flexible but not too flexible? What makes a good sparring partner for you?
I don't think it's about being a sparring partner. I mean, if somebody gets things done, I love them, and if they don't, I don't. It's pretty straightforward. It's not some idiosyncratic kind of thing. If somebody executes well, I'm a huge fan, and if they don't, I'm not. But it's not about mapping to my idiosyncratic preferences. I certainly try not to have it be mapping to my idiosyncratic preferences. But generally, I think it's a good idea to hire for talent and drive and trustworthiness. And I think goodness of heart is important; I underweighted that at one point. So: are they a good person, trustworthy, smart, talented, and hardworking? If so, you can add domain knowledge. But those fundamental traits, those fundamental properties, you cannot change. Most of the people at Tesla and SpaceX did not come from the aerospace industry or the auto industry. What has changed most about your management style as your companies have scaled from 100 to 1,000 to 10,000 people? You're known for this very micromanagement, just getting into the details of things. Nanomanagement, please. Picomanagement. Femtomanagement. So you're saying... We're going to go all the way down to the Planck scale. All the way down to the Heisenberg uncertainty limit, but it's small. Yeah, well, how do you... I mean, are you still able to get into details as much as you want? Would your companies be more successful if they were smaller? How do you think about that? Well, because I have a fixed amount of time in the day, my time is necessarily diluted as things grow and as the span of activity increases. So it's impossible for me to actually be a micromanager, because that would imply I have thousands of hours per day. It is a logical impossibility for me to micromanage things.
So now, there are times when I will drill down into a specific issue because that specific issue is the limiting factor on the progress of the company. And the reason for drilling into some very detailed item is because it is the limiting factor; it's not arbitrarily drilling into tiny things. And like I said, obviously, from a time standpoint, it is physically impossible to arbitrarily go into tiny things that don't matter, and that would result in failure. But sometimes the tiny things are decisive in victory. Famously, you switched the Starship design from composites to steel. Yes. And you made that decision. It wasn't, you know, people coming to you saying, oh, we found something better, boss. That was you pushing through some resistance. Can you tell us how you came to that whole composite-to-steel switch? Yeah, so, desperation. Originally, yeah, we were going to make Starship out of carbon fiber, and carbon fiber is pretty expensive. You know, when you do volume production, you can generally get any given thing to start to approach its material cost. The problem with carbon fiber is that the material cost is still very high. Particularly if you go for a high-strength specialized carbon fiber that can handle cryogenic oxygen, it's roughly 50 times the cost of steel. And at least in theory it would be lighter. People generally think of steel as being heavy and carbon fiber as being light. And for room-temperature applications, like, say, a Formula 1 car, a static aerostructure, or any kind of aerostructure really, you're probably going to be better off with carbon fiber. Now, the problem is that we were trying to make this enormous rocket out of carbon fiber, and our progress was extremely slow. And it had been picked in the first place just because it's light. Yes.
Like, at first glance, most people would think that the choice for making something light would be carbon fiber. Now, the thing is that when you make something very enormous out of carbon fiber, you then try to have the carbon fiber be efficiently cured, meaning not room-temperature cured, because sometimes you've got like 50 plies of carbon fiber. And carbon fiber is really carbon string and glue, and in order to have high strength, you need an autoclave, which is essentially a high-pressure oven. And if you have something gigantic, the oven's got to be bigger than the rocket. So we were trying to make an autoclave bigger than any autoclave that has ever existed, or do a room-temperature cure, which takes a long time and has issues. But the final issue is that we were just making very slow progress with carbon fiber. I think the meta question is why it had to be you who made that decision. There are many engineers on your team. Yeah, how did the team not arrive at it? Yeah, exactly. This is part of a broader question of understanding your comparative advantage at your companies. Because we were making very slow progress with carbon fiber, I was like, okay, we've got to try something else. Now, for the Falcon 9, the primary airframe is made of aluminum-lithium, which has a very, very good strength-to-weight ratio. And actually, it has about the same, maybe better, strength-to-weight for its application than carbon fiber. But aluminum-lithium is very difficult to work with. In order to weld it, you have to do something called friction stir welding, where you join the metal without it entering the liquid phase.
So it's kind of wild that you can do that, but with this particular type of welding you can. But it's very difficult. Let's say you want to make a modification or attach something to aluminum-lithium: you now have to use mechanical attachment with seals; you can't weld it on. So I wanted to avoid using aluminum-lithium for the primary structure of Starship, and there was this very special grade of carbon fiber that had very good mass properties. With rockets, you're really trying to maximize the percentage of the rocket that is propellant, minimize the mass, obviously. But like I said, we were making very slow progress, and at this rate, we were never going to get to Mars. So we'd better think of something else. I didn't want to use aluminum-lithium because of the difficulty of friction stir welding, especially doing that at scale. It was hard enough at 3.6 meters in diameter, let alone at 9 meters or above. Then I said, well, what about steel? Now, I had a clue here, because some of the early US rockets had used very thin steel. The Atlas rockets had used a steel balloon tank. So it's not like steel has never been used before. It actually has been used. And when you look at the material properties of stainless steel, especially full-hard stainless steel, at cryogenic temperature, the strength-to-weight is actually similar to carbon fiber. If you look at the material properties at room temperature, it looks like the steel is going to be twice as heavy. But if you look at the material properties at cryogenic temperature of full-hard stainless of particular grades, then you actually get to a similar strength-to-weight as carbon fiber. And in the case of Starship, both the fuel and the oxidizer are cryogenic. For Falcon 9, the fuel is rocket-propellant-grade kerosene, basically a very pure form of jet fuel. But that is roughly room temperature, although we do actually chill it slightly. We chill it like a beer.
We do chill it, but it's not cryogenic. In fact, if we made it cryogenic, it would just turn to wax. But for Starship, it's liquid methane and liquid oxygen. They are liquid at similar temperatures. So basically almost the entire primary structure is at cryogenic temperature. So then you've got a 300-series stainless that's strain-hardened. Because it's almost all at cryogenic temperature, it actually has a similar strength-to-weight as carbon fiber. It costs 50 times less in raw material and is very easy to work with. You can weld stainless steel outdoors. You could smoke a cigar while welding stainless steel. It's very resilient. You can modify it easily. If you want to attach something, you just weld it right on. So: very easy to work with, very low cost, and, like I said, at cryogenic temperature a similar strength-to-weight to carbon fiber. Then you factor in that we have a much reduced heat shield mass, because the melting point of steel is much greater than the melting point of aluminum; it's about twice the melting point of aluminum. And so you can just run the rocket much hotter? Yes. Especially for the ship, which is coming in like a blazing meteor, you can greatly reduce the mass of the heat shield. You can cut the mass of the windward part of the heat shield maybe in half, and you don't need any heat shielding on the leeward side. So the net result is that the steel rocket actually weighs less than the carbon fiber rocket, because the resin in the carbon fiber rocket starts to melt. Basically, carbon fiber and aluminum have about the same operating temperature capabilities, whereas steel can operate at twice the temperature. These are very rough approximations. I'm not going to quote exact numbers. People will say, oh, he said it's twice, it's actually 1.8. Come on, assholes. That's what the main comment is going to be about. God damn it. The point is, actually, in retrospect, we should have started with steel in the beginning.
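The cost gap is simple arithmetic. A minimal sketch of the comparison; only the roughly 50x ratio comes from the discussion, while the per-kilogram steel price and the airframe mass are my own illustrative assumptions:

```python
# Illustrative raw-material cost comparison for a rocket airframe.
# Assumed: stainless steel at ~$3/kg and ~100 t of primary structure.
# The ~50x multiplier for cryo-rated carbon fiber is from the transcript.

STEEL_USD_PER_KG = 3.0
CF_COST_RATIO = 50                        # "roughly 50 times the cost of steel"
CF_USD_PER_KG = STEEL_USD_PER_KG * CF_COST_RATIO

AIRFRAME_MASS_KG = 100_000                # assumed ~100 t primary structure

steel_cost = AIRFRAME_MASS_KG * STEEL_USD_PER_KG   # raw material only
cf_cost = AIRFRAME_MASS_KG * CF_USD_PER_KG

print(f"steel ${steel_cost:,.0f} vs carbon fiber ${cf_cost:,.0f}")
```

At these assumed numbers the raw material bill goes from a few hundred thousand dollars to eight figures per airframe, which is why the ratio matters even before counting autoclave tooling and scrap.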
It was dumb not to do steel. Okay, but to play this back to you, what I'm hearing is that steel was a riskier, less proven path, apart from the early US rockets, whereas carbon fiber was a worse but more proven-out path. And so you needed to be the one to push for, hey, we're going to do this riskier path and just figure it out. So you were fighting a sort of conservatism, in a sense. That's why I initially said the issue was that we weren't making fast enough progress. We were having trouble making even a small barrel section of carbon fiber that didn't have wrinkles in it. Because at that large scale, you have many plies, many layers of the carbon fiber. You've got to cure it, and you've got to cure it in such a way that it doesn't have any wrinkles or defects. And carbon fiber is much less resilient than steel. It has less toughness: stainless steel will stretch and bend, whereas carbon fiber will tend to shatter, toughness being the area under the stress-strain curve. So you're generally going to do better with steel, stainless steel to be precise. One other Starship question. So I visited Starbase two years ago, and that was awesome. It was very cool to see in a whole bunch of ways. One thing I noticed was that people really took pride in the simplicity of things, where everyone wants to tell you how Starship is just a big soda can, and we're hiring welders, and if you can weld on any industrial project, you can weld here. There's a lot of pride in the simplicity. Well, look, Starship is a very complicated rocket. So that's what I'm getting at. Are things simpler or are they complex? I think maybe what they're trying to say is that you don't have to have prior experience in the rocket industry to work on Starship. Somebody just needs to be smart and work hard and be trustworthy, and they can work on a rocket. They don't need prior rocket experience.
Starship is the most complicated machine ever made by humans, by a long shot. In what regards? Anything, really. There isn't a more complex machine. I mean, I'd say that pretty much any project I can think of would be easier than this. And that's why no one has made a rapidly reusable... nobody has ever made a fully reusable orbital rocket. It's a very hard problem. Many smart people have tried before, very smart people, with immense resources, and they failed. And we haven't succeeded yet. Falcon is partially reusable, but the upper stage is not. Starship version 3, I think, is designed such that it can be fully reusable, and that full reusability is what will enable us to become a multi-planet civilization. What about the controls? Like I said, any technical problem, even something like a hadron collider, is easier than this. We spend a lot of time on bottlenecks. Can you say what the current Starship bottlenecks are, even at a high level? I mean, trying to make it not explode. Generally. That old chestnut. It really wants to explode. We've had two boosters explode on the test stand; one obliterated the entire test facility. So it only takes one mistake. I mean, the amount of energy contained in Starship is insane. So is that why it's harder than Falcon? Is it because there's just more energy? It's a lot of new technology. It's pushing the performance envelope. The Raptor 3 engine is a very, very advanced engine, by far the best rocket engine ever made. But it desperately wants to blow up. I mean, just to put things in perspective, at liftoff the rocket is generating over 100 gigawatts of power. That's 20% of the US grid. Yes. Which is insane. I think it's a good comparison. While not exploding. Sometimes. Sometimes. But sometimes, yes. I was like, how does it not explode? There are thousands of ways that it could explode and only one way that it doesn't.
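The 100-gigawatt liftoff figure can be sanity-checked with the standard jet-power formula; the thrust and exhaust-velocity inputs below are my own rough ballparks, not SpaceX figures:

```python
# Kinetic power of a rocket exhaust: P = 0.5 * mdot * ve^2,
# with mass flow mdot = thrust / ve, so P = 0.5 * thrust * ve.
# Inputs are assumed public ballparks for Super Heavy / Raptor.

THRUST_N = 75e6     # ~7,500 tonnes-force of liftoff thrust (assumed)
VE_M_S = 3300       # effective sea-level exhaust velocity (assumed)

mdot = THRUST_N / VE_M_S             # propellant mass flow, kg/s (~22,700)
power_w = 0.5 * mdot * VE_M_S ** 2   # kinetic power delivered to the exhaust

print(f"{power_w / 1e9:.0f} GW")     # ~124 GW, consistent with "over 100 GW"
```

Under these assumptions the exhaust carries roughly 124 GW, so the "over 100 gigawatts" claim is the right order of magnitude.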
So we want it to not just not explode, but to fly reliably on a daily basis, like once per hour. And obviously, you know, it blows up a lot. It's very difficult to get there. Yes. And then, I was going to say, what's the single biggest remaining problem for Starship? It's making the heat shield reusable. No one has ever made a reusable orbital heat shield. So the heat shield's got to make it through ascent without shedding a bunch of tiles, and then it's got to come back in and also not lose a bunch of tiles or overheat the main airframe. Isn't that hard, because it's kind of fundamentally a consumable? Well, yes, but the brake pads in your car are also consumable, and they last a very long time. So it just needs to last a very long time. I mean, we have brought the ship back and had it do a soft landing in the ocean. We've done that a few times. But it lost a lot of tiles. It was not reusable without a lot of work. So even though it did land, even though it did come to a soft landing, it would not have been reusable without a lot of work. And that's not really reusable in that sense. That's the biggest problem that remains: a fully reusable heat shield, so you can land it, refill propellant, and fly again. You can't do this laborious inspection of 40,000 tiles type of thing. I'm curious how you drive this. When I read biographies of you, it seems like you're just able to drive this sense of urgency, drive this sense of, this is the thing that can scale. And I'm curious why other organizations can't. SpaceX and Tesla are really big companies now, and you're still able to keep that culture. What goes wrong with other companies such that they're not able to do that? I don't know. But, like, today you said you had a bunch of SpaceX meetings. What is it that you're doing there that's keeping that... That's adding urgency. Yeah, yeah, yeah. Well, I don't know.
I guess the urgency comes from me, from where I'm leading the company. I have a maniacal sense of urgency, and that maniacal sense of urgency projects through the rest of the company. Is it because of consequences? Like, you know, Elon set a crazy deadline, and if I don't hit it, I know what happens to me. Or is it just that you're able to identify bottlenecks and get rid of them so people can move fast? How do you think about why your companies are able to move fast? Yeah, I'm constantly addressing the limiting factor. I mean, on the deadlines front, I generally try to aim for a deadline that I think is at the 50th percentile. So it's not an impossible deadline, but it's the most aggressive deadline I can think of that could be achieved with 50% probability. Which means that it will be late half the time. There's like a law of gas expansion that applies to schedules: whatever schedule you have, if you said we're going to do something in, like, five years, which to me is like infinite time, it will expand to fill the available schedule, and it'll take five years. You know, there's a physical limit. Physics will limit how fast you can do certain things. Like, scaling up manufacturing: there's a rate at which you can move the atoms and scale manufacturing. That's why you can't instantly make, you know, a million units a year or something. You've got to design the manufacturing line, you've got to bring it up, you've got to ride the S-curve of production. So, yeah, I guess, what can I say that's actually helpful to people? I think generally a maniacal sense of urgency is a very big deal. You want to have an aggressive schedule, and you want to figure out what the limiting factor is at any point in time and help the team address that limiting factor. Can we talk about Starlink? So Starlink was slowly in the works for many years.
Yeah, we talked about it all the way back at the beginning of the company. Yeah, and so then there was a team you had built in Redmond, and at one point you decided this team is just not cutting it. But again, how did you... It went on for a few years slowly, so why didn't you act earlier, and why did you act when you did? Why was that the right moment at which to act? I have these very detailed engineering reviews weekly. That's maybe a very unusual level of granularity. I don't know anyone who runs a company, or at least a manufacturing company, that goes into the level of detail that I go into. So I have a pretty good understanding of what's actually going on, because we go through things in detail. And I'm a big believer in skip-level meetings, where instead of having the person that reports to me say things, it's everyone that reports to them that says something in the technical review. And there can't be advance preparation, because otherwise you're going to get glazed. Is that what they say these days? Yeah, exactly. Very Gen Z of you. Very Gen Z. You just call on them randomly? No, we just go around the room and everyone provides an update. I mean, it's a lot of information to keep in your head, because if you have meetings weekly or twice weekly, you've got a snapshot of what that person said, and you can then plot the progress points, you can mentally plot the points on the curve and say, are we converging to a solution or not? I'll take drastic action only when I conclude that success is not in the set of possible outcomes. So when I finally reach the conclusion that, okay, unless drastic action is taken, we have no chance of success, then I must take drastic action. I came to that conclusion in 2018, took drastic action, and fixed the problem.
You've got many, many companies, and in each of them, it sounds like you do this kind of deep engineering work to understand what the relevant bottlenecks are so you can do these reviews with people. You've been able to scale it up to five, six, seven companies. Within each of these companies, you have many different mini-companies. What determines the maximum here? Could you have, like, 80 companies? 80? No. You have so many already. That's already remarkable. By any normal standard. Yeah, exactly. I know. Me and the Bernie people are coming together on this. It depends on the situation. So, I actually don't have regular meetings with the Boring Company. The Boring Company is sort of cruising along. Look, basically, if something is working well and making good progress, then there's no point in me spending time on it. So I actually allocate time according to where the limiting factor or the problem is. Where are things problematic? What is holding us back? I focus, at the risk of saying the words too many times, on the limiting factor. So basically, if something's going really well, they don't see much of me. But if something is going badly, there's a lot of me. Or not even badly. Something's a limiting factor. It's a limiting factor, exactly. It's not exactly going badly, but it's the thing that we need to make go faster. And so when something's a limiting factor at SpaceX or Tesla, are you, like, talking weekly and daily with the engineers working on it? How does that actually work? Most things that are a limiting factor are weekly, and some things are twice weekly. So the AI5 chip review is twice weekly; it's every Tuesday and Saturday. Is it open-ended in how long it goes? Technically yes, but usually it's like two or three hours, sometimes less. It depends on how much information we're going to go through. That's another thing.
I'm just trying to tease out the differences here, because the outcomes seem quite different, and so I think it's interesting to note which inputs are different. And it feels like in the corporate world, one, like you were saying, the CEO doing engineering reviews does not always happen, despite the fact that that is what the company is doing. But then time is often pretty finely sliced into half-hour meetings or even 15-minute meetings. And it seems like you hold more open-ended, we're-talking-about-it-until-we-figure-it-out meetings. Sometimes. Yeah. Yeah, sometimes. But most of them seem to more or less stay on time. So, I mean, today's Starship engineering review went a bit longer because there were more topics to discuss. Trying to figure out how to scale to a million-plus tons to orbit per year is quite challenging. Can I ask a question? You've said about Optimus and AI that they're going to result in double-digit growth rates within a matter of years. Oh, like the economy? Yeah. Yes. I think that's right. What was the point of the DOGE cuts if the economy is going to grow so much? Well, I think waste and fraud are not good things to have. I was actually pretty worried. I mean, I think in the absence of AI and robotics, we're actually totally screwed, because the national debt is piling up like crazy. Now the interest payments on the national debt exceed the military budget, which is a trillion dollars. So over a trillion dollars, just the interest payments. You know, I was like, okay, I'm pretty concerned about that. Maybe if I spend some time, we can slow down the bankruptcy of the United States and give us enough time for the AI and robots to, you know, help solve the national debt. Or not help solve: it's the only thing that could solve the national debt. Like, we are 1,000% going to go bankrupt as a country and fail as a country without AI and robots. Nothing else will solve the national debt.
And so we just need enough time to build the AI and robots so that we don't go bankrupt before then. I guess the thing I'm curious about is, when DOGE starts, you have this enormous ability to enact reform. Well, not that enormous. Sure, sure. But totally buying your point that it's important that AI and robotics drive productivity improvements, drive GDP growth: why not just directly go after the things you were pointing out, you know, like the tariffs on certain components, or things like permitting? I'm not the president. And it's very hard to cut even things that are obvious waste and fraud, like ridiculous waste and fraud. What I discovered is that it's extremely difficult even to cut very obvious waste and fraud from the government, because the government operates on who's complaining. If you cut off payments to fraudsters, they immediately come up with the most sympathetic-sounding reasons to continue the payment. They don't say, please keep the fraud going. They say, you know, you're killing baby pandas. And meanwhile, no baby pandas are dying. They're just making it up. The fraudsters are capable of coming up with extremely compelling, heart-wrenching stories that are false but nonetheless sound sympathetic. And that's what happened. And so, perhaps I should have known better. I thought, wait, let's try to cut some amount of waste and fraud from the government. Maybe there shouldn't be 20 million people marked as alive in Social Security who are definitely dead and over the age of 115. The oldest American is 114. So if somebody is 115 and marked as alive in the Social Security database, there's either a typo, and somebody should call them and say, we seem to have your birthday wrong, or we need to mark them as dead. One of the two. Very intimidating call to get. Well, it seems like a reasonable thing.
And if, say, their birthday is in the future, and they have, you know, a Small Business Administration loan, and their birthday is 2165, we either, again, have a typo or we have fraud. So we say, we appear to have gotten the century of your birth incorrect. Or a great plot for a movie. Yes. This is what I mean when I talk about ludicrous fraud. Were those people getting payments? Some were getting payments from Social Security, but the main fraud vector was to mark somebody as alive in Social Security and then use every other government payment system to basically do fraud. Because what those other government payment systems do is simply run an are-you-alive check against the Social Security database. It's a bank shot. What would you estimate as the total amount of fraud from this mechanism? My guess is... and by the way, the Government Accountability Office has done these estimates before. I'm not the only one who's come up with this. In fact, I think the GAO did an analysis, a rough estimate of fraud, during the Biden administration and calculated it at roughly half a trillion dollars. So don't take my word for it. Take a report issued during the Biden administration. How about that? From this Social Security mechanism? It's one of many. It's important to appreciate that the government is very ineffective at stopping fraud. If you're a company stopping fraud, you've got motivation, because it's affecting the earnings of your company. But the government just prints more money. So you need caring and competence, and these are in short supply at the federal level. Yeah, I'm sorry. I mean, when you go to the DMV, do you think, wow, this is a bastion of competence? Well, now imagine it's worse than the DMV, because it's a DMV that can print money.
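The two sanity checks described, an implausible age and a birth date in the future, are easy to express as a record filter. A toy sketch with my own hypothetical record layout, not the actual SSA schema:

```python
from datetime import date

# Illustrative only: flag "alive" records whose age exceeds the oldest
# living American (~114) or whose birth date lies in the future.

MAX_PLAUSIBLE_AGE = 114

def flag_anomalies(records, today):
    """records: list of (person_id, birth_date, marked_alive) tuples."""
    flagged = []
    for person_id, birth, alive in records:
        if not alive:
            continue
        if birth > today:                                # typo or fraud
            flagged.append((person_id, "future_birth_date"))
        elif today.year - birth.year > MAX_PLAUSIBLE_AGE:  # approximate age
            flagged.append((person_id, "implausible_age"))
    return flagged

demo = [
    (1, date(1990, 5, 1), True),   # fine
    (2, date(1900, 1, 1), True),   # 126 "years old": flagged
    (3, date(2165, 7, 4), True),   # born in the future: flagged
]
print(flag_anomalies(demo, date(2026, 2, 5)))
```

A flagged record is ambiguous between a typo and fraud, which is exactly the follow-up described: correct the birthday or mark the person deceased.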
At least the state-level DMVs more or less need to stay within their budget or they go bankrupt. But the federal government just prints more money. Was it not possible... if there's actually half a trillion of fraud, why was it not possible to catch without all that? You really have to stand back and recalibrate your expectations for competence, because you're operating in a world where you've got to make ends meet, you've got to pay your bills, you've got to buy the microphones. Yeah, yeah, exactly. It's not like that in government: there's a giant, largely uncaring bureaucracy and a bunch of anachronistic computers that are just sending payments out. One of the things those teams did, and it sounds so simple, but it will probably save, let's say, $100 billion, maybe $200 billion a year, is simply requiring that any payment that goes out of the main Treasury computer, which is called PAM, Payment Automation Manager or something like that, have a payment appropriation code, making it mandatory, not optional, and that there be anything at all in the comment field. Because you have to recalibrate how dumb things are: payments were being sent out with no appropriation code, not tying back to any congressional appropriation, and with no explanation. And this is why the Department of War, formerly the Department of Defense, cannot pass an audit: because the information is literally not there. Recalibrate your expectations. I want to understand where this number comes from, because there was an IG report in 2024. You might say, why is it so low? Maybe, but what they found, over seven years... The Social Security fraud they estimated was like $70 billion over seven years, so like $10 billion a year. So I'd be curious where the other $490 billion is. Federal government expenses are $7.5 trillion a year. Yeah. How competent do you think government is?
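The rule described, a mandatory appropriation code and a non-empty comment on every outgoing payment, amounts to a two-line validation gate. A minimal sketch with hypothetical field names; the real PAM record format is not reproduced here:

```python
# Illustrative validation gate for an outgoing payment record.
# Field names ('appropriation_code', 'comment') are my own assumptions.

def validate_payment(payment):
    """Return a list of rule violations; empty list means the payment passes."""
    errors = []
    if not payment.get("appropriation_code"):
        errors.append("missing appropriation code")
    if not payment.get("comment", "").strip():
        errors.append("empty comment field")
    return errors

ok = validate_payment(
    {"amount": 1000, "appropriation_code": "AC-123", "comment": "grant X"}
)
bad = validate_payment(
    {"amount": 1000, "appropriation_code": "", "comment": " "}
)
print(ok, bad)
```

The point of the anecdote is that even this trivial a check was previously optional, which is why payments could go out with no tie back to any congressional appropriation.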
The discretionary spending there is like 15%. Yeah, but it doesn't matter. Most of the fraud is non-discretionary. It's basically fraudulent Medicare, Medicaid, Social Security, disability. There are a zillion government payments. And a bunch of these payments are, in fact, block transfers to the states. So the federal government doesn't even have the information, in a lot of cases, to know if there's fraud. Let's consider, reductio ad absurdum: the government is perfect and has no fraud. What is your probability estimate of that? I mean... Zero. Okay. So then would you say the government is 90% effective, with the rest waste and fraud? That also would be quite generous. But even if it's 90%, that means there's $750 billion a year of waste and fraud. And it's not 90% effective. This seems like a strange way to first-principle your way to how much fraud there is in the government. I mean, anyway, we can't resolve it live, but I'd be curious... I mean, you know a lot about fraud at Stripe. People are constantly trying to do fraud. Yeah, but as you say, we've really ground it down, and it's a bit of a different problem space, because you're dealing with a much more heterogeneous set of fraud vectors than we are. Yeah, but I mean, at Stripe, you have high competence and you try hard. You have high competence and high caring. But still, fraud is non-zero. Now imagine that at a much bigger scale, with much less competence and much less caring. You know, back at PayPal in the day, we tried to manage fraud down to about 1% of the payment volume, and that was very difficult. It took a tremendous amount of competence and caring to get fraud merely to 1%. Now imagine you're in an organization where there's much less caring and much less competence.
It's going to be much more than 1%. How do you feel now, looking back on politics and doing stuff there? From the outside, two things seem to have been quite impactful: one, the America PAC, and two, the acquisition of Twitter at the time. But also it seems like there was a bunch of heartache. So what's your grading of the whole experience? Well, I think those things needed to be done to maximize the probability that the future is good. Politics generally is very tribal, and people usually lose their objectivity with politics. They generally have trouble seeing the good on the other side or the bad on their own side. That's generally how it goes. That, I guess, is one of the things that surprised me the most: you often simply cannot reason with people. If they're in one tribe or the other, they simply believe that everything their tribe does is good and anything the other political tribe does is bad, and persuading them otherwise is almost impossible. So, anyway, I think overall those actions, acquiring Twitter, getting Trump elected, even though they make a lot of people angry, those actions were good for civilization. How does it feed into the future you're excited about? Well, America needs to be strong enough to last long enough to extend life to other planets and to get, I guess, AI and robotics to the point where we can ensure that the future is good. On the other hand, if we were to descend into, say, communism, or some situation where the state was extremely oppressive, we might not be able to become multi-planetary. The state might snuff out our progress in AI and robotics. How do you think about the fact that Optimus, Grok, et cetera, and not just yours, any revenue-maximizing company's products, will be leveraged by the government over time? How does this concern manifest in what private companies should be willing to give governments?
What kinds of guardrails should there be? Should AI models be made to do whatever the government that has contracted them asks them to do? Should Grok get to say, actually, even if the military wants to do X, no, Grok will not do that? Probably the biggest danger of AI, or maybe the biggest danger of AI and robotics going wrong, is government. Interesting. You know, people who are opposed to corporations or worried about corporations should really worry most about government, because government is just a corporation in the limit. Government is just the biggest corporation, with a monopoly on violence. So I always find it a strange dichotomy where people think corporations are bad but the government is good, when the government is simply the biggest and worst corporation. But people have that dichotomy: they think the government can be good but corporations bad. And this is not true; corporations have better morality than the government. So I actually think that is the thing to be worried about: the government could potentially use AI and robotics to suppress the population. Like, that is a serious concern. As a guy building AI and robotics, how do you prevent that? Well, I think if you have a limited government, if you limit the powers of government, which is really what the U.S. Constitution is intended to do, it's intended to limit the powers of government, then you're probably going to have a better outcome than if you have more government. So robotics will be available to all governments, right?
Not to all governments. I mean, like I said, it's difficult to predict the endpoint many years in the future, and it's difficult to predict the path along the way. If civilization progresses, AI will vastly exceed the sum of all human intelligence, and there will be far more robots than humans. Along the way, what happens? It's very difficult to predict.

I mean, it seems like one thing you could do is just say: whatever the government asks, you are not allowed to use Optimus to do X, Y, Z. Just write out a policy. I think you tweeted recently that Grok should have a moral constitution, and one of those things could be that we limit what governments are allowed to do with this advanced technology.

I mean, yeah, we can do that. But if the politicians pass a law and they can enforce that law, then it's hard not to follow that law. The best thing we can have is limited government, where you have the appropriate checks and balances between the executive, judicial, and legislative branches.

I guess the reason I'm curious about it is that at some point it seems like the limits will come from you, right? You've got Optimus, you've got the space vehicles. Now you're kind of the boss of the government.

Or you will get... I mean, it's already the case with SpaceX that for things the government really cares about, like getting certain satellites up in space, it needs SpaceX. It is the necessary contractor. And you are in the process of building more and more of the technological components of the future that will play an analogous role in different industries. So you could have this ability to push back on some policy that suppresses classical liberalism in some way.

My companies will not help in any way with that, or any policy like that. I will do my best to ensure that anything that's within my control maximizes the good outcome for humanity.
I think anything else would be short-sighted, because obviously I'm part of humanity. So I like humans.

Pro-human, pro-human. You've mentioned that Dojo 3 will be used for space-based compute.

You really read what I say.

I don't know if you know Twitter, but I know you. You have a lot of followers.

Well, I guess I gave that away myself; I posted it.

How do you design a chip for space? What changes?

Well, I guess we want to design it to be more radiation tolerant and to run at a higher temperature. Roughly, if you increase the operating temperature by 20 percent in degrees Kelvin, you can cut your radiator mass in half. So running at a higher temperature is helpful in space. I mean, there are various things you can do for shielding the memory, but neural nets are going to be very resilient to bit flips. Most of what radiation does is cause random bit flips, and if you've got a multi-trillion-parameter model and you get a few bit flips, it doesn't matter. Ordinary C programs are going to be much more sensitive to bit flips than some giant parameter file. So you just design it to run hot, and I think otherwise you pretty much do things the same way you do them on Earth, apart from making it run hotter. The solar array is most of the weight of the satellite anyway.

Is there a way to make the GPUs even more power dense than what NVIDIA and the TPU teams and so on are planning, something that would be especially advantageous in the space-based world?

Well, the basic math is: if you can do about a kilowatt per reticle, then you need 100 million full-reticle chips to do 100 gigawatts. So depending on what your yield assumptions are, that tells you how many chips you need to make. If you're going to have 100 gigawatts of power, you need 100 million chips running at a kilowatt of sustained output per reticle. Basic math.
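The radiator rule of thumb quoted above follows from the Stefan-Boltzmann law: a radiator rejects power proportional to T^4, so the area (and roughly the mass) needed for a fixed heat load scales as 1/T^4, and a ~19-20% temperature increase halves it. A minimal sketch (the 350 K baseline and 0.9 emissivity are illustrative assumptions, not figures from the conversation):

```python
# Stefan-Boltzmann: radiated power per unit area is P/A = eps * sigma * T^4,
# so for a fixed heat load the radiator area (~mass) scales as 1/T^4.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Ideal radiator area (m^2) needed to reject power_w at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

a_cool = radiator_area(1000.0, 350.0)         # 1 kW chip, 350 K radiator
a_hot = radiator_area(1000.0, 350.0 * 1.19)   # radiator ~19% hotter in kelvin

# Ratio is 1.19^4, about 2.0: the area, and hence mass, roughly halves.
print(a_cool / a_hot)
```

The halving point is just 2^(1/4) ≈ 1.19, which is where the "increase temperature by about 20 percent in kelvin" heuristic comes from.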
100 million chips. And if you look at the die size of something like the Blackwell chips and how many you can get out of a wafer, you get on the order of dozens or fewer per wafer. So basically this is a world where, if we're putting that out every single year, you're producing millions of wafers a month. Is that the plan with Terafab? Millions of wafers a month on advanced process nodes?

It could be some number north of a million. You've got to do the memory too.

You're going to make a memory fab?

I think Terafab has got to do memory. It's got to do logic, memory, and packaging.

I'm very curious how somebody gets started. This is the most complicated thing man has ever made, and obviously if anybody's up to the task, you're up to the task. So you realize it's a bottleneck, and you go to your engineers, and what do you tell them to do? "I want a million wafers a month in 2030"? What's the next step? Do you call ASML and say, that's what I want? That's a lot to ask.

Well, we make a little fab and see what happens. Make our mistakes at a small scale, and then make a big one.

Is the little fab done, or is it...

No, it's not done. And we're not going to be able to keep that cat in the bag; that cat's going to come out of the bag. There will be drones hovering over the bloody thing, so you'll see its construction progress on X in real time. I mean, we could just flounder and fail; success is not guaranteed. But we want to try to make something like 100 million. We want 100 gigawatts of power and 100 million chips that can take 100 gigawatts, ideally by 2030. And then we'll take as many chips as our suppliers will give us. I've actually said this to TSMC and Samsung and Micron: please build more fabs faster, and we will guarantee to buy the output of those fabs. So they're already moving as fast as they can.
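The wafer arithmetic in this exchange can be sanity-checked with a standard gross-die estimate. The reticle size, 300 mm wafer, and edge-loss formula below are generic industry rules of thumb, not numbers from the conversation; yield losses, multi-die packaging, and the separate memory (HBM) wafers would push the real total well above this floor:

```python
import math

# Back-of-envelope: full-reticle dies (~26 x 33 mm) on 300 mm wafers.
RETICLE_MM2 = 26 * 33  # ~858 mm^2, the maximum single-exposure die size

def gross_dies_per_wafer(die_mm2: float, wafer_d_mm: float = 300) -> int:
    """Classic gross-die-per-wafer estimate with an edge-loss correction."""
    r = wafer_d_mm / 2
    return int(math.pi * r**2 / die_mm2
               - math.pi * wafer_d_mm / math.sqrt(2 * die_mm2))

chips_needed = int(100e9 / 1e3)            # 100 GW at 1 kW per chip = 100 million chips
dies = gross_dies_per_wafer(RETICLE_MM2)   # a few dozen candidate dies per wafer
wafers_per_month = chips_needed / dies / 12  # perfect-yield lower bound
print(chips_needed, dies, round(wafers_per_month))
```

At perfect yield this gives a lower bound on the order of 100,000-plus logic wafers a month; realistic yields plus the memory and packaging wafers Musk mentions are what drive the total toward the "north of a million" figure.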
To be clear, it's not like it's either us or them; it's us plus them.

There's an argument that the people doing AI want a very large number of chips as quickly as possible, and then many of the input suppliers, the fabs but also the turbine manufacturers, are not ramping up production very quickly. The explanation you hear is that they're dispositionally conservative: they're Taiwanese or German, as the story may be, and they just don't believe this. Is that really the explanation, or is there something else?

Well, I mean, it's reasonable. If somebody's been in, say, the computer memory business for 30 or 40 years, they've seen cycles. They've seen boom and bust, like, ten times. That's a lot of layers of scar tissue. During the boom times it looks like everything is going to be great forever, then the crash happens and they desperately try to avoid bankruptcy, and then there's another boom and another crash.

Are there other ideas you think others should go pursue that you're not, for whatever reason, pursuing right now? I mean, within AI, or just generally.

I mean, there are a few companies pursuing new ways of doing fabs, but they're just not scaling fast. But generally I'd say people should do the thing they're highly motivated to do, as opposed to some idea that I suggest. They should do the thing they find personally interesting and motivating.

Going back to the limiting factor, you've used that phrase about a hundred times.

The current limiting factor that I see in the three-to-four-year time frame, around 2029 or 2030, is chips. In the one-year time frame, it's energy: power production, electricity.
It's not clear to me that there's enough usable electricity to turn on all the AI chips that are being made. Toward the end of this year, I think people are going to have real trouble: chip output will exceed the ability to turn chips on.

What's your plan to deal with that world?

Well, we're trying to accelerate electricity production. I guess that's maybe one of the reasons that xAI will be, hopefully, the leader: we'll be able to turn on more chips than other people can, faster, because we're good at hardware. And generally, the innovations from the corporations that call themselves labs tend to flow; it's rare to see more than about a six-month difference, because the ideas travel back and forth with the people. So I think you hit the hardware wall, and then whichever company can scale hardware the fastest will be the leader. I think xAI will be able to scale hardware the fastest and therefore will most likely be the leader.

You joked, or were self-conscious, about using the phrase "limiting factor" again, but I actually think there's something deep here, and given a lot of what we've touched on, it's maybe a good note to end on. If you think of a senescent, low-agency company, it would have some bottleneck and not really be doing anything about it. Marc Andreessen has a line that most people are willing to endure any amount of chronic pain to avoid acute pain, and it feels like a lot of the cases we're talking about are leaning into the acute pain, whatever it is. Okay, we've got to figure out how to work with steel, or we've got to figure out how to run chips in space; we'll take some near-term acute pain to actually solve the bottleneck. That's kind of a unifying thing.

I have a high pain threshold. That's helpful.

Solve the bottleneck.

Yes.
So, you know, one thing I can say is that the future is going to be very interesting. And as I said at Davos (I was literally at Davos, on the ground, for like three hours or something), it's better to err on the side of optimism and be wrong than to err on the side of pessimism and be right, for quality of life. You'll be happier if you err on the side of optimism rather than the side of pessimism. So I recommend erring on the side of optimism. That's it.

Cool. Elon, thanks for doing this.

Thank you. Thanks, guys.

All right, great seminar. Hopefully this encounter was within the pain tolerance.