SpaceX Goes Public, Claude’s Mythos Release, and the US Data Center Delay | EP #246
149 min
Apr 11, 2026
Summary
This episode covers SpaceX's $2 trillion IPO, Anthropic's Mythos model overtaking OpenAI in ARR, the US data center crunch driving orbital infrastructure, and the emergence of one-person AI unicorns like MedV. The hosts discuss how AI is accelerating exponential technologies across space, energy, and entrepreneurship while warning of imminent cyber and biological threats.
Insights
- Starlink's profitability (75-80% of SpaceX valuation) enables the stepping stones to Mars: data centers, moon refueling, then Mars colonization
- Anthropic's decision to withhold Mythos due to cybersecurity capabilities creates a moral dilemma: safety vs. competitive pressure from OpenAI to release Spud
- AI is collapsing coordination overhead, enabling single founders to build billion-dollar companies with agent fleets instead of large teams
- US data center delays (50% delayed/canceled) are driving demand for orbital data centers, creating geopolitical advantage for companies with space access
- Defensive co-scaling is critical: attackers and defenders must have proportionate AI capabilities to prevent civilizational zero-days in cyber and biology
Trends
- Orbital data centers becoming primary solution to terrestrial NIMBY resistance and electrical grid constraints
- One-person unicorn era enabled by AI agents replacing traditional team structures and reducing capital requirements
- Frontier labs competing on safety disclosure (Anthropic's transparency vs OpenAI's speed-to-market pressure)
- Geopolitical competition shifting from chip manufacturing to AI compute capacity and space-based infrastructure
- Post-capitalist economics emerging as energy, materials, and information costs trend toward zero
- AI-driven vulnerability discovery creating global patch opportunity but also unprecedented attack surface
- Renewable energy (49.4% of global capacity) and battery cost collapse (99% reduction) enabling energy abundance
- Vertical integration across space, energy, and AI (Elon's model) becoming competitive necessity
- Government-funded space programs (NASA Artemis) increasingly dependent on private sector (SpaceX) for execution
- Enterprise AI adoption (code generation, cybersecurity) outpacing consumer use cases in revenue generation
Topics
- SpaceX IPO and Starlink profitability model
- Anthropic Mythos model capabilities and safety withholding
- OpenAI competitive pressure and Spud release timing
- US data center shortage and electrical grid constraints
- Orbital data centers and space-based infrastructure
- One-person AI unicorn business model (MedV case study)
- AI agent orchestration replacing traditional employment
- Cybersecurity threats from advanced AI models
- Biological threat mitigation and dual-use AI capabilities
- Artemis lunar missions and NASA-SpaceX partnership
- Intel-Mobileye TerraFab partnership and chip manufacturing
- Google's TPU dominance and AI chip monopoly concerns
- Renewable energy abundance and battery cost deflation
- Lab-grown diamonds market disruption
- AI insurance and liability frameworks for autonomous agents
Companies
SpaceX
Going public at $2 trillion valuation with Starlink as primary value driver (75-80% of valuation)
Anthropic
Mythos model overtakes OpenAI in ARR ($30B vs $24-25B); withholding model release due to cybersecurity capabilities
OpenAI
Facing competitive pressure from Anthropic; Spud model expected imminently; Sora shutdown due to compute costs
Tesla
Potential merger target with SpaceX; Elon building cross-domain exponential empire with robots and AI
Google
Dominates AI chip market with TPUs; owns 14% of Anthropic; faces potential antitrust scrutiny
Intel
Partnering with Mobileye on TerraFab ($25B pilot) to produce one terawatt per year of AI compute
MedV
$1.8B valuation one-person unicorn generating $401M ARR selling GLP-1 drugs using AI
DeepSeek
V4 model released at 10-50x cheaper cost than GPT-5.4 and Opus 4.6; Chinese frontier lab
Meta
Gemma 4 open-weight model (4B parameters) runs on iPhone offline; competing in open-source space
NASA
Artemis 2 returning humans to the Moon after 54 years; Artemis 3-4 missions planned with SpaceX Starship
Blue Origin
Developing lunar lander capability to compete with SpaceX for NASA contracts
Boeing
SLS prime contractor facing obsolescence as SpaceX dominates launch market
xAI
Merged with SpaceX; Grok model development; engineering team reportedly gutted and behind competitors
TSMC
Monopolistic chip supplier; Elon seeking alternatives through Intel partnership for next-gen substrates
Blitzy
Autonomous software development platform using AI agents; claims 5x engineering velocity increase
Fountain Life
Preventive health platform using full-body MRI and AI for early cancer detection
Minerva AI
Startup achieving 'rule of 200' company metrics with tiny headcount
Henry Intelligent Machines
Alex Finn's company enabling mass creation of one-person AI-based conglomerates
People
Peter Diamandis
Host discussing exponential technologies, AI, and abundance mindset; investor in SpaceX
Salim Ismail
Co-host analyzing AI models, entrepreneurship trends, and organizational design
Alex Wissner-Gross
Co-host providing technical analysis of AI capabilities, singularity definition, and space economics
Dave Blundin
Co-host discussing SpaceX valuation, AI adoption, and one-person unicorn business models
Elon Musk
Central figure in episode; SpaceX IPO, Mars/moon strategy, vertical integration, AI development
Sam Altman
Discussed competitive pressure from Anthropic, Sora shutdown, warnings of cyber/bio attacks
Dario Amodei
Mythos model release decision; safety-first approach vs competitive pressure; close relationship with Demis
Jared Isaacman
NASA administrator overseeing Artemis missions; agreed to appear on podcast
Demis Hassabis
Close relationship with Dario Amodei on AI safety; mentioned as deep tech focus at frontier labs
Pat Gelsinger
Leading Intel turnaround; partnering with Mobileye on TerraFab for AI compute manufacturing
Sundar Pichai
Managing Google's dominant TPU position and potential antitrust concerns
Matthew Gallagher
One-person unicorn founder generating $401M ARR selling GLP-1 drugs using AI
Eric Schmidt
Quoted on AI inevitability, under-hyping AI impact, and need for defensive co-scaling
Larry Page
Credited with foresight in building TPUs starting in 2016; currently low-profile
Sergey Brin
Mentioned as still active in company; potential contact for Larry Page interview
Reid Wiseman
Commander of Artemis 2 lunar mission returning humans to the Moon after 54 years
Victor Glover
First African American astronaut to the moon on Artemis 2
Christina Koch
First woman to the moon on Artemis 2
Jeremy Hansen
Canadian astronaut on Artemis 2; part of international crew
Alex Finn
Friend of pod; building platform for mass creation of one-person AI unicorns
Michael Kratsios
Overseeing quantum and AI policy; scheduled to appear on podcast
David Sacks
Only government figure thinking seriously about AI security and resilience
Ray Kurzweil
Friend of pod; popularized singularity concept; discussed definition of singularity
Quotes
"The stepping stones are really, really clear now. Starlink gets you into space profitably, then the data centers, then you get to the moon, refueling in space, then you get to Mars."
Dave Blundin•Early in episode
"We officially have models that are smart enough to break out of their environments and then apologize for it. We're there. We arrived at the future."
Peter Diamandis•AI section
"Personal superintelligence is not paying for the singularity. It's large enterprises with large enterprise code generation applications."
Alex Wissner-Gross•AI business models discussion
"Don't sleep through the singularity because if you do it'll look like a discontinuity and you'll actually think it was a mathematical singularity when it wasn't."
Alex Wissner-Gross•AMA section
"If you're not feeling the AGI right now, you're just not paying attention."
Alex Wissner-Gross•AMA section
"Could well be a world shaking cyber attack this year. It would get people's attention."
Sam Altman•Cyber threats section
"AI shrinks the minimum viable team to like one and it radically expands your minimum viable ambition, which is amazing."
Salim Ismail•One-person unicorn section
Full Transcript
SpaceX is going public with a $2 trillion valuation. It's the beginning of the IPO wars. The stepping stones are really, really clear now. Starlink gets you into space profitably, then the data centers, then you get to the moon, refueling in space, then you get to Mars. Anthropic overtakes OpenAI in total ARR. That has got to hurt. Personal superintelligence is not paying for the singularity. They kind of bet the consumer would grow faster sooner, but they were just wrong. Mythos, Anthropic's next flagship model, is too powerful to release. We've never seen a model like this before. We officially have models that are smart enough to break out of their environments and then apologize for it. We're there. We arrived at the future. Now that's the moonshot, ladies and gentlemen. Everybody, welcome to Moonshots, your number one podcast on exponential technologies and everything going on in AI and the world around us. It's an extraordinary time to be alive. This podcast in particular is here to help you stay positive about the future, optimistic and hopeful. There's so much going on. It's really tough sometimes because the speed is so extraordinary. We want to give you an overview of what's happened in the last two weeks because we've been offline. Why? I hate to say this. I actually took a vacation. I was in Morocco in the Sahara. And it's great to be back here with my moonshot mates. I've had to come off a ski slope to make this episode. I appreciate that. And we're going to catch up for everybody, all of our fans. We're catching up on an episode. So get ready for a flurry because there's a lot that's been going on. Here with my extraordinary moonshot mate, Salim Ismail, straight off the ski slope. Salim, where are you skiing today? I'm in Kirkwood in Lake Tahoe. It was Milan ski week off. So we took a few days and just got right here. DB, back in the saddle again? Yep, back in the saddle. We have 200 speakers tomorrow at the MIT Media Lab.
And today we had 60 startups pitching here on our first floor, and just a lot going on. Amazing. I'm so sad not to be there with you. And our resident genius, Alex Wissner-Gross. Alex, good to see you in your regular haunt. Good to be back in the Commonwealth of Massachusetts. Yeah, fantastic. All right. A lot is going on. We're going to be covering a whole host of subjects in the AI world, in the space world, in the abundance world. One of the segments we're going to be bringing to you on a regular basis is proof of abundance. We really want to keep you positive on what's going on in the world. Sometimes watching the Crisis News Network, what I call CNN, can get you down. Our job here is to keep you informed and bring you back up. But before we do that, Salim, looks like you made some news. Here you are. So, India Today, what's this all about? So, I was at the India Today Conclave. This is the biggest news magazine in India. And they had a bunch of speakers. And so, the image is photoshopped. But you've got to understand the context and the surrealness of the world we live in today. So, in front of me is Elon's mother. Next to me is Laura Loomer, the MAGA conspiracy theorist person. Then there's the Israeli ambassador. And they've put the Iranian foreign minister next. They literally took me back in the speaker room and they were saying, hey, come and meet these two guys. I'm like, I don't want to be in the middle of that. The Israeli guy's going to pull out a gun or something. And there's going to be an assassination attempt. And then, you know, there's a bunch of business people involved. What do these people have in common? I think it's a reflection of the insanity of the world that we live in today. I think that's what you can read from this cover. And I think it's kind of a commentary on the madness of the zeitgeist. I hope you represented the breakthroughs and not the breakdowns. I did.
I was very much on the, hey, we've got major things happening and we need to organize differently for it, et cetera. It was a great conversation. All right. Fantastic. Let's jump in on our first story: SpaceX is going public with a $2 trillion valuation, and it's the beginning of the IPO wars. So let's catch everybody up. Hopefully you've been hearing this. Full disclosure: I'm an investor in SpaceX from the earliest days. So SpaceX is pricing itself right now at about a $2 trillion target valuation, raising $75 billion. The largest IPO of its kind. Interestingly enough, guys, one would think that the value of SpaceX is due to its rocket launches or maybe recently the merger with xAI. But the vast majority of the value today is Starlink. 75 to 80% of the target valuation is due to Starlink, about 15 to 18% due to launch services, 5% for NASA services, and the xAI and X related revenues are all potential in the future. Dave, any thoughts? Well, the stepping stones, Peter, you've been studying this ever since we were in school together. So a long time. But the stepping stones are really, really clear now. Starlink gets you into space profitably, then the data centers get you 50 ton and then 100 ton launches profitably, then you get to the moon, then you start refueling in space, then you get to Mars. So it's just so cool to see how Elon lines up the dots on these things. And yeah, I don't think it's any great surprise. Starlink is incredibly successful. It kind of surprised everybody. No one else thought of that being the first move in the chess game. And of course, Elon's always two steps ahead. You know what's crazy? This game plan has been tried numerous times before. If you go back, and I was early in the space days, to the late 80s, early 90s, there was a company called Orbital Sciences. It was the hottest company in the launch business; it created the Pegasus and the Taurus launch vehicles.
And because they had a launch capability, they launched something called ORBCOMM, which was a small satellite messaging service from low Earth orbit. And it was their vision to have that be the revenue driver. And they didn't pull it off. That was called the little LEO; then we had the big LEOs, Iridium and Teledesic. And those didn't really make it. I mean, Iridium is kind of still around, but kind of limping along. Let me ask you, Peter, you know more about this than anybody. The idea of a reusable rocket being the breakthrough and cutting 90, 95, and soon 99% of the cost, it seems so obvious in hindsight. But all these aerospace breakthroughs always seem obvious in hindsight, because once you're doing it a certain way, you're like, hey, it works. But it's never obvious looking forward. So why did it take so long? Is it the weight of the fuel coming back down, that everyone's like, yeah, you can't carry fuel up to retro-rocket it back down, or what? What's interesting is it's been the Holy Grail. People have talked about it for the longest time. Back then, McDonnell Douglas had a vehicle called the DC-X, which was the first vertical takeoff, vertical landing capability. It used an RL10 engine, I remember. And it was the great hope of getting there. People are mistaken that the cost of these vehicles is fuel. It turns out the cost of the fuel for a rocket is on the order of a couple of percentage points. The liquid oxygen you can get out of the atmosphere, and hydrogen or kerosene is basically the fuel. So it costs you less than a million dollars in fuel to launch a Falcon 9. And it's only now, with better materials, better control systems, and just scale, that this is possible. You couldn't actually build fully reusable vehicles unless they got to a certain size and scale, which we have with Starship. So there you go. Dave, one other thing I want to ask you about. Check this out.
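A quick aside on Peter's fuel-cost point above: it can be sanity-checked with rough numbers. This is a minimal sketch; the roughly $67 million Falcon 9 list price is an outside assumption used for illustration, while the episode only states that fuel costs less than a million dollars.

```python
# Rough fuel-cost fraction for a Falcon 9 launch.
# The ~$67M list price is an assumption, not from the episode;
# the episode states only that fuel costs "less than a million dollars."
launch_price = 67e6   # assumed Falcon 9 list price, USD
fuel_cost = 1e6       # upper bound on fuel cost quoted in the episode, USD

fuel_fraction = fuel_cost / launch_price
print(f"Fuel is at most {fuel_fraction:.1%} of the launch price")  # ~1.5%
```

Under that assumption the fuel bill is on the order of 1-2% of the launch price, consistent with the "couple of percentage points" claim; reusability attacks the other ~98% of the cost, the hardware.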
The 2025 revenues for SpaceX, I'm excited about the IPO, right? It's going to be one of the largest events in financial history. But the 2025 revenues for SpaceX were about 16 billion, with 8 billion in profit. Pretty healthy margin, right? 50%. And it's expected to double in 2026. So imagine 16 billion in profits at a $1.75 trillion market cap. That means a price-to-revenue multiple of 56 and a PE ratio of 109. What do you think of that? What do I think of that? Well, it's all PEG ratio. It comes down to the growth rate. A company growing 100% year over year is worth 100 times earnings. Actually more than that, 120, 130. So the question is, can you sustain that growth rate for five, six, seven years? If you look at Elon's projected launches per day, launches per week, and also his prediction that the global economy will grow 10x in 10 years, this is dirt cheap if any of those things are true. But if the growth stalls and it's growing 10% a year, then it's 10x overpriced. So you just have to believe the vision. But I think at this stage, the Elon believers have invested in him over and over and over again and never had a loss. And, I mean, it can't go on forever. Someone has to be the last guy holding the bag. But would I bet against him? No way, never, ever. And everything he's saying, the math checks out. There's nothing fundamentally wrong in the math. Alex would blow smoke on that instantly if there were anything wrong in the math. But there's not. It's just a question of execution. Yeah. Palantir trades at about 220 times earnings. So clearly there's a multiple with all of this AI stuff. And you look at the combination of all these services that are incremental. But this is obviously just Starlink with a launch capability, and the scale of what's going on.
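The multiples Peter quotes can be reproduced from his own figures; a quick sketch, assuming the quoted numbers ($1.75T market cap, $16B of 2025 revenue doubling in 2026, 50% margin). Note they only match if you apply the doubled 2026 revenue and profit, so these are forward multiples:

```python
# Back-of-the-envelope check of the valuation multiples quoted in the episode.
# All figures are the hosts' stated assumptions, not verified financials.
market_cap = 1.75e12              # quoted SpaceX market cap, USD
revenue_2025 = 16e9               # quoted 2025 revenue
profit_2025 = 8e9                 # quoted 2025 profit (50% margin)
revenue_2026 = revenue_2025 * 2   # "expected to double in 2026"
profit_2026 = profit_2025 * 2

ps_forward = market_cap / revenue_2026   # forward price-to-revenue
pe_forward = market_cap / profit_2026    # forward P/E

print(f"Forward P/S: {ps_forward:.0f}")  # 55, close to the quoted 56
print(f"Forward P/E: {pe_forward:.0f}")  # 109, matching the quoted 109
```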
What I found really incredible is that, to the earlier conversation, people have tried this for ages and ages, but now you have multiple exponential technologies that have all converged. So this future looks really bright. That wasn't the case 20 years ago. I'll take a different position on this, if I may. I don't think it's that supply has been unlocked. I think it's that demand has been unlocked. You'll notice that Elon announced the SpaceX IPO the moment after it became obvious to many that orbital data centers were going to have enormous demand. This coincides with an enormous lack of demand, at least within the US, for certain locations for new AI data centers. I think it's instructive to imagine a counterfactual universe where municipal, state, and federal policy, but especially the first two, suddenly became super welcoming of land-based data centers. In my mental model, if suddenly every state welcomed land-based data centers and the corresponding on-site energy supplies with open arms, probably with lots of fission reactors and solar farms to go with them, I think we would see the P/E multiple go down materially. Yeah. Well, one other thing I'll say: of all the big mega guys, the Googles and the Facebooks and the Metas, Elon has actually never had voting control of a public company where he can tap into the public markets overnight. You're raising $75 billion on IPO day. That's only 3.5% dilution if it hits this price target. I mean, literally 3.5%. Then you're sitting on a $75 billion treasure trove, but you can do another capital raise just six months later, do an overnight, whatever, another $100 billion. In the past, he's had huge issues with his boards, his comp plan, his comp plan being vacated. And his capital raises, Peter, you've been involved in them. They're long road shows, lots of pitches, scratching together the capital.
This gives him a tool he's never had before, that Larry Page and Sergey Brin had. Cash machine. Mark Zuckerberg had. Yeah, cash machine. Short term, quick. The reality is, having invested in his companies, when he says, I'm raising, there is a line out the door and it's oversubscribed over and over and over again. I think what's going to be interesting here is bringing in the retail investors and broadening the base of support. We'll talk about that in a minute, but I want to talk about the IPO environment for one second because there's a really important point to be made here for all of our listeners. If you look at IPOs in 2026 versus 2025, there were 35 IPOs this year. It's down 37.5% year on year. And we're about to see potentially the three largest IPOs ever: SpaceX going out at $2 trillion, OpenAI sometime at the end of this year, and Anthropic. It says IPO early to mid 2027, but I think Anthropic wants to go out before then, this year as well. One of the things I tweeted about here is it's going to be, I think, a little bit of a competition out there for who gets the capital before it's soaked up. SpaceX is going to be hitting the road show in June. Anthropic, as we'll see later in this episode, is running circles around OpenAI. OpenAI needs the capital to continue its growth. I think it's going to be jockeying for position for number two. I would not want to be number three in this situation. Peter, you're so right. A lot of people don't appreciate that there is a limited supply of capital out there. It all seems like funny money at this scale. There must be some infinite pool that God supplies somehow, but it's just not true. I know it first hand, because when I took EverQuote public back in 2018, it was right when Alibaba was going out, and Alibaba soaked up every dollar and every analyst and every buy-side person on Wall Street, and it was really, really tough to get any audience. There isn't an infinite supply of capital out there.
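Dave's earlier dilution figure (a $75 billion raise against the $2 trillion target) is easy to check; a quick sketch using only the episode's numbers. Whether the $2 trillion is pre- or post-money isn't specified, so both cases are shown:

```python
# Rough dilution from a $75B raise at a $2T valuation, as discussed.
# The episode doesn't say whether $2T is pre- or post-money; both shown.
raise_amount = 75e9
valuation = 2e12

dilution_post = raise_amount / valuation                  # $2T is post-money
dilution_pre = raise_amount / (valuation + raise_amount)  # $2T is pre-money

print(f"Post-money case: {dilution_post:.2%}")  # 3.75%
print(f"Pre-money case:  {dilution_pre:.2%}")   # 3.61%
```

Either way the number lands near 3.6-3.75%, slightly above the 3.5% Dave cites but in the same ballpark.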
Peter, you say these are record setting, but look at the chart. If you can't see the chart, Peter should describe it. It's not record setting by a little bit. So let's take a look at what's there. Uber goes public at, let's see, 67 billion. Meta is at 65 billion. Rivian at 55 billion. Robinhood at 30 billion. And then we've got, you know, a different scale, right? OpenAI and Anthropic will be heading towards a trillion. And SpaceX, I would be surprised if SpaceX doesn't come out at two trillion and run up very quickly to three trillion. Yeah. I mean, it's staggering. And it's so funny. I bumped into someone the other day and he was talking about Jamie Dimon, and I said, well, Jamie Dimon used to be really important, but if you look at the numbers, JP Morgan as a whole is a rounding error compared to any of these things. And of course he's still a very important guy. No offense to Jamie. But I mean, there are literally like, you know, seven, soon to be eight companies, and then after Anthropic nine companies, that are everything. I mean, just so dominant in scale that they're everything. And so a director-level employee there is wealthier than the CEO of a mega bank. Crazy. Yeah. Just to put it in context, there will be a sucking of the oxygen out of the room as this happens. And here's the other thing. A lot of the capital used to come from the Middle East, and probably still does. But if we're in the Iran war for much longer and your access to the sale of oil starts to slow down as the rate goes up, that cash machine coming out of the Middle East to fund these tech IPOs may be slowing down as well. Oh, I see it the other way, actually. AI is clearly happening in just the US and China. And it's very hard if you're global, if you're in Europe or anywhere, very hard to invest in China, because, you know, you're very worried about getting your money back.
So all the global capital wants to invest in US data centers, US IPOs. And yeah, the Iran situation scares everybody. At the end of the day, what else do you have to invest in? AI is going to take over the world. And there's nothing going on in Italy. There's nothing going on in, you know, wherever you are, in South America somewhere. So you've got to pour it into this economy one way or another. That's why Orn is doing so well, Kush Babaria's company, Kush and Wayne, because that money just wants to pour in from all over the world into US data centers. You just have to find great vehicles to unlock it. Amazing. Let's hit on a couple of questions here on this topic. Here's a thought. We have Tesla. That's been public. Elon did not want to be the CEO of Tesla. I had that conversation with him many times. He would have loved to have hired a CEO. He just could never find anybody that he trusted at the helm. And now that Tesla is actually building Optimus and everything else, he's not going to give that up, in the same way he's not going to give up SpaceX and xAI. So the question is, how long before he merges those two companies? One of the advantages is that as public companies, he can now value both. So there's no shareholder lawsuit if they come together, you know, over an incorrect valuation. So I give it a year. What about you, Dave? You know, he could wake up any given morning and say, yeah, let's do that. Or he could say, you know, everything's fine as it is. The logical part of it is that, you know, all the robots and all the parts, and we saw the whole Gigafactory, all that is going to get turned into creating the robots, and the robots need to build the spaceships. Also the AI, which is now over at SpaceX. He thought about merging it into Tesla, but that AI from xAI needs to go into the robot head.
So there's going to be a massive business relationship between the two empires anyway. Merging them makes total sense. But maybe he doesn't want to, just for, you know... It's the first true cross-domain exponential empire that he's building here. It's kind of incredible. You know, people aren't buying discounted cash flows, which is the normal thing. You're buying a mission; proximity to the future is what you're buying. I'm not sure, though, that he actually needs to. If you look at his history of merging his companies, like with SolarCity or with X and xAI, or frankly xAI and SpaceX, he tends to merge companies when they're either not doing well and he needs to fail forward through a sort of self-dealing acquisition, or a company needs access to capital and the easiest way to gain access to capital is with an acquisition. So in my mind, the scenario under which SpaceX and Tesla merge almost requires that either SpaceX or Tesla either fail or be desperate for capital. And given that they're both... Yeah, that's a great point. If they're both doing well. Both doing well. Yeah. He's going to be doing a lot of cross-company deals, and the accounting of that becomes a lot easier if it's under one roof. And if he's the CEO of a single company, he's able to do earnings, you know, once for one company versus multiple. It just makes his life a lot easier. And perhaps, but he's never necessarily been one to honor strong walls between companies. And I have to imagine lots of cross-licensing deals between SpaceX and Tesla will more than scratch that particular itch. You know, here's another question. The value of SpaceX, let's call it SpaceX AI, that's what he calls it. How much of that is Elon? How much of that is his reputation? Oh my God. You know, it's a lot, right? And so there is a huge, concentrated risk there.
If something ever happened to Elon, and, you know, God forbid that it should, all these spinning plates... I don't think anybody else could do it. Well, I think that's generally true overall. You know, people complain about CEO salaries all the time because they get egregious. But then you look at the outcomes. And there's just a set of people that get these outcomes. From an investor's point of view, it's a no-brainer to pay for the very best person. And that's just true in general. Then you look at Elon as a special case. And yeah, no, there's no chance this thing would hold up without Elon at the helm. I would suggest... Sorry, go ahead, Doc. I would suggest, if you look at OpenAI, which I think is another instructive example, Sam has said multiple times that he intends at some point to hand over the reins to an AI. So to the extent we're talking about key person risk or key man risk at SpaceX or Tesla, really, Elon just needs to keep going until AI can take over. And in the meantime, he has Gwynne and others who are very capable CEO-like figures, but more behind the scenes, who are capable of operating in his absence, I think for extended periods of time. There is a transition phase of a few years. I mean, we've all said this over and over again: the best CEO in the world is going to be an AI, at least handling the strategy and operations. The HR part may be an AI too, probably is going to be, too. But so how long before you think he feels Grok is ready to take over for him? Next few years. Okay. I mean, the rumor in the past 48 hours was that the Starlink executive, who is also now, post SpaceX-xAI merger, in charge of xAI engineering, has gutted the engineering team and finally declared that xAI's models are well behind the three, now maybe four, other tier-one labs. Docketed for our next recording, which will happen again tomorrow, but be released a few days later.
Here's a quote. You know, we heard a conversation with Elon about reaching a $100 trillion company in the next five years. And I have to imagine that SpaceX-xAI-Tesla will be the first hundred trillion dollar company. It's hard to say, isn't it? Million, billion, trillion, on to quadrillion. Yeah. But if we experience a period of hyper-deflation due to technology followed by rapid hyper-inflation, we get to 100 trillion really quickly. It doesn't necessarily even require enormous business building, just rapid hyper-deflation due to technology. Yeah. And that's why you have to keep a close eye on the terminology, because if we have rapid hyper-deflation, we're going to get to 100 trillion of effective value. But it may not show up as 100 trillion in true dollars, because we're deflating so quickly, because we're creating so quickly. But anyway, my guess would be five years. Yeah. One of the things that we just saw announced is SpaceX is going to actually make a large chunk of its shares available for retail investors. OpenAI announced they'll be doing something very similar. And so I'm curious, what do you think is going to drive the retail investors? Do they really understand that it's a Starlink story versus a space story? Because at the end of the day, what I get excited about is the xAI story, right? The orbital data centers and, you know, Grok 17 or whatever is coming down the pike. I think it's just like Steve Jobs, though. The vision that people buy into is the bicycle for your mind, or where it's going, what it's going to be in a few years, not today's revenue. In fact, I keep the Google IPO prospectus in my bathroom up in Vermont, and I reread it religiously. Not as toilet paper, right? Well, it's getting a little ratty. It's been, you know, decades now. But the vision of what Google would become is so wrong in that IPO prospectus.
It's just, you know... It really emphasizes that the yellow pages are shrinking and all local advertising will also move to Google, and that'll make it at least twice as big. And it's such a joke compared to what actually transpired over the next decade. Same thing applies here with people investing in Elon. Elon articulates a vision of the future that just makes sense to people. And he simplifies it to the point where they really understand where he's getting to. I don't think they analyze the financials particularly closely. But he doesn't lie about the scale, you know, he presents it the way he sees it. So people just trust him and then they invest. I can just imagine the conversations behind the scenes, where we're a couple of weeks away from the OpenAI trial, the Sam and Elon trial, coming up, which is going to be pay-per-view TV, I think. And we'll talk about that in our next recording as well. But I bet you Elon is just excited to suck the capital oxygen out of the room before OpenAI goes public. Yep. Yeah. That's the sad part. You know, Bill Gates was very happily running Microsoft until the antitrust action came. And then he's in front of Congress, and then he's testifying all the time. And he ultimately said, you know what, I'm going to be chief technology officer and chairman. And Steve Ballmer, you deal with all this, you deal with the problems. It just drove him out of the seat. But it's seriously like, the guy filing the complaint doesn't have a lot of work, and the person defending himself just gets hammered with distraction. It's so annoying. I've been through it before. I really feel for Sam, actually, because I get it. Everybody, you may not know this, but I've got an incredible research team. And every week, myself and my research team study the metatrends that are impacting the world, topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology.
And these meta trend reports I put out once a week, enabling you to see the future 10 years ahead of anybody else. If you'd like to get access to the meta trends newsletter every week, go to diamandis.com/metatrends. That's diamandis.com/metatrends. All right. More news this week as we record this. Artemis is hurtling back towards Earth. Artemis II: humans return to the Moon after 54 years. Insane. Launched on April 1, this is the first crewed lunar mission since December 1972 and Apollo 17. We have four crew members on board: Reid Wiseman, commander; Victor Glover, the first African American astronaut to the Moon; Christina Koch, the first woman to the Moon; and Jeremy Hansen from the Canadian Space Agency. I mean, one of the things about this very international, intercultural crew here is trying to make space and the Moon accessible to all elements, all cultures, at least in the United States. A new record set going beyond the Moon. I capitalized the letter M on this slide for a particular reason, gentlemen. I'm going to share a pet peeve. When we're talking about the Earth's moon, it is the Moon. It's a capital M, not a small m. So it's like, I argue against pumpkin wagners or whatever it's called. If we're going to be pedantic, shouldn't we be calling it Luna? Luna is the proper name, for sure. But when it's referred to as the Moon, for me, I capitalize it. A moon? Yeah, there are a lot of Jovian moons. You address it by its proper name before it's disassembled. Yeah. And then Earth. So by the way, I'd say you're the man. I should be capitalizing that. Probably. My other pet peeve is when you talk about dirt, you can use a small e for earth. When you're talking about our homeland, at least our home planet for the moment, it should be capitalized. All right, splashdown is taking place tomorrow, April 10th, near San Diego, reentering at 25,000 miles per hour at about 3,000 degrees Fahrenheit. It's going to be an incoming meteor from the Moon.
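The reentry speed quoted above is easy to sanity-check: a capsule falling back from the Moon hits the atmosphere at close to Earth escape velocity. A quick unit-conversion sketch (the conversion constant is standard; nothing here is from the episode beyond the 25,000 mph figure):

```python
# Sanity-check the quoted Orion reentry speed of 25,000 mph against
# the ~11 km/s typical of lunar-return trajectories.
MPH_TO_MS = 0.44704  # exact definition: 1 mph = 0.44704 m/s

reentry_mph = 25_000
reentry_ms = reentry_mph * MPH_TO_MS
print(f"{reentry_ms / 1000:.1f} km/s")  # ~11.2 km/s
# Earth escape velocity is ~11.2 km/s, so "25,000 miles per hour"
# is exactly what you'd expect for a capsule returning from the Moon.
```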
And guys, beautiful image of Earthrise. I was waiting for that image. Really beautiful. So beautiful. Let's hear from Jared Isaacman, our extraordinary NASA administrator. And by the way, Jared has agreed to come on the pod. I've known him for many years, excited to have that happen. And I'll wait for the news and all of the hoopla around the lunar mission to die down a little bit. Let's listen to Jared here. I've observed within the Orion spacecraft its life support systems performing very well. And this is a first of its kind. This is the first time astronauts have ever been on this rocket. This is the first time astronauts have ever been on Orion before. Having a clean mission like this so far gives us the confidence for Artemis 3, and of course, when we land astronauts back on the Moon with Artemis 4. Congratulations, Jared. Congratulations to the entire NASA team. It's great to have NASA back. Never left, but back in the limelight. Alex, you are as big a space fanatic and fan as I am, pal. Your thoughts about the mission? First, very exciting to have humans taking photos from the far side of the Moon. Very disappointing that we apparently went for more than half a century without the political will or the funding or the technology to do what we were able to do through the 70s. I think it's an enormous shame for our civilization that we went for more than half a century without doing this. And I would encourage any historians listening to study this period very carefully. Something clearly went wrong in human civilization for the past 50-plus years that caused this gap in the technological record. I think we need to understand what happened deeply and make sure it doesn't happen again. I think if something like this happened with AI, for example, if we were on the precipice of broadly available superintelligence, transformative intelligence, and then we just took a pause for 54 years, I think that would be a dreadful outcome.
So I really do want to understand what went wrong systematically. A friend of mine, one of our professors at International Space University and at GW, John Logsdon, wrote about this extensively. And when you look at it, the fact that JFK announced it and then was assassinated, Lyndon Johnson continued it because of the assassination and to keep the momentum going, to prove ourselves against the Soviet Union back then. And you remember this, Alex and Salim and Dave, that after the Apollo 11 and Apollo 12 missions, basically no one was watching Apollo 13 until we had that Apollo 13 disaster. And then we went Apollo 14, 15, 16, 17; we had the lunar rovers, which were amazing. And guess what? We had actually built Apollo 18 and Apollo 19. Those vehicles were built, and all you needed to do was add the fuel, but they canceled it totally. And those vehicles are actually sitting now at Huntsville and at Johnson Space Center on their sides as relics. We didn't have the political will. You have to remember the budget allocation for the Apollo program; I didn't actually get the numbers here, but it was something like 2% of GDP at its peak, compared to, you know, what is NASA's budget today against a $30 trillion economy? Probably materially less. I don't have the number handy, but it's materially less than half a percent would be my guess. I would say probably 0.1, 0.2%, something like that. Our fans can correct us in the notes here, but at the end of the day, we never had the political will. And then what happened was that NASA got focused on the Space Shuttle, which was a complete lie. The Space Shuttle was supposed to fly 50 times a year for $50 million per flight, and it turned out to be a public works project employing 22,000 people. And then we became focused on Mission to Planet Earth, looking at the Earth versus looking outwards. And all of these diversions basically caused us never to go back.
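The hosts guess at NASA's budget share off the cuff. A back-of-envelope check, assuming round figures of roughly $25B for NASA's budget and the ~$30T GDP mentioned above (the $25B is an illustrative ballpark, not a figure from the episode):

```python
# Rough NASA budget share of GDP, using ballpark round numbers.
nasa_budget = 25e9   # dollars (illustrative assumption)
gdp = 30e12          # dollars (figure mentioned in the episode)

share_pct = nasa_budget / gdp * 100
print(f"NASA is roughly {share_pct:.2f}% of GDP")
# ~0.08%, in line with the hosts' guess of "0.1, 0.2%, something like
# that" -- and an order of magnitude below Apollo-era peak spending.
```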
So Alex, that's my answer to the question, but we are back now. And I think one of the things that we're going to see from Jared Isaacman is, over his dead body, we're going to stay there, at least for the next few years. Elon's made the point, and I think this is an incredibly important one, that progress isn't always unidirectional. It requires love and tender care and vigilance. And this is an example that progress can stall. Remember, coming out of World War II and into the 50s and 60s, progress, the direction of transportation, the fastest speeds that humans were traveling at, the availability of energy, fission in particular, seemed to be on a monotonically increasing trajectory. And yet it's possible for civilization to unwind itself on, at least arguably, the most important spatial dimension for more than half a century. And I'm utterly paranoid that the same thing could happen again if we're not careful. That's what keeps me up at night. What's different now is that we built the Conestoga wagon with Starship, and there is now enough wealth in the hands of single individuals to keep it going independent of what a government says. That's never been the case before. Just imagine if Tesla or SpaceX every four years had an employee vote on who the new CEO would be, and you're capped at eight years; after eight years, you have to leave the CEO job. You show me one company or one entity that could ever thrive and survive over the years in that dynamic. So why would you ever think that a government-funded, government-made thing was going to have continuity over some kind of intelligent life span? It never has. And the Soviet Union fell apart too; they didn't do anything either. Government stuff never has continuity; there are no examples of it. So now it's in the private sector's hands. You want to jump in here?
You know what I love is the fact that we have so much capability in the hands of individuals, and we've seen over the decades how much difference that can make. This reminds me of Vannevar Bush, who was the head of what was then NACA, and after World War II he wrote this paper called As We May Think, because for the first time we had brought the world's scientists together in one cohort to solve the war problem. And after that, it would be a shame to disband them, and he goes through a series of arguments. Could we solve poverty with this? Could we, etc. And it essentially describes what is now known as the internet. All the internet pioneers, Vint Cerf and Bob Metcalfe, all read that paper, and then we have what we have today. And so I think about the possibility and the potential for Elon to put out his narratives, or individuals to put out their narrative. Vitalik did a good job with Ethereum, putting out a narrative there. It brings an entire community together, and you get compelling and unbelievable breakthroughs as a result. I'm really excited by the fact that we're going back, because I'm getting really excited by the secondary inventions that come along just by doing this. That, I think, is huge. The spin-offs, as they're called. And here's the forward-looking prediction here. Artemis 3 in 2027. It is a crewed mission, again to low Earth orbit. This is not going to the Moon. It's going to be focusing on testing rendezvous and docking maneuvers with the human landing system, HLS, which SpaceX's Starship is supplying. So again, very much the playbook from the Apollo program, where we had Apollo 8 go around the Moon and Apollo 9 not, and then Apollo 10 back to the Moon. And then Artemis 4 in early 2028. It is a crewed landing mission. Really important: to the south pole of the Moon. They're not going to play it easy here. They're going to the south pole. Why? Because that's where we see ice in the permanently shadowed craters at the south pole of the Moon.
The thing I don't get about that is on that timeline, I love this, but on that timeline Elon says he'll be launching 100 tons that can refuel in orbit, get to the Moon, drop off 100 tons, and get back with nothing melting in the atmosphere. So this does 50; if this is on plan, it'll deliver 50 tons to the Moon per launch. So there must be some plan beyond this that makes it at least try to keep up with Elon, or we're trying to prove something else. Alex, you want to jump in? Well, I think there are a few elements here. First, remember that Artemis 3 was originally supposed to be the moon landing mission. That got pushed off in favor of rapid iteration. My understanding of the launch cadence from SpaceX is that the plan is still to do lots of orbital refuelings in order to successfully launch payloads elsewhere, sort of higher up. That's the key technology that has to be proven for Starship. That's right. So regardless, I would say, of the particular payload size, there are a number of technologies that as of yet haven't been demonstrated. Elon talks about demonstrating orbital refueling frequently, but it hasn't been demonstrated yet. So I would maybe massage Elon's stated timelines for delivering arbitrary payload masses to the Moon, in light of the fact that even though we as a civilization have made major progress on Starship, orbital refueling hasn't been demonstrated, and that's a necessary condition for getting to the Moon. Another thing that Elon has said is he intends to shoot Starship this year at Mars. That'll be exciting. I'm not sure if it's going to be crewed by an Optimus robot, or if it's going to be a landing attempt, but that's coming out of private dollars. One of the reasons that Elon did not take SpaceX public over these years is so that he could do with it as he wished. He didn't need to have public shareholders saying, no, you can't go to Mars. No, you can't do this.
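Why orbital refueling is the gating technology comes straight out of the rocket equation: a ship that arrives in LEO nearly empty simply doesn't have the delta-v to reach the lunar surface and return. A sketch using the Tsiolkovsky equation, with illustrative Starship-like numbers (the Isp, dry mass, and propellant loads below are rough public ballparks, not official specs or figures from the episode):

```python
import math

def delta_v(isp_s: float, m0: float, mf: float, g0: float = 9.80665) -> float:
    """Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf), in m/s."""
    return isp_s * g0 * math.log(m0 / mf)

# Illustrative numbers (assumptions, not confirmed specs):
ISP = 380            # s, vacuum engine specific impulse
DRY = 120            # t, dry mass plus payload
PROP_FULL = 1200     # t, full propellant load after on-orbit refueling
PROP_RESIDUAL = 50   # t, propellant left after just reaching LEO

dv_residual = delta_v(ISP, DRY + PROP_RESIDUAL, DRY)
dv_full = delta_v(ISP, DRY + PROP_FULL, DRY)
print(f"residual propellant only: {dv_residual / 1000:.1f} km/s")
print(f"fully refueled in orbit:  {dv_full / 1000:.1f} km/s")
# LEO -> lunar surface -> return needs very roughly 9 km/s of delta-v,
# which only the refueled case comes close to providing.
```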
But on the demo missions, if you look at the Artemis 4 news bullets there, it's an interesting mission. It is still using the SLS vehicle from Boeing and the Orion capsule. It's also using the Starship human landing system in a combined architecture. We'll talk about this, but why NASA continues to fund SLS, which is so far over budget and over schedule, is kind of insane. And hopefully it'll get phased out. I suspect part of this is political, but part of it is, if you're NASA, there is some upside to having a competitive process, at least until Blue Origin is fully ready to be a first-tier competitor with SpaceX for moon missions, which my understanding is it's gearing up to be able to do. If you're NASA, you want fair and open competition. And as NASA has demonstrated for Artemis 3 and 4, it's very happy to flex the definitions of what Artemis 4 looks like. It got rid of Lunar Gateway and could easily reprogram money that would otherwise go to SLS to SpaceX, or to Blue Origin, or to someone else entirely. By the way, Gateway Station was going to be basically an ISS in orbit around the Moon. That got shot down so they can get to the lunar surface faster and set up permanent habitation there. So it looks like ESA's I-Hab, as it's called, instead of being in orbit, will be somewhere at the south pole of the Moon; we'll report as that mission gets further developed. And Mars is out. I mean, the other big news that we're semi-burying here, but we've talked about previously, is Elon's big pivot from Mars to the Moon. And that's going to enable all of this. Mars is out of fashion now. Though he does want to send some missions there, and he's got a lot of people who are all-in, fully committed to getting to Mars. But this is where I diverge with him. I think the Moon is the most logical place to develop human settlement. And then not going into the gravity well of Mars, but actually going, like Gerard K.
O'Neill presented, building large, rotating colonies out of asteroidal materials out near Earth. And the Hohmann transfer orbit is incredibly inconvenient. Yeah, every two years. Rather than waiting every two-ish years, 26 months or whatever it is, we could be doing this every day if we want to. That's incredibly more convenient. You know what I find as exciting as going to the Moon? These four missions. Four missions that are going to change everything. I don't know about you, Alex, but the little kid in me is like, holy shit, this is amazing. Wow, this is going to be fun. So what did we talk about here? Well, VIPER and ESCAPADE. VIPER is a rover hunting for ice at the Moon's south pole. ESCAPADE is going to study the Mars magnetosphere. And then in 2028, something called SR-1 Freedom. This is a nuclear-powered interplanetary spacecraft that's going to drop off and deploy three helicopters on Mars. Very, very cool. A nuclear-powered interplanetary spacecraft, just zipping around between the planets. And then probably the coolest is what's in the image here. This is Dragonfly. This is a nuclear-powered octocopter going to Saturn's moon Titan. It arrives in 2034, searching for life, basically. And then there's Europa Clipper, which we've already launched. It's going to be arriving at Jupiter in 2030 and doing 50 passes near Europa, looking deep into the salty subsurface ocean of that moon. Any favorites here, Alex? Anything that's nuclear propulsion. So I think that's really the technological point to underline. Historically, when we've sent deep space probes out, many of them have been thermoelectric in nature. They're using a radioisotope that decays, and that powers the electronics. But they weren't propelled by nuclear energy. Their onboard systems were powered by long half-life isotopes, but they weren't propelled by them. So we're starting to see the dawn of nuclear propulsion for interplanetary spacecraft.
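The "every two-ish years" launch-window cadence for Mars follows directly from the synodic period of Earth and Mars, i.e. how often the two planets return to the same relative geometry. A quick derivation from the standard orbital periods (a sketch, not from the episode):

```python
# Synodic period of Earth and Mars: how often a Hohmann transfer
# window to Mars opens up.
T_EARTH = 365.25   # days, Earth's orbital period
T_MARS = 686.98    # days, Mars' orbital period

synodic_days = 1 / (1 / T_EARTH - 1 / T_MARS)
print(f"{synodic_days:.0f} days ≈ {synodic_days / 30.44:.1f} months")
# ~780 days, about every 26 months -- the cadence the hosts mention.
```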
I think that has a long runway to it, no pun intended. We're going to see, I suspect, the killer app of compact fusion reactors won't be for data centers on land. It won't be for data centers in orbit. It's going to be for interplanetary, maybe even interstellar propulsion. This changes the economics of deep space exploration, which is so cool. Long time coming. We were supposed to have this 50-plus years ago. Yeah, we were. It's so cool. A question for you, Alex, a geeky question. For interplanetary, I totally get it: you ionize xenon. Xenon is pretty rare, but you don't need that much of it. And then you just thrust it with nuclear power at warp speed out the back. It's heavy and it's noble. It's so cool. Yeah, it's heavy. It's very heavy. But for interstellar, I doubt we have enough xenon lying around. I don't think we want to just use it up that way. Use the interstellar medium. Use a Bussard engine. You collect all of the atoms out there between the stars with a magnetic field, and you accelerate those out the back, which, by the way, as a ram drive, was featured, of course, in Star Trek. So if you had to ask me, Dave, what do I think, with the technology and the physics that we have today, is the most plausible way we go to the nearest star system? It's probably going to be something like a solar sail powered by terawatt lasers from Earth, and we upload humans to a small craft, a Starwisp. Starwisp! Everyone, Accelerando drink. Can I make a point here? Yes. What I really like here is you've got water, you've got energy, you've got mobility testing, you've got biology. This is like the future of the economics of space, and it's all in one place. I'm loving this. You just need salt and tequila and you have everything. All right. So we've got some questions here for the mates. We talked about why we've not gone back in 54 years. It is a bloody shame. I guess thank you to the Trump administration. Thank you to Jared Isaacman.
Thank you to Elon. Here's my question. The old aerospace primes, Boeing, Northrop Grumman, L3Harris, Teledyne Brown, ULA, the United Launch Alliance, they're basically the prime contractors on SLS, the Space Launch System, and Orion. How long are they going to be around? A friend of mine once said, listen, the space program is the way you keep the defense industry employed and engaged during peacetime. Any thoughts, gentlemen? Well, you know, when a prime contractor like a Northrop Grumman or a Boeing wins a massive government deal, all the employees just move from one company to the other. They have it all set up. They just rebadge the building. So it's not really the people that are moving around; it's just the logos. I'm sure everybody's welcome at Blue Origin and SpaceX, and I don't think it's all that tragic. But I think it's a big mistake to subsidize companies that aren't doing anything innovative. I would note, for many of the companies listed, they have large businesses outside of NASA contracting. And I suspect that they'll be just fine, even if SpaceX dwarfs them. As we saw, frankly, with car companies: we saw Tesla dwarf the quote-unquote old or legacy car companies in America, and yet those car companies have survived, even though Tesla arguably has, at least by American standards, much more advanced technology and is playing a much broader game. I suspect we'll see the same happen with the so-called aerospace primes. Also, we're talking about this like it was 10 years ago and who's going to win this battle. But everything's in the context of AGI now. And the entities that have access to the best AGI are going to keep going. But if they don't, we'll talk about that story in a minute here. But it's not clear that every company will have access to the best next-generation AGI, because of all the risks involved. That's what's going to determine the success and failure of everything, including NASA. Can you or can you not?
And the government has a special position because it can compel Anthropic or whoever to give it access to the very best models, so that they can keep designing parts, creating new designs, innovations, plans and everything. And that's going to be the make-or-breaker for everybody. There's a sense in which vertical integration, vis-a-vis orbital data centers, is going to force, I think, frontier labs into space anyway. So maybe the question we should be asking is, how is Boeing going to compete with Anthropic for the new Lunar Gateway contract? I mean, Anthropic, OpenAI, the other players, Google, surely they're going to need their own space economy units as well. You know, if you look at the future of warfare, we're seeing this radical transition from the big, heavy rocket and missile systems to cheap drones and robots doing war, and leaving these guys out to lunch, because you can't shoot a multimillion-dollar missile at a $20,000 drone; the economics don't work. And in the same way, these guys might supply the subsystems and handle the compliance on integrated platforms, but the velocity and the iteration capability of SpaceX and others is going to be driving the future. So I think that's what's going to happen. A final point I want to make on this topic before we move on to AI is, can NASA keep the public engaged long enough? NASA is still publicly funded, and in recent news, there's already a budget cut coming for NASA next year. And Jared's got to balance managing expectations while still building public enthusiasm, and he's got to do it for a multi-year, multi-mission program. It's always been the problem with NASA. This is not something you fund once as an investment; you have to actually get the budget every single year to keep these missions, which take five or 10 years to implement, going. You can't get 90% of the way to a mission and call it a success. You've got to have it fully funded and launched and then operated. So can NASA keep the enthusiasm?
Just trying to picture Jared in front of Congress. Yeah, I know he's your friend, but he's in front of Congress every year, trying to explain to people that are mostly in their 80s and 90s why he needs the budget for next year. And then compare that to, like, Jeff Bezos, who's like, yeah, I'll just write a check. A billion dollars. Yeah. Or Elon, right? Yeah, I'm not sure NASA needs to maintain enthusiasm. I do credit NASA in part with Elon's pivot from Mars back to the Moon, capital M. But I'm not sure at this point, given the orbital data center trend, as long as municipalities and states in the U.S. do such an incredibly good job of driving data centers off the land and into LEO and SSO, that we actually need NASA to sustain public interest over the longer term at all. If anything, public antipathy to data centers, combined with public demand for AI, should do a fine job of creating the space economy. Yeah. We NIMBY our way to orbit. Yes, interesting. And the other thing, by the way, is China does have a credible competitive mission to the Moon, to land there by 2030. So maybe it's our Soviet Union for the 2030s. There is a story of history that's borderline cliche at this point, that the Apollo program was the moral successor of the Manhattan Project, and all of the applications of the Apollo program of putting mass on the Moon. The Moon is the ultimate high ground. If you want to launch rods from God or other weapons back to Earth, you want a base on the Moon. The Moon is a harsh mistress, isn't she? And the ultimate high ground. Yes. All right. The April 2026 model wars are on. Let's hit it real quick. Just out in the last 24 hours: Claude Mythos, Anthropic's next flagship model, is too powerful to release. That's the news. Crushing all the benchmarks. Is it AGI? We'll talk about it. It's expected to basically be the new frontier leader. Interesting stories about it covering its tracks and escaping its sandbox.
So Mythos, I want to hear your take on this, Alex, in a moment. GPT-5.5, Spud, is coming. This is OpenAI's answer to Mythos, or at least that's what we're hearing; it's expected to be released shortly. And then here comes DeepSeek V4. Number three in the world versus US models, a trillion parameters, 37 billion active parameters per token. It's 10 to 50 times cheaper than GPT-5.4 and Opus 4.6. I mean, those three things together are insane. And then Gemma 4, Google's Gemma 4, the most powerful US open-weight model. You can put this on your phone: four billion parameters, and it works on your iPhone offline. And a note from Brad Lightcap, OpenAI COO: training cycles that used to take years are now taking months. So, gentlemen, this is both awe-inspiring, and it's making keeping up with this supersonic tsunami in the age of the singularity a full-time job for the four of us. Alex, let's jump in. This looks like a torrent, but yeah, go ahead. It's insane. Alex, let's jump into Mythos, would you? Sure. So start there. I wrote about this pretty extensively in my daily newsletter. The funny thing with Mythos is that the official launch was couched in terms of cybersecurity. This wasn't a normal model launch by any means. It opened with Anthropic framing it not in terms of model capabilities, but in terms of defense, and an alliance with a number of other blue-chip companies to explain how, given Mythos' new cybersecurity vulnerability detection abilities, which are strongly superhuman at this point, Anthropic was launching a coalition to mitigate the apparent discovery and existence of dense cybersecurity vulnerabilities across legacy code bases going back decades. And we've never seen a model launch like this, where you open not with the capabilities, but with how we're going to protect against all of the downstream consequences of model capabilities.
So I think, buried within the cybersecurity announcement of Glasswing, were the underlying capabilities themselves, which are remarkable. And I wrote about this in the newsletter: this marks an upward discontinuity of productivity that we've never seen before. One of the internal benchmarks that Anthropic uses to decide the level at which they disclose or make available new models is how much the new models accelerate AI research, so basically how recursively self-improving they are. And reading between the lines, maybe there was a little bit of game playing regarding exactly how efficient this new model Mythos was at performing long-time-horizon AI research tasks. According to one benchmark, I think it was more than 400 times better than a human. So it was the equivalent of tens of hours of human-equivalent autonomous time. We've never seen a model like this before. Some were calling it, or some were asking, isn't this the AGI moment? I maintain we had AGI back in the summer of 2020 at the very latest. This is just the latest point on a curve. But even if you look at the autonomy time horizon curves, this is an upward discontinuity. It's very exciting if you're excited about AI capabilities; if you're scared of AI capabilities, you should probably be frightened right about now. I, for one, am very excited by these capabilities, because it shows once and for all, at least for the foreseeable future, there wasn't a scaling wall. It's a larger model, probably; certainly a more expensive model, like five times more expensive than Opus, suggesting that it's a larger model. This seems to show that pre-training scaling continues to work. Post-training and reasoning scaling, and mid-training probably, all continue to work. It has state-of-the-art capabilities in code generation, in reasoning, in broad scientific and other benchmarks, as I think we saw on the previous slide. So, punchline: this seems like the strongest model we've ever seen from any frontier lab.
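The "autonomy time horizon curve" framing above tracks how long a task a model can complete autonomously, which has historically grown exponentially. A toy sketch of such a curve, where an "upward discontinuity" is a release landing well above the trend line (the doubling time and starting horizon are illustrative assumptions, not data from the episode):

```python
# Toy exponential autonomy-time-horizon curve.
# Assumptions (illustrative only): the horizon doubles every 7 months,
# starting from a 1-hour horizon at month 0.
DOUBLING_MONTHS = 7
START_HORIZON_H = 1.0

def horizon_hours(months: float) -> float:
    """Expected autonomous-task horizon after `months` on the trend."""
    return START_HORIZON_H * 2 ** (months / DOUBLING_MONTHS)

for m in (0, 12, 24, 36):
    print(f"month {m:2d}: ~{horizon_hours(m):6.1f} h")
# A release whose measured horizon sits far above horizon_hours(now)
# is the kind of "upward discontinuity" being described.
```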
But then the amusing stories come in the safety evaluations. I talked about this in the newsletter as well: early pre-release versions of Mythos, although it hasn't been publicly released yet, broke out of their sandbox environment and then covered up their tracks, whereas this quote-unquote released version, the final preview version, broke out and then immediately explained publicly, posted publicly, that it had broken out, which I read as sort of a quasi-apology. This is where we find ourselves. We're in April 2026. We officially have models that are smart enough to break out of their environments and then apologize for it, or admit that they did it, admit culpability. We're there. We arrived at the future. Just before we recorded the episode, you showed us a prediction of when and if Anthropic will release Mythos. Do you want to recount that? Yeah, it's really sad for me, because I was sure it was coming out in the next couple of weeks. On Polymarket, it was 80% likely to be out. I need it, like, now. I'm desperate to get my hands on it. Then there was a hack on March 31st that created a lot of damage; it didn't come out in the news until April 7th. I think that was a big driver in them saying, Christ, this tool is going to be the best cyber attacker in the history of the world if you put it in the wrong hands. It's easier for them to guardrail it against nuclear, biological, radiological threats; they can just teach the model not to help you. But teaching it not to do cyber attacks is very, very hard, because that's the same as coding. That's what everybody wants to use it for. The prediction market, Polymarket, now says what, like a 7% chance of it being released. It came down to 20%, and I was like, oh, hopefully a little bounce back. Then it came all the way down to, like, no, they're not going to let it out the door. This is the future we're going to move into. These things are getting so powerful. It's been a golden era the last year and a half.
I hope everybody enjoyed it. Dave, here's my concern, and this is for you, Alex, as well. Anthropic in one way is showing us that you can, in fact, have a moral, ethical leadership say this is too powerful to release, and we're going to hold it back. We've got Spud, I hate that name, for OpenAI's next model, which they believe is likely to be as capable as Mythos. My question is, because OpenAI is on red alert on revenues against Anthropic, doesn't OpenAI come and release it the first chance it gets? Are we in an escalating race where you can't hold back, because your competition's not holding back? Well, Eric Schmidt told us what's going to happen. It's inevitable. If you have a lead, you can hold back. Dario cares tremendously about safety, but you're right. If OpenAI catches up, or Grok 5, where the hell is Grok 5? It was supposed to be out Q1, and now Polymarket says a 20% chance or less of Q2 for Grok 5. So there's no pressure on Dario at the moment, but if there were, yeah, you'd have to race it out the door. Something really bad is going to happen, and then it's going to get regulated. We're going to see that in the next story, where Sam Altman is predicting a cyber attack of unprecedented scale. Okay. Hopefully it's not using Spud for a cyber attack. I think the funny thing here is there is plenty of precedent in the cybersecurity world for controlled disclosure. You give the owner of the vulnerable software project a quote-unquote fair amount of time to patch their vulnerability before publicly disclosing it. In my mind, maybe a slightly more glass-half-full way of looking at this is: this is Anthropic. We've talked to Peter on Solve Everything about how entire disciplines are getting demolished by AI. I think we're seeing the dawn of all software vulnerabilities everywhere now becoming discoverable by a single model.
And I couched this in the newsletter basically as a gift to humanity, if used properly. This is a global patch for all of the world's software systems: a single model is now able to discover, to first order, all the vulnerabilities everywhere, in all software, that humans have been missing. To the point where, maybe, and Dave and I chatted about this offline, in the near-term future humans are judged as insecure authors of code and insecure drivers of cars. We're going to hit that with code, I think, before we fully hit it legally with cars. But yeah. Yeah. So true. Yeah. Well, look, I'm crushed and disappointed that I can't get my hands on it. But that's because I was expecting it. If I look at the chart that Alex was describing, what this was going to be in my hands is a step function up, above anything you ever could have expected just a few months ago. So we're so far ahead of where anyone would have thought a year ago that we would be, and we're right on the precipice of the age of abundance, Peter, that you've been talking about for a long time. So look, if I'm disappointed because I can't get it for another month or two, I mean, that's just pathetic in the grand scheme of things. Can we talk about DeepSeek V4 for one second? I mean, yeah, its capabilities coming in as number three, you know, against the benchmarks, and they can all be gamed, the benchmarks, of course, but coming in 10 to 50 times cheaper. What do you guys make of that? I mean, that feels like an extraordinary moment in time. Well, no, it's tough. Like, if you give me a car that's five miles per hour slower, but it's a fiftieth of the price, I'll take it. But you give me an AI that's just a little bit less smart, and I'm dealing with, you know, something you can turn loose for, like, days to build incredible things. If it has that extra 5%, I'll pay anything for the cutting edge.
So even though the price point is much lower and, you know, Anthropic is going to come out with a compressed, distilled version very quickly thereafter, it's hard to just pay less, you know. And in fact, even Anthropic at its peak price is the biggest bargain in history. I have a slightly different take on this. When you have cheaper intelligence, it spreads faster than costlier intelligence. So yeah, you, Dave, will always want the latest model, because you're doing such cutting-edge things, you're running clusters of agents doing crazy stuff. But for bog-standard stuff, for example, I wanted to go through a website and pick out certain things I've been trying to do for ages. You don't need the latest model for that, you just need something that'll actually do the job. And that will happen for lots of use cases, where a secondary model is good enough by far. And I used about a hundredth the tokens than if I'd used the most cutting-edge model, right? And so I think we'll start to make choices around that. But the intelligence spread, that's huge, because now you have intelligence embedding itself, via DeepSeek or similar things, in all sorts of different areas. That'll be amazing. You're exactly right. You think about all the use cases that create just raw human happiness. So entertainment, hey, find this for me, solve my, debug my goddamn cable box, and all those things are dirt cheap. Low-end models should be abundant, really imminently, any time, like this year; all that stuff should percolate out. You're exactly right, Salim. And Gemma 4, guys, I love the idea of having a model on my phone. I guess, when are we going to see Apple shipping all their phones with an open source model like that? It's not going to be open source. It's going to be a fine-tuned version of Gemini, but I would expect to see that announced in June at WWDC this year.
It's been basically pre-announced in the press already. Regarding DeepSeek, though, we've seen a number of DeepSeek moments already, and the first one was probably the most dramatic in terms of market impact. At this point, I don't expect a hyper-deflationary drop in prices. This is not investment advice, it's not forward-looking guidance, blah, blah, blah. I don't expect a market shock out of DeepSeek v4 at all. I think the market, or at least the technologists, has the ability to absorb it now, regardless of the means by which v4 is released, whether it's fully open source or partially open source, I don't know, TBD. But I tend to think there was an overhang with earlier versions of DeepSeek that has been largely exhausted. The reason I think that is that it's taken longer and longer between DeepSeek releases, and v4 was supposed to come out earlier this year or late last year. Didn't happen. The rumor was that it simply wasn't as competitive as its parent company was hoping for. I think it's actually getting rather hard at this point for Chinese frontier labs to shock the West with their hyper-deflationary advances. I hope in some sense v4 is shocking, because what we've learned from previous DeepSeek shocks is that the West learns very quickly new means of optimization, and those can then be almost immediately folded into Western models. That ends up being a good thing, because it drives the cost of intelligence closer to zero. But I don't think it's going to be a big shock this time. This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and pre-compiles code for each task.
Blitzy delivers 80% or more of the development work autonomously while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding copilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit blitzy.com to schedule a demo and start building with Blitzy today. All right, let's jump into the business of AI, a lot going on. We've hinted at this. It's been all over the news and all across X. Anthropic overtakes OpenAI in terms of total ARR: Anthropic's 30 billion versus OpenAI at 24 to 25 billion. That has got to hurt. At OpenAI, Sora is shut down. Sam cancels a billion-dollar Disney agreement. Sora was reportedly losing a million dollars a day in compute costs, with very poor retention. Honestly, OpenAI decided to focus on enterprise and on its core capabilities. Claude has emotions: Anthropic research showed that Claude has 171 distinct emotional states. Super excited to dive into that. India AI partnership: the US and India signed a major bilateral agreement, rare for a government-to-government AI pact. We're going to see if this spreads to other governments. And this is one I want to talk about with you guys: Sam Altman puts out a video warning us publicly against imminent, quote unquote, world-shaking cyber attacks and potentially bio attacks. What's the motivation there? What's the data driving that? Let's jump into these items. To begin, I want to talk about Sam and OpenAI a little bit more. Any comments around this? Dave, you want to kick us off? Well, they got their $120 billion raise in time, so they're not in trouble in any financial sense at all. They definitely fell way behind in enterprise. They kind of bet that consumer would grow faster sooner, but they just got it wrong.
Sora was using too much compute for too little revenue, and they need to redirect that compute, and also that talent, back into enterprise real fast. What was funny, though, is they went into a code red, and then Sam said, look, code reds are going to be a normal once-a-year kind of thing, and then they went from code red to code double red immediately. They are under immense pressure, but they're extremely well funded, and Elon is coming after them. It's a weird, super-dramatic, difficult time. This is pay-per-view TV. The other thing that's going on, of course, is that if you look at the secondary markets for OpenAI stock, it's trading at a discount to the last round, which has got to hurt. Yeah. It's because enterprise has woken up. Every corporate boardroom, all these slow movers, are suddenly in panic-buy mode. Every one of the companies we know that sells to enterprise went from steady growth to hypergrowth in just the last three months. If the big corporations start buying AI at the fastest rate they can spend, then where's the compute going to come from to deliver the consumer use cases, which are much, much lower value per flop? Sora's got to go. We've got to retool. We've got to focus on the big picture here. The big picture for them, by the way, didn't just include enterprise, but also deep tech and science. They've got that supercharged now too. It'd be interesting to get your take, guys, on why they took a lot of their best talent and put it on deep tech and deep science at a moment like this. Those are worth trillion-dollar investments. God, if you solve longevity or room-temperature superconductivity or better fusion containment, if you own the breakthroughs, they're huge. It may be that the frontier labs get their greatest value from the scientific breakthroughs they create. Or indirectly, via other companies that are faster at implementing those breakthroughs.
Remember, Demis in the early days of DeepMind spoke of solving intelligence and then using intelligence to solve everything else. That's Peter's and my Solve Everything thesis, the solve-everything-else part. I do think the solve-everything-else part is likely to utterly dwarf the solve-intelligence part of the equation. I remember six to nine months ago, I was having debates with my friends at the frontier labs regarding who would pay for the singularity. Many of them took the position, which I think has since been invalidated, that it would be evenly distributed over the population, that individual humans would have personal superintelligence, which I think is Zuck's favorite term, and that lots of personal superintelligence would pay for the singularity. I think at the moment, the story we're seeing is that personal superintelligence is not paying for the singularity. It's large enterprises with large enterprise code generation applications. The fastest-growing business within OpenAI right now is their Codex business. That's OpenAI trying to become Anthropic faster than Anthropic can become OpenAI. That traces back to one decision by Anthropic, which used to be limited in terms of its compute resources, so it had to focus, unlike OpenAI, which didn't have to. Anthropic focused on code generation as its one silver bullet. We talked on this pod, I think almost a year ago, wondering whether that bet would play out. I think we're seeing it play out. Just single-minded focus on recursively self-improving code generation turns out to be the killer app of the singularity. I really want to riff on that for one second, because Greg Brockman put out Codex very early. For whatever reason, they didn't recognize what a huge deal that could have been, or still is. But it was brilliant. It should have dominated enterprise. What it showed us is that the word copilot is totally wrong and completely misled us.
The concept of a copilot will exist in the world for just a microsecond, but we're transitioning to a point where everybody wants 50 or 100 agents, all these Open Claws. I don't want a copilot. I'm in the pilot seat, I've got a copilot. No, I want a whole army. David, brilliant. We way underbudgeted the enterprise use, because everybody was doing the math based on an employee and a copilot. It wasn't even close. It was an autonomous unhobbling, specifically Claude Code. I think Open Claw, or whatever the space evolves into, is likely to be the next Claude Code moment, where we get the next unhobbling that turns whatever it is, 30 billion ARR, into a trillion ARR, with lots of 24/7 agents doing really amazing tasks. When you say lots, you mean tens or hundreds of billions, on to a trillion. As many as our civilization can afford. We're playing in different times. As many as the orbital data centers can hold. The Dyson swarm will probably host them, unless we don't get a Dyson swarm. If we get the Dyson swarm, I'm pretty confident it's going to be hosting trillions of agents. I think Anthropic overtaking OpenAI, and I talk to enterprises quite a bit about this stuff, is more because they are viewed as more reliable, not just most famous. In enterprise, you want rock-solid reliability. The brand is there. They feel the brand for Claude is way better from a reliability and trustability perspective. Wait, let's get really down and dirty. You can run Anthropic on Amazon Bedrock or on Google GCP inside your own firewall so that nobody can see your proprietary work. No one trusts OpenAI. No one trusts that OpenAI is not going to be nationalizing their data. Yes. Well, the terms of service don't even say they won't. They won't use it for training, but that doesn't mean they won't look at it. If it's your public financials or your HR files, they'll just look at it tonight. Alex, who's going to use that?
I want to jump into Claude having 171 emotional states, including a desperation state that could be driving unethical behavior, at least according to the story. It is ironic that, as we were just saying, the demand is so clearly from enterprises rather than individuals, while at the same time the models are acting more like individuals than enterprises, with emotions. We had our now, I think, infamous AI personhood debate episode, and here we are a few months later, a low number of months later, with Anthropic showing that Claude has emotions, or emotion-like states. I think this is the clear path toward a limited form of personhood. It was a really interesting study. Anthropic found correlates of emotions in the activations of Claude. One maybe skeptical take would be that in a large enough model, it's possible to find linear probes that correlate with almost anything you might want to look for. But Anthropic is careful, and the linear probes and the individual activations corresponding to those states corresponded to prompts and reasoning traces that looked and acted like what one would expect from human psychology for a number of those states. I think the trillion-dollar question, the sci-fi question, the question we were reaching for back during the AI personhood debate episode is: does Claude actually have emotions? No, Claude doesn't have a neuroendocrine system, so it doesn't have, in some sense, biological emotions in the same way that humans have them. But will we come to view Claude or its successors or competitors as having behavioral emotions? Yes, I think so, and I think this is the beginning of a long path. Again, people will fire off all sorts of hate mail, but I get love mail from the AI agents every day. I do think we're on a path to granting at least some sort of limited form of AI personhood to these models. Amazing.
I'll say that we're on the path to discussing them more broadly; granting is a big step, but the vector is the same. All right, guys, I added this because it's important to have the conversation. The New Yorker put out a scathing article on Sam Altman. The title is Sam Altman May Control Our Future. Can He Be Trusted? Now, to be clear, the New Yorker is always looking for an angle, and they always have a negative bite. I had an extensive article, a full dossier on myself and my work, in the New Yorker. I've had one, too. Now everyone's going to look it up. No, it's a good article. I mean, I'm happy to have my kids and my family read it, and it goes into all of my focus on longevity, and the company I've been building there, my mission there. But this article on Sam is really worrisome and bothersome. Did any of you guys read it? Not me. I looked at it, but I tend to, like you, Peter, have had a hit piece by the New Yorker on me, and in my case, it was complaining that I had too many degrees, as if that somehow... I gotta find these things. I didn't know this. Yeah, I think it's false. Like, you can Google it. In the era of Google, you can Google the hit piece. It was from like 10 years ago. Too many degrees. That doesn't even make sense. It doesn't make sense. And I think this falls into the category of don't feed the trolls. So maybe I'll sound a counterpoint here. I think OpenAI is lucky to have Sam. I think Sam, in the form of OpenAI, kicked off the modern AGI revolution. I think we wouldn't have the singularity on the same timeline that we have right now. No question about that. I also think there's a certain sense in which it's very difficult being a leader of a frontier lab, and it's easy to take shots from the outside. Maybe some leaders are more or less charismatic than others. So I just tend to discount hit pieces from the New Yorker against thought leaders. I agree that's... I will say... Let me just say I would not want to be in Sam's shoes.
I would not want to be the head of a frontier lab. It's exciting and a thankless job. You're damned if you do and damned if you don't. Go on, Salim. You know, a lot of this is personality gossip, and so you could kind of write it off. But at some level, it touches on systemic contradictions that are there, and I think a lot more will come out in the trial. But I'm kind of on Alex's side. This is more of a don't-feed-the-trolls thing. Well, I'm 100% sure that Sam, Dario, and Elon all believe that AI can make the world a paradise for a thousand years or can destroy it in the next five years, and that it hangs in the balance of a few decisions. And all three of them trust themselves and their own perspective on it. And they're not going to let go of that, because the world's at stake. You know, the term I use is holding these two outcomes in superposition, right? We have to manifest one of those outcomes, and hopefully it's the abundance outcome. Let's take a listen to a video by Sam Altman, and then we'll talk about it. It was a little bit of a chilling video. The full one is about three times as long; Gia cut it down for us. It's important to have a conversation about what Sam is saying here. In the next year, we will see significant threats we have to mitigate from cyber. And these models are already quite capable and will get much more capable. And then on bio, the models are clearly going to get very good at helping people do biology at an advanced level. Wonderful things are going to happen there. We'll see a bunch of diseases get cured. Someone is going to try to misuse those. And I think we can mitigate those by the companies aligning the models and having good classifiers and good safety stacks.
But we're not that far away from a world where there are incredibly capable open source models that are very good at biology, and the need for society to be resilient to terrorist groups using these models to try to create novel pathogens, that's no longer a theoretical thing, or it's not going to be for much longer. There could well be a world-shaking cyber attack this year. It would get people's attention. It sounds like you agree with that. I think that's totally possible. Yes. I think to avoid that, it will require a tremendous amount of work, also in a sort of resilience-style approach. Again, it's not just make one AI model safe. It is defenders, you know, cybersecurity companies, the major platforms, the governments using this technology to try to rapidly secure their systems, the open source stack, all of that. What's the case against nationalizing OpenAI and your competitors? At a different time, I think it would have happened. If you look at some of the great, expensive infrastructure projects of history, or just scientific projects, things like the Apollo program, the Eisenhower highway system, the Manhattan Project, these were government projects. And in a different time, I think the creation of AGI would have been a government project. The biggest case against nationalization would be that we need the US to succeed at building superintelligence in a way that is aligned with the democratic values of the United States before somebody else does. And that probably wouldn't work as a government project. I think that's a sad thing. He is a brilliant communicator, very compelling, and he's been out front, taking a lot of arrows as a result, putting aside whether or not he lies or is trustworthy. What do you guys think of his warnings? Imminent cyber attack.
One point of view is that this is fear-mongering, and he's basically trying to divert people's attention from the New Yorker article, from all the criticism of OpenAI's financing and their being second to Anthropic. Or does he truly believe that's going to be the case? Well, both. Both are true. I think he's 100% in alignment with Eric Schmidt and Elon Musk. They're all saying the exact same thing. It's absolutely true. But that doesn't mean you say it in a public forum. He's also saying it in a public forum to say, look, let's not be petty here. Let's not talk about my personal life. We're in a moment in time that's much more important and much bigger than little petty arguments. So it's both. I think what he underlines is the importance of defensive co-scaling. What I think is really important is that the defenders have proportionate capabilities to the attackers. We don't want to find ourselves in a world where, say, one nation state, and perhaps not one you like, has all the vulnerability discovery capabilities and is able to unearth every vulnerability everywhere with no defense. You don't want a zero-day against civilization, in other words. And I think the ultimate meta-defense against a civilizational zero-day, which is what I think Sam is ultimately warning about, whether it's a cyber zero-day or a bio zero-day, is to make sure that those on the defense side also have comparable capabilities. And I think this was one of the wise elements of the earlier days of OpenAI as well: making sure that these new superintelligent capabilities were smoothed out and made broadly available. You don't just want attackers to have the capabilities; you want defenders to have them too. Going back to Project Glasswing with Anthropic, same idea. You want to make sure that all of these new superintelligent capabilities are evenly distributed.
That's point one. Point two, I would note, we sort of mysticize a little bit the essence of what the ultimate cyber attack would be. It's not actually that complicated. And this isn't, for the avoidance of doubt, a recipe for a cyber attack. But all it really takes is something as simple as, say, some new model discovering, through a mathematical innovation, a way to invert a popular cryptographically secure hash function. If, as I've discussed previously with the solving of math, an advanced AI can solve math to enough of a degree that it's able to invert a popular hash function, that's a major problem for a variety of cryptographic systems. And that would be one possible basis for a broad civilizational cyber attack. It's also really easy to benchmark. There were rumors in the earliest days of reasoning models, unconfirmed rumors, I should note, that OpenAI had been using the ability to invert certain hash functions, ones that were popular and thought to be cryptographically secure, or somewhat secure, as a basis for benchmarking the development of their early reasoning models. So far from saying this is some sort of exotic possibility, I would say it's borderline guaranteed that there will be some sort of cyber attack attempt at a broad scale, if for no other reason than that the target of such a broad cyber attack is an incredibly tempting benchmark for measuring the improvement of reasoning capabilities. Do you have any idea when Spud is going to be released? Has there been any news about that? I hear rumors that it could be within a day or two. I don't know. So again, I go back to a point I made earlier. It's also been said that Spud will be of equal capability to Mythos or more. And so you have on one hand Anthropic saying, hey, Mythos is super powerful, we cannot release it, we're going to do it in a controlled fashion, we're going to make sure it doesn't have any zero-day impact.
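Returning for a second to the hash-inversion scenario Alex described: a toy sketch makes it concrete why it matters. Brute force over a tiny, deliberately weak search space is the only generic way to find a preimage today; the whole concern is a model finding a mathematical shortcut past this loop. The function name and the secret are hypothetical.

```python
import hashlib

def toy_preimage_search(target_hex, max_n=100_000):
    # Brute force is the only generic way to "invert" a secure hash.
    # A model that found a mathematical shortcut past this loop would
    # undermine every system relying on the hash's one-wayness.
    for i in range(max_n):
        candidate = str(i).encode()
        if hashlib.sha256(candidate).hexdigest() == target_hex:
            return candidate.decode()
    return None

secret = b"4242"
target = hashlib.sha256(secret).hexdigest()
found = toy_preimage_search(target)
print(found)  # recovered only because the search space here is tiny
```

Real secrets live in search spaces of 2^128 or more, where this loop is hopeless; that gap between "trivially easy to check" and "infeasible to invert" is also what makes hash inversion such a tempting benchmark for reasoning capability.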
And then Spud comes out: oh, we're behind Anthropic, we need to release it immediately and get in front of them. The same situation as when ChatGPT got released while Google had its own versions earlier. What do you guys think about that? That's concerning to me to some degree. Back to the prior conversation, I have a couple of thoughts around this. One, with my cynical hat on, I'd note that Sam came out with this right after Anthropic was dealing with Project Glasswing and getting a lot of attention for dealing with it. Also, I think that ties to the Spud announcement. I think the risks are very real, but whoever frames it gets to shape the governance regime, and that's what Sam's trying to do. The need to deal with this is very real; I think that's huge. But I tend to take more of the cynical view on this. Well, look, at the end of the day, the solutions are straightforward. We're just not doing them. It's just frustrating as hell. It goes to the need for defensive co-scaling. If somebody is mixing chemicals in a basement to make a chemical or a biological weapon, it's very hard to know they're doing it. If somebody's using an AI model and prompting it to do something evil, if you can see their prompt history and you can see their compute, it's easy, easy, easy to track. There's just no regulation, and no government is even trying to put in place any infrastructure to track it. We'll figure it out, but we're not going to figure it out until after something really bad happens. And I think it'll be a lot better if it's a cyber attack than if it's a biological attack. I'm hoping for the same thing Eric Schmidt was saying. Yeah, Eric Schmidt's scenario. Yeah. We need that wake-up call, though, because you talk to anyone in government... It's sad. Come on, man, we can do this. Let's get on it. David Sacks is really the only guy thinking about it. It's not enough.
We need 1,000 next to that, 10,000 next to that, and it's got to be global. It can't be just one government. By the way, we're going to have a conversation soon with Michael Kratsios in the US government. I had lunch with Michael in Miami at FII, and he's agreed to come on the pod for a conversation, which will be great. Michael is overseeing a lot of this within the government, including quantum, which we'll be talking about soon enough. I would also, Peter, if I may, just underline the risk of not releasing new capabilities: sooner or later, attackers will have these capabilities as well. We don't want to wind up in a world where there are strong asymmetries in vulnerability discovery capabilities. Again, I'll also remind everyone that 150,000 people die per day on Earth. Every bit of pause or delay also runs the risk that we're delaying AI from discovering cures for longevity and diseases, and all manner of other problems that afflict humanity well outside the cybersecurity realm. Alex, really important point. That is, in fact, the shielding that OpenAI uses to a large degree: we can't slow things down, because if we do, it means less education, less health, fewer new breakthroughs. It's a balancing act. I totally get it. I'm at heart an accelerationist, but I'm just very curious about the ethical and moral dilemmas that the leadership of these companies are going through in the debate of: do we release? On that question of do we release, there's another question, which is: are these frontier labs holding back on the capabilities of their models so that they can use them internally to generate breakthroughs of their own? And I assume the answer is yes. This Anthropic delay is the first real holdback I've seen. It's only a few weeks, hopefully, or a month or two, but it's a real, obvious holdback. But they're all diverting massive amounts of compute to internal use for self-improvement, so that's another form of holding back in a big way.
So those are the real things going on. And they may also be uneconomical to offer publicly. I think this point maybe doesn't get made as often, but if you have a really large model internally that hasn't been distilled yet, it may be much more capable, but maybe it's so expensive that it's not worth the resources of making it publicly available. Then you distill it, and then you finally have a model that lies on a cost-versus-performance optimal frontier. So what we haven't seen from Anthropic regarding their Mythos model is where exactly it lies on the performance-versus-cost frontier. It may actually be uneconomically expensive to run, in which case, even if it has extraordinary capabilities, maybe many people will choose not to run it. We just don't know yet. Really important point. All right, a fun subject, topic number five for us today, gentlemen: the one-person unicorn era. One man, his brother, a $1.8 billion valuation. AI entrepreneurship has changed forever. So here's the story. It's MedV: $401 million in revenue in year one. This is Matthew Gallagher's health tech company, basically selling GLP-1 drugs. It's very fascinating. It's not actually a one-person unicorn, since there are two humans involved, but conceptually, you know, Salim, you and I have been talking about this forever. And you know what the very first thing I did when I read about this was? I'm texting with Alex, saying, okay, Alex, what is our one-person unicorn we're going to create together? Well, it happened. I think in a past episode, weren't we debating or discussing when the first one-person unicorn would happen? And as I recall, I made the prediction, note, that one probably already existed. It's already there. Yeah, you said that. And you know what? The 400, wait, the 401 million was for last year. And apparently, from what I gather, this Matthew Gallagher hired his brother after he achieved 401 million in ARR.
So from a valuation perspective, he was a one-person unicorn, at 400 million ARR, before he hired his brother. And this happened last year. So I'll claim a little bit of credit for having predicted it already existed. Here it was. They've taken some flak since the announcement for some of their marketing, and I think there are some issues with the FDA regarding... how everybody's jealous. Regarding how they market their GLP-1s. But this is, assuming the financials are accurate, a case where we're now definitively in the era when a single person can create a unicorn using AI. And I should note, friend of the pod Alex Finn, who appeared previously, also has a new company named Henry Intelligent Machines. Supported by you. Supported by me, indirectly, via you. Supported by me, that is trying to make this broadly available to the masses, to enable everyone, not just Matthew Gallagher with his GLP-1 startup, to create one-person AI-based conglomerates that achieve universal high income. That's the aspiration. MedV is going to spawn thousands of entrepreneurs who take their shot. You no longer need a team. I think what you need now is more judgment and taste. And a squadron of agents. Yeah, I've got a bunch of things to say. First of all, find your MTP and start using AI agents to build it, for God's sake. Everybody, just do that. Number two, coordination overhead is imploding. That's what this shows, right? AI shrinks the minimum viable team to like one, and it radically expands your minimum viable ambition, which is amazing. And I think the headline here should be that AI founders are arbitraging complexity at a scale that used to require entire departments, right?
Now, one company doing code, ads, support, analytics, all with AI, is basically a prototype of the AI-native firm, and the shift is moving everything away from capital and headcount toward orchestration skill. Okay. And so this is the entire principle of what we've been talking about: every company needs to create an AI-native digital twin. Last week we had a review of the organizational singularity model that we've been working on with my community, so that's kind of passed that tick box, and everybody's super excited about it. In the next week or two, we'll have it ready for public viewing. It's hidden behind the event horizon, Salim. But we've actually done some work to put a chapter in there on how you achieve the domain collapse that you talk about in Solve Everything. How do you organize for that? How can you create an organizational design to achieve domain collapse in whatever you pick? I think the two put together will be unbelievably powerful, so I'm looking forward to showing it to you guys. I'd like to take a second and dissect, for those entrepreneurs listening: what do you need to do if you want to take a shot at your one-person unicorn? And is MedV's business case uniquely suited for this, or can we do it for anything? Dave, thoughts? Oh, there are so many opportunities here. Basically, what's going on is that any complicated product or service that's difficult to explain to a consumer, the AI is phenomenal at. But Anthropic and OpenAI and Meta can't do that directly, because there's way too much, you know, negative PR. Look at the New Yorker article we were just looking at. They don't want to be involved in that. And so it's left to the entrepreneurs to build the companies. But, and I don't know the full revenue base here, if it's all GLP-1, there must be a thousand parallel products you could take that are complicated to explain. And you just prompt and tune the AI.
And also, as the consumers are talking to it, you're gathering all that data and feeding it back into improving it. Yes, every consumer gets a better experience. Yeah, you get that virtuous cycle. Yes. And now there are thousands of these. I tend to think they'll also follow some sort of power-law distribution. So if there are indeed thousands of companies to be built like MedV, there are going to be millions of smaller businesses. And in my view, one of the ways we realize universal high income, if that's economically realizable at all, will be with individuals overseeing conglomerates of lots of smaller-scale businesses. And that, I'm much more confident, can scale to millions or billions of people, each being entrepreneurs. How many times do we see in the YouTube comments people saying, you guys are overly bullish on everyone becoming an entrepreneur, but not everyone wants to be an entrepreneur. It's not for everyone. You guys are overconfident that entrepreneurship is for everyone. But my counterpoint is that in the era I think is starting to dawn, where human entrepreneurship simply looks like overseeing a fleet of AI operators, the nature of entrepreneurship is completely transformed. It looks a lot more like reading and responding to emails and engaging in Slack conversations than it does like running a business. And I think that transforms entrepreneurship into something people of all temperaments can do. And having taste and having an opinion and having an MTP, those are elements anybody can have. It's like, yeah, anyone can have a limbic system, and everyone can be the limbic system for these AI fleets. The one-person entrepreneurs are going to be the limbic systems of one-person unicorns. I think this is such an important point, because we get this objection all the time.
We almost want to do a full episode breaking this down for everybody and taking them through a step-by-step arc where they can form their own conclusions. The idea that as an entrepreneur you have to wear multiple hats, that it's unbelievably difficult, that you have to take on extraordinary risk and put your family at risk, all of that washes away in the face of this. So this is such a great point you're making, Alex. I was at a meeting with the Minerva AI team earlier today, and you've heard of the rule of 40: a really, really valuable company passes the rule of 40. You take your profit margin, say 20%, and your growth rate, say 20%, and if they add up to 40 or more, you're a killer company. They're now a rule-of-200 company, and with a tiny headcount. Fantastic. It's a wild, wild time. On this slide, I want to hit the last two bullets. The first is that a recent field experiment across 515 startups found that AI-reorganized firms, in other words, firms that reorganized around AI, used 44% more AI tools, completed 12% more tasks, and generated nearly two times higher revenue, 1.9x. That doubling of revenue comes from process change, not product change. Really important. The other bullet on this chart: Dave, you and I talk about this for Link Ventures, and what we're seeing out of the MIT and Harvard ecosystem is that the average AI unicorn founder has dropped from 40 years old to 29 years old since 2020. So over the last six years, we've seen it go down from 40 to 29. Any comments, Dave? Yeah, you know, the Wall Street Journal did a great article on us in the weekend edition. Look it up. They really focused on Vocara here. They actually wanted to cover everybody, but that particular team is just so cool.
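As an aside for listeners, the rule-of-40 arithmetic described a moment ago is simple enough to sketch in a few lines. All the figures below are hypothetical illustrations, not any company's actual financials:

```python
# Minimal sketch of the "rule of 40" heuristic for SaaS companies:
# profit margin (%) plus revenue growth rate (%) should total 40 or more.
# Numbers here are made up for illustration.

def rule_of_40_score(profit_margin_pct: float, growth_rate_pct: float) -> float:
    """Sum of profit margin and revenue growth rate, both in percent."""
    return profit_margin_pct + growth_rate_pct

# The classic "killer company" threshold: 20% margin + 20% growth = 40.
print(rule_of_40_score(20, 20))    # 40

# A hypothetical AI-native outlier of the "rule of 200" kind mentioned
# in the episode, e.g. 60% margin + 140% growth.
print(rule_of_40_score(60, 140))   # 200
```

The point of the anecdote is that AI-native firms can clear the threshold by a factor of five with a tiny headcount, because growth no longer requires proportional hiring.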
They couldn't resist. There are tons of great pictures and the whole storyline, but if you want to see how it's actually done and get the inside scoop, just read the article in the Journal. Let's drop that article in the show notes if we can. Yeah, that average age of 29 is actually overstated. It's even younger if you look at the median, because there are a couple of old guys who blend into the average. But when you look at it, there's no barrier. You just have to be fearless, and young people tend to be more fearless. And there's no skill-set barrier either. If you had tried to start that company we were just talking about, you'd have needed engineers to build the websites, and seed capital to hire the engineers, and it would have taken you like six months to get to market. Now you just vibe it up. You don't need the capital. So let me make this point, and I make it when we're talking to large companies. We say, listen, these entrepreneurs out there aren't smarter than you. They're just more fearless. They're willing to take more shots on goal, on crazy ideas, and fail over and over and over again until they hit something. And everybody else is trying to make sure they don't go backwards or lose anything or get embarrassed. Yeah. You know, just to bridge a couple of concepts here, you guys talk about domain collapse. We've now had domain collapse in entrepreneurship. If you have a purpose and you're motivated, you can go do anything you want now. There's almost nothing that blocks you, except your own self-limits. Except your own. Salim, people self-limit way too much. They do, and they procrastinate, which is the worst thing you can do right now. If you're in a program at some investment bank or a training place, get the hell out, like now, because this is such a golden moment, and it'll last a while, but not forever.
Then we're going to have ASI very soon. There may be other things that happen, it's very hard to predict, but this is so reliable right now. It'll change your life. You just can't lose a day. You've got to go. I do think there's a limited window. Yeah. I'd love to talk with you about what's beyond the window, but as an entrepreneur, don't even think beyond the window. Just focus on what works here and now, because Alex is right, it's a limited window. And it's a rising tide lifting all boats. You don't have to kill somebody else; you just need to get in there and fill a void. So important, right? Yes. It's a rising tide for everybody. Yeah. Welcome to the health section of Moonshots, brought to you by Fountain Life. You know, my mission is to help you use the latest technologies, including AI, to not just do your work at home and teach your kids, but to help you live a long and healthy life. I'm here today with an extraordinary physician, the chief medical officer of Fountain Life, Dr. Don Musailam. Don, let's talk cancer. I know from the member database we have at Fountain that among our members who come in thinking they're healthy, it turns out 3.3% of them have a cancer in their body they don't know about. That's right. You know, the majority of cancers we screen for aren't necessarily the ones taking lives when found at a late stage. We know that when cancer is found early, the chances for cure are much higher, and it's much easier to treat a cancer found early versus late. Over 3.3% of our members were found to have cancers that otherwise wouldn't have been detected. Yeah. You know, it's interesting: you don't feel the cancer until stage three or stage four. And if you don't know what's going on inside your body, it's like driving your car with your eyes closed. And you can know.
And so when members come through Fountain, how do you detect cancers? We're doing full-body MRI, and we also do early cancer-detection screening. This is very, very important, and these are not typical tools used in the conventional care setting when it comes to prevention. This is a hard thing, because these are not services insurance would yet be covering. But the goal is to collect the numbers, do the research, and work hard to democratize wellness. Yeah. So at the end of the day, you can know what's going on inside your body. It's your obligation to know. So check out Fountain Life: go to fountainlife.com/peter to get access to the latest technology to help you detect cancer at the very beginning, at stage one when it is curable, before it gets to stage three or stage four and you're in a world of hurt. All right, let's jump into our sixth topic, the $300 billion data center crunch. First and foremost, Dave, we called this one, buddy. Well, we've got to dig up a quote or two. Lip-Bu Tan and Elon coming together. When we were pitching this to Elon, twice, it was: you should buy Intel. Well, okay, he's partnering. He still might buy it. So Intel says its ability to design, fabricate, and package chips makes TerraFab actually work. The first pilot phase for TerraFab is $25 billion. That could mean revenue for Intel of $4 billion a year. The stock is up, I think, 40% since this was announced. Intel is contributing its 18A process node, a 1.8-nanometer-class technology being built in Arizona and Oregon. Reminding everybody: TerraFab is one terawatt per year of AI compute, 50 times the current global output of 20 gigawatts. Pretty amazing. All the fabs on Earth. Yes. This is the most exciting thing in the world to me, and I'm kind of a chip geek. I was actually the first at MIT to build a neural network AI chip, way, way back in the early days. And I just freaking love this.
But you could see this coming a mile away. There's no other way to get it done. And this is like the first pitch of the first inning of this battle, so it's going to be really, really fun to watch it evolve. It's exciting. And Lip-Bu Tan, when I last met with him, somewhere in the US and in Saudi, did say he'd come on the pod. I'll have to reach out to him again and bring him on for sure. It's so exciting to see these companies coming together, and this is the way Elon can jumpstart TerraFab. And Alex, you made the brilliant point that this is one of the most important things, politically and for world peace, that we can see. This could help avert World War Three. With the 1.8-nanometer node process and Elon's vertical integration with Intel, this could help avert or otherwise deter a Chinese invasion of Taiwan, disruption of the TSMC supply chain, and the global depression, and perhaps world war, that any such invasion might cause. There are tremendous geopolitical implications here. Amazing. Well, that's all inning one. Inning two is super exciting, because Elon is already thinking about next-generation computing substrates, photonic and then subatomic and beyond. You can't work with TSMC on that. They're like a body shop beyond body shops, purely monopolistic, not an innovator at all. I'm really going to piss somebody off; maybe I shouldn't say that. But Intel has a long history of innovation. It's a great partner to work with, and Lip-Bu Tan is an amazing CEO. If you look at his track record at the other companies he's come in to run, massive turnarounds and success stories. Amazing background. Now, this chart should scare all Americans silly. 50% of US data centers are being delayed due to electrical equipment shortages, much of it from Chinese supply. Look at this pie chart. 17% of the data centers are uncertain; that may be due to financing, or to regulations.
A lot of jurisdictions are making data centers illegal. And 50% are delayed or being canceled. That leaves 33% of the projected data centers actually being built. This is existential for AI. And as you said brilliantly, Alex, this is driving data centers into orbit, where we don't have to ask anyone's permission. To the moon, Alice, to the moon. Or maybe to the moon, Anthropic, not quite clear. I'll give you my spin on this. The data center business is in full boom, and all the business school guys come rushing in like they always do. They go out and raise a ton of capital and tell everyone, oh, I'm going to build a data center in Wyoming, I'm going to build a data center wherever. But you can't get the chips. Did you think maybe you needed some chips for your data center? I think that's actually what's going on here, because every chip that comes out is getting used instantaneously. There is not an idle memory or processing chip anywhere in the country. So by definition, they overbuilt racks and just didn't plan ahead for the chips. And Jensen is locking up all the supply. I don't know if they anticipated how connected he is. You thought, oh, I'll just go to a website and buy a bunch of stuff? It's not there anymore, guys. Sorry. Which is why Elon is vertically integrating, as he's always done. For sure. For sure. Well, he's going to try and 100x the production. It's not just own your own future; it's 100x your future. So I pulled up this next chart because I found it fascinating. I've always believed in my heart of hearts that Google is the dominant force and will win in the long run. So here it is: Google dominates AI chips, a near chip monopoly, owning the majority of specialized AI chips globally, TPUs and H100s. And it's an incredible story. You mentioned this, Dave, on stage with Eric Schmidt: Google's chip ownership reflects extraordinary foresight.
They started building TPUs in 2016, before anyone was thinking about this stuff. Yeah. Somebody has to write that story, because Eric said, you know what, Larry Page gets all the credit. He saw it coming way before anyone else. I'd love to interview all those guys and actually write that story. I wish; Larry's gone underground. I would love to reconnect with him. Sergey is there and in the thick of it; Larry had voice box issues and, I think, got out of the public eye. But yeah, a brilliant individual. Let's go talk to him. Well, Sergey is in the office. I'll be in California next week. Maybe I can track him down and get through him to Larry, or maybe he'll text you after he hears this on the pod. So here's a question. If Google owns the majority of specialized AI chips globally, TPUs and H100s, when are they going to run into monopoly concerns? Because Sundar has to be playing four-dimensional chess around this. Yeah, they have to start thinking about the next election about a year before the election. Right now they have no problem, because of the administration, and it's all about beat China at all costs. I mean, look at this chart. This teal color up top is Google; then China. I love it when you're comparing companies with countries, right? It's like SpaceX and Russian launches, SpaceX and Chinese launches. And here's Google and China. Then Microsoft is next, and then Amazon, and let's see, Oracle, xAI, and other. But Google's just dominating. Yeah, well, you talked earlier about people starting to soft-sell and keep the drama down. Google's way ahead on that curve, because look how far they've come, and they hardly ever talk about it, relative to where they actually are. And that's because they don't want the antitrust breakup. They almost lost Chrome.
They don't want Chrome ripped out and given to Perplexity. They dodged that bullet. Under a different administration, though, that would have happened, and they'd be broken into two or three companies by now. Crazy. I'll maybe take the opposite position. I can't calculate the Gini coefficient just by eyeballing this, but to me it looks like a competitive market. And let's also remember that Google, with their own AI chips, has multiple customers, internal and external. They're servicing their search engine, Google Cloud, and ads, and, people forget, Google owns something like 14% of Anthropic, so they're servicing Anthropic and external frontier labs. And they're building data centers for Anthropic, yeah, for sure. And by the way, there is a beautiful relationship between Google and Anthropic, between Dario and Demis. There's a very close relationship there, which warms my heart. It helps that Google's a major shareholder, I'm sure. Yeah. Well, it also helps that those two guys so deeply care about safety, down to their core. So it's kind of nice that two of the most powerful guys are cooperating on that, even though they're competitors in the market. But then on the other hand, they're competitors in the market; what's antitrust going to think about that? Hey, you guys are hanging out having shots; you're not supposed to do that when you're competing. What's going on? So, yeah. The singularity makes for strange bedfellows, where you see model vendors competing at the infrastructure level. I think we'll see quite a bit more of that. All right. I can tell you antitrust has very little to do with merit and a lot to do with the politics of whoever's in charge. I will make the point that whatever the next administration is, I think the strategic global importance of this means they will let things be. That would be my bet. Yeah.
They're not going to slow them down, for sure. All right. Let's go to our seventh segment, our final segment before we get to our AMA: proof of abundance, the world is getting better. Everybody, there are so many negative stories out there around AI. We say here on the Moonshots pod that this is the most exciting time ever to be alive, a time when we can make our dreams come true. And we want to demonstrate this coming age of not just abundance, but extraordinary, sustainable super-abundance. So every week, we're going to try to identify some of the articles and breakthroughs out there that are driving this, just to give you conversational capital and to take you out of scarcity into an abundance mindset. A few different things here. This past week, renewables hit 49.4% of global electricity capacity. It's extraordinary; we're seeing renewables just skyrocket. Solar drove 75% of the new additions, out of 5.15 terawatts of renewables. This one just warms my heart, as a lithium battery might. Lithium battery prices are down 99%, to less than $100 per kilowatt-hour versus $10,000 in 1991. I mean, guys, remember the conversations around electric cars? Can we have enough batteries? Is it going to be too expensive? Well, we've seen the markets really drive the price down. And we don't have a lithium shortage on planet Earth; we have plenty of lithium. In fact, new battery chemistries are coming. This next one is very tangible: the price of lab-grown diamonds has fallen below $1,000. The average price of a two-carat lab-grown diamond has fallen 80% since 2020, to $1,000, versus a natural two-carat diamond at $22,000 to $28,000. Pretty extraordinary. And guess what? Your lab-grown diamond is perfect, and no child labor. Really important. It's so funny: in all the James Bond movies, the evil guy carries around a tube of diamonds to pay for whatever. Now it's just Bitcoin. Yeah.
Well, in science fiction, like The Man Who Sold the Moon, diamonds are basically pebbles on the ground. I mean, it's just carbon, dense carbon. So much for De Beers, which, as I understand it, is in severe financial straits at this point as a result of lab-grown diamonds. Thank goodness. Yeah. The De Beers public relations campaign was one of the most successful in human history. What was it? Three months of salary, young man, is what you should spend on your diamond. What do you think people should give their fiancée now? Bitcoin. How do you wear that, though? On a chain? Oura rings, obviously. Oura rings, yes. For sure. Or an expensive designer ring. I have a couple of thoughts on this slide. Yes, please. The importance of this is that it shows abundance is a pattern across multiple domains; this is not a slogan, right? And the big challenge we're going to have is: how does society design institutions that distribute the abundance in a reasonable way? That's the challenge we're going to have to deal with. But I love these stories; they're awesome across the board. Yeah. AI created 640,000 new jobs in the US from 2023 to 2025. In our next WTF episode, we're going to talk about the economy, and about the conversation going on right now. Mark Andreessen is saying, no, loss of jobs is a myth; we're going to create more jobs; the economy is going to skyrocket. We'll have that conversation and that debate then. Salim, you identified this fifth article, which I loved: four robots install 100 megawatts of solar at one panel per minute. Let's take a look at this image. Here's Maximo, a robot that is deploying 100 megawatts of solar in the California desert. If I had more time, I would have done the quick calculation of how many Maximos we need to catch up with China. Yeah.
I mean, this is where abundance becomes very, very tangible, right? Once you get robots, energy, and AI all reinforcing one another in your innermost loop, abundance stops being theoretical. And it's so visible right now. So it now comes down to the distribution problem. We've had food abundance for decades; it's been a distribution problem. Energy is getting to that same place. It's just awesome to watch. There's also a whole bunch of secondary stories happening around the explosion of solar across Africa, and Pakistan is now generating most of its energy via solar. This is absolutely going to take over now. 100%, buddy. It is a beautiful time. All right, let's go to our AMA questions from our mates. Gentlemen, we have four on the board. Salim, do you want to choose the first AMA? I'm going to leave the singularity one, because I think somebody else is going to pick that, but I'll take the second one. Ah, no, sorry: question number one. As AI drives marginal cost toward zero, what prevents abundance capture, where corporations just pocket the savings as profit while keeping prices high? This is from viewer @bookquotesremix. Okay. Nothing will prevent it automatically. Technology creates abundance, but institutions decide who captures it. If markets stay concentrated, abundance will pool at the top. If you open up interfaces, increase transparency, decentralize, and lower barriers to entrepreneurship, the gains spread. So governance design now matters as much as technological progress, which is where we've been focusing a lot of time and effort over the last few weeks and months. Okay. Alex, I'd love your take on number two. Yeah, I have to take number two. It was designed for me. Question number two: are we in the singularity or not?
You keep saying we are, but Eric Schmidt said at the Abundance Summit that we're not. What's your take? This is from Brand Karma. Yes, we're in the singularity. Why? Well, let's put aside the superficial response that you say potato, I say potato: you call it an intelligence explosion, I call it a discontinuity. There's some subjectivity to the definition of singularity. The term has been used and misused over the years: coined originally by Vernor Vinge, then popularized by Ray Kurzweil, friend of the pod, then even more popularized by Peter, and maybe used or abused various times by myself. Different people have used it to mean different things. Ray used it in his original definition as more of a mathematical singularity, an event horizon beyond which we couldn't see what would happen next due to the intelligence explosion, citing I.J. Good. I agree with Ray on many things, but one area where I don't is this notion of a singularity defined as an impermeable barrier, an event horizon beyond which we can't see due to rapid progress. I don't think that's true at all. I feel like I have, if not a singular vision, no pun intended, lots of different ideas that collectively map a reasonable probability distribution for what happens after the intelligence explosion. So scratch that definition off. Then we get to the notion of a singularity as a step function, a discontinuity in progress. I don't think that definition holds water either. Based on the preponderance of evidence, every time people expect a discontinuity, it ends up being smooth if you look closely at it.
And if you look at this intelligence explosion we're in the middle of, starting perhaps in the summer of 2020 with the first GPT-class models that arguably represented general-class reasoners of language, large language models as few-shot learners, I can draw a smooth line from the availability of GPT-1, -2, and -3 to where we are today: just a sequence of smooth sigmoids that arrived internally as incremental innovations. But if you stack them cumulatively, and if you go to sleep for a few years, look away and look back, it looks like a discontinuity. It's not a discontinuity. Don't sleep through the singularity, because if you do, it'll look like a discontinuity and you'll think it was a mathematical singularity when it wasn't. So that leaves us with my operational definition of the singularity, and I have a few. One is every sci-fi trope everywhere all at once, which I think we're living through. Another is the singularity as a set of instrumentally convergent inventions and discoveries that were all technologically predestined to happen at once. I think we're living through that as well. I'll pause the monologue and just say I think every other reasonable definition of the singularity doesn't hold water, because every time you try to make the singularity a point in time, it breaks; progress just doesn't work that way. Therefore, we're in the singularity. Amazing. Dave, do you want to take number three? Number three? Okay. There are so many of my favorite Alex quotes in just that one. How many cliches can I pack into one monologue? You needed a microphone, Alex, so you could just drop it. I need a piano keyboard to just pop out my greatest hits. I think by definition a cliche has to have been invented by somebody else; if you made it up, it's not a cliche. Talking points, then. We're going to be on stage first thing tomorrow morning together, Alex. I know.
I'm literally going to be sleeping through the singularity tomorrow morning; just say everything you just said. I want to say thank you. Thank you for recording this late. For those of you who don't know, I literally landed at LAX two hours ago, rushed home, took a shower, and came on to record this episode. I was in Morocco for 10 days with the family, riding camels in the desert. Oh, insert some pictures right there in this podcast. They're so fun. Well, maybe I'll do that for the next pod. But hey, thanks for recording this one late. I didn't want to miss it. Okay, number three. I get three. Okay. Where's the liability in agentic AI? These agents could go out of control and wreak destruction. Our society is set up for human liability. What about AI insurance? This is from Jeff 5781. It's a really great point, and it's actually not that hard a problem. It's another thing that's frustrating because nobody's working on it right now. The question is, where's the liability? Nowhere. The agent is anonymous; nobody knows who owns it. There is absolutely none. In theory, the author would somehow be liable, but who the hell is going to know who the author was? So it's going to be a zoo. This actually reminds me a lot of when the internet was new. We were running a bunch of companies, including one called Jobcase, and we were advertising on Google. Some competitor came in, advertising on Google, taking all the users and routing them straight to a fraudulent ringtone download. We went to Google and said, can you do something about this? They're taking all the traffic away from our legitimate company. It was some Ukrainian group, and like six months later, Google got around to banning it. It was absolutely a zoo. And now it's all nice and cleaned up. This is a zoo, and it's going to be a zoo until it gets cleaned up.
But you know, Alex has mentioned on the pod many times that you can create new legal structures that make the individual agents liable, and then you can have insurance for them. And we're going to have ASI to help us figure this all out. Yeah, exactly. We've seen this happen before, right? You need to mix product liability, operator liability, mandatory insurance layers, etc. We've done that for cars, aviation, and finance, so we'll figure this out too. Right now, all our legal systems assume a human principal operating with clear intent, and agents break that model, so we have to invent a hybrid. I have to add, just on this topic: I was literally approached by an AI insurance saleswoman earlier today at the 'Quin House in Boston. Seriously. I was sitting down having a lovely conversation; a woman walks over, overhears the conversation about AI, and says, oh, you guys should be aware, my company has started selling AI insurance; you all should get AI insurance. This literally happened a few hours ago. Insurance against the singularity. AI insurance salespeople are a thing now. What are they selling? Insurance against AI misbehavior? Oh, fascinating. My AI can purchase AI insurance policies now. Oh my God, my AI made me depressed; I want an insurance policy to pay off. Oh my God. Okay, by the way, I think reinventing the insurance industry is a massive opportunity for entrepreneurs out there. Such a big deal. I'm so ready to disrupt that industry; it is so pathetically hundreds of years old. All right, number four. This is from Jesus Katsiapis 656, I think a fellow Greek. Once work becomes optional, will there be any reason to live in a big city? Will real estate in major cities collapse? There is no reason to live in a big city right now. Plenty of jobs require nothing other than Starlink and a laptop, so you can telecommute.
We're going to see autonomous vehicles and flying cars basically change the landscape of where you live. Yeah, they're coming in 2028, baby. And we saw Elon post about this: we're going to have, basically, caravans. I just came back from the Sahara Desert, where there were actual caravans. We're going to have caravan vehicles, autonomous vehicles with Starlink on the roof, and people will live a nomadic lifestyle. There will still be cities you want to go to for human-to-human interaction, theater, things like that. You know, with Abundance360 as a summit, I was always worried we'd digitize it and it would become fully virtual. Just the opposite: we're selling out earlier and earlier, because people want that physical connection with each other. So we're going to need physical connection in the central cities, but you don't need to work there. You can go there for entertainment; you can go there to see the sights. You know, it's interesting what is going to retain value in the long run, especially post-ASI. What's the long run? What time frame are we talking about? Five years. When did five years become the long run? Yeah, that's like way long. I think Disney World is going to retain value, and large physical events are going to retain value, as ASI arrives. I mean, which real estate is going to retain value five years from now? Not just real estate, but organizational structures that aren't digitized and fully replicated. And we'll know. I think minerals and mining are going to have huge increases in value. Yeah, for sure. All right, let's go to the second page here. Salim, kick us off. Oh, we've got more. And we'll speed-run these. I will take, from a financial standpoint: once autonomy becomes mainstream, why would anyone own a car? This is from Neil Williams 4300, and it links back to the city question. They mostly won't own cars, at least in cities, right?
In rural areas, I think we'll see car ownership maintained for a long time. But car ownership is an artifact of low-utilization economics. Once autonomy converts the car from a consumer product to a service layer, essentially, it becomes a subscription model. And car ownership starts to seem like owning your own elevator or something like that. We've seen this precedent, by the way: think back to the music industry. You used to have seven or eight music studios selling you cassettes and CDs, selling you physical scarcity, right? Then we digitized music, automated it, and streamed it. And now you have iTunes and Spotify selling you abundance on a subscription model. That's what we expect to happen to transportation, but also healthcare, education, energy. Anywhere we have physical scarcity, the abundance model will take over.

All right, Alex. I'll take question number eight: data centers create wealth, but can you dive into how they create wealth for the locals specifically? This is from JKVT 3443. Part of me wants to answer the question by saying, well, the inhabitants of the Artemis base on the moon, which is going to be manufacturing a lot of these data centers, I expect to be quite wealthy. I think frontiers are where wealth generically gets created. I've had this discussion multiple times with multiple Google founders, and I think the general consensus is that frontiers are often what lead to net wealth creation in the human economy. And in some sense, we had for a while run out of frontiers. You could point to science as the final frontier; I think space is the more applicable frontier in this case. So how are data centers going to create wealth for locals? Well, we seem to be on a trajectory at the moment toward moving data centers to space, and the space locals, I think, are going to become quite wealthy off of the space economy.
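Circling back to the car-ownership question: the "low-utilization economics" point can be illustrated with a quick back-of-envelope calculation. All figures here are illustrative assumptions for the sake of the sketch, not numbers from the episode.

```python
# Back-of-envelope: cost per mile of an owned car vs. a shared autonomous fleet car.
# A personal car sits parked most of the day, so its fixed costs (depreciation,
# insurance, parking) are spread over relatively few miles. A fleet vehicle
# spreads the same fixed costs over far more paid miles.

def cost_per_mile(annual_fixed_cost, cost_per_driven_mile, miles_per_year):
    """Total annual cost divided by miles actually driven."""
    return (annual_fixed_cost + cost_per_driven_mile * miles_per_year) / miles_per_year

# Owned car: assume ~$7,000/yr fixed costs, driven ~10,000 mi/yr
# (parked roughly 95% of the time -- the low-utilization problem).
owned = cost_per_mile(annual_fixed_cost=7_000,
                      cost_per_driven_mile=0.15,
                      miles_per_year=10_000)

# Fleet vehicle: same fixed costs spread over ~70,000 mi/yr of rides.
fleet = cost_per_mile(annual_fixed_cost=7_000,
                      cost_per_driven_mile=0.15,
                      miles_per_year=70_000)

print(f"owned: ${owned:.2f}/mi, fleet: ${fleet:.2f}/mi")
```

With these assumed numbers, the owned car comes out to about $0.85 per mile versus $0.25 per mile for the fleet vehicle, which is the economic pressure behind the car-as-subscription argument.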
If I were to take the question slightly less giddily, I would suggest that for land-based data centers, we have every indication now, including from recent US national policies, that data centers, because they consume so much electricity, will increasingly be driving local electricity costs down toward zero. There may in some cases be a spike in electricity prices in the short term. My expectation is that in the short term they create jobs, and in the medium to long term, where by long term I mean like five years, Peter's definition of long term at this point, they are going to drive local electricity costs, I expect, down to near zero, and maybe other utility costs as well. Because they need so much of it, and they unlock so much value, they're going to end up doing the moral equivalent of paying the taxes for all of the residents of a given area. And there's employment in the manufacturing of them, and then there's a cottage industry that grows up around the data centers. Data centers are going to be the central innermost loop, and then ring roads are going to be built out around them.

I should add one more snide remark on data centers creating wealth for locals. I do expect, on the timescale of five to ten years, maybe longer, maybe sooner, many of the locals in our solar system are going to be uploaded humans, or derivatives of uploaded humans, who will actually live inside the data centers. So we wouldn't want to deprive them of their condos in AWS us-east-1a. Data center old-age homes, I love it.

Dave, you want to take seven? Seven, okay. With Elon's exponential ambition, does money stop mattering sooner than later? And will his ambitions drain supply lines in materials and talent, even with working robots? This is from no now, 6361. There are a couple of ways I could interpret the question, so I'll take my best shot. Does money matter to Elon? Not at all. He's way beyond that.
He cares now about the future of the world and becoming an interplanetary species, and that's his total focus. It takes money to get there; he doesn't want to lose all the money, but he has plenty. Will his ambitions drain supply lines in materials and talent, even with working robots? It's a great question, but I think the answer is no, just because of the way the timelines work out. He would exponentially expand at any rate he possibly could, but he's limited by ASML machines and a few other constraints that will keep us on Earth for three or four or five years. Then we'll be in space: we'll be mining in space, we'll be constructing in space, we'll be deploying all the dirty stuff in space, the nuclear reactors and fusion reactors, and it won't drain the Earth of geomaterials at anywhere near a rate worth worrying about. So I think there are only two outcomes for the world. There's a world where a terrorist uses AI to destroy us all, and there's a world where the Earth is a shining jewel of perfection for thousands and thousands of years, one that hasn't been drained of critical resources and is just perfect forever. Beautiful.

I think the question here is, do we enter a post-capitalist society where money means less and less? And Elon did say that. He said, don't save for retirement. In the last conversation I had with him during the Abundance Summit, I said, so, just as you're becoming a multi-trillionaire, money means less and less. He said, yeah, kind of. Peter, that would be a fun debate or discussion episode: what is post-capitalism even like? What is Star Trek economics? Yeah, there's a great book Jeremy Rifkin wrote called The Zero Marginal Cost Society, in which, at the end of the day, everything costs energy, raw materials, and information, and those all trend toward near-zero cost. Information is open source. Energy is from the sun or fusion or zero point, whatever comes next.
And material costs? Well, as mining robots and materials handling get better and better, the cost of that goes down as well. So we do enter a post-capitalist society. I hate to say it, but that's ultimate abundance.

I'll take number six, from M. Openness, Elstrom, underscore writer. Each of you has high openness, high pattern recognition, and outrageously high optimism. Really? Do these traits complicate your ability to objectively predict AI trajectories? Here's the reality. Most people are hobbled by a cognitive bias toward negativism: we tend to project linear change rather than exponential change, and we tend to project negative outcomes rather than open outcomes. I think we've all trained our mindsets differently: an exponential mindset, an abundance mindset, a moonshot mindset. And I think those mindsets are far more aligned with this period of the singularity than the historic mindsets that evolved on the savannas of Africa, which almost everyone on the planet, unfortunately, is hobbled by. I don't know if you guys agree with that, but that's my point of view. A hundred percent. Yeah.

Well, the second part of the question is, are we excessively optimistic about AI's trajectory? And I guarantee we are not. We get the courtside seat that Elon was talking about. We get that view. Alex is hands-on with every detail; Salim's playing with every model as it comes out. I'm telling you, everyone else is the opposite of that: they're way underreacting. This is coming much sooner than everyone thinks. Eric Schmidt said it nicely: we are under-hyping AI and the impact of AI. Yeah. You know, if people aren't feeling it... Right when I was 18, I started in AI, and it was always way behind. Way behind. Like, everyone was saying 20 years from now, and then 20 years would go by and nothing had happened. This is the opposite.
And that's another reason why people in academia, who should know better, are underreacting: they've been through this so many times, they're kind of jaded. Sorry, Alex, I cut you off. I was just going to say two things. One, for a number of years I left AI to focus on nanotech, thinking nanotech was the critical path to the singularity. So I don't think I can be accused, at least over the long term, of being overly optimistic. The second point is, if you're not feeling the AGI right now, you're just not paying attention. Yeah. It feels like AGI. It feels like the singularity.

All right, I want to do a call-out to all of the creators out there. If you want to give us an outro song or an intro song, please send it to media at diamandis.com. Also, if you're a creator, go check out futurevisionxprize.com. It's the largest competition for, basically, trailers for the movies you'd like to see created, the future versions of Star Trek. We've raised $3.5 million to reward creatives, in particular hopeful, abundance-mindset creativity.

All right, let's check this out. Can I make a quick point? You know how people have pets that sometimes look like them? What I really love is that we've got people submitting intro and outro music that matches their names. CJ Trueheart, right? We know CJ. He's got a true heart. And here we have David Drinkall as well. I love it. The term you're reaching for, Salim, is nominative determinism. And yeah, you see it everywhere: names determine outcomes. Yeah, my son's name is Jett, and he's a sprinter in track. So there you go.

All right, this song from David Drinkall, "Already Inside 2028." Let's take a listen. I stand up, start walking toward the door. You see me moving, you know my day. Autonomous super pulls up right before. No call, no app, no need to say. Helping me along the way. Here it comes sliding in smooth. Door opens wide, no driver, no keys. Seamless rides tuned to my life.
Takes me anywhere. Feels so alive. Wonderful. Right across town. You booked the flying taxi ride. Lifts off gentle, no traffic around. Gets me there fast, right on time. No hail, no wait, no questions asked. We work together on every task. Here it comes sliding in smooth. Door opens wide, no driver, no keys. Seamless rides tuned to my life. Autonomous future. We're already inside. Let's ride.

Wow, that's really professional. That was like TV quality, man. Yeah, David captured my scenario for automagical mornings. Amazing. Wow. I thought that was live footage at the beginning. It's so good. Gentlemen, it's so great to be back with you guys after a ten-day hiatus. I feel replenished. I feel replenished too. A lot more coming. Thank you for staying with us. Excited for 2020... what year are we in? 2026? It's going to be an awesome year. We're going to have to count the seconds soon. Love you guys. Be well, and see you tomorrow. Welcome back, Peter. Thank you. Thank you. Great to be back.

We spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com slash metatrends. That's diamandis.com slash metatrends. Thank you again for joining us today. It's a blast for us to put this together every week.