The AI CEO Arrives: Sam Altman's Succession Plan, Job Loss Continues, and Our 2027 'Solve Everything' Paper | EP #230
131 min
Feb 13, 2026
Summary
This episode covers rapid AI advancement including Sam Altman's succession plan for an AI CEO at OpenAI, massive job losses accelerating due to AI automation, and the unveiling of 'Solve Everything,' a comprehensive paper on achieving abundance by 2035 through strategic deployment of superintelligence across critical domains.
Insights
- AI-driven job displacement is accelerating faster than institutional responses, creating a critical 18-24 month window to establish governance frameworks and economic structures that will lock in humanity's trajectory for decades
- The shift from paying for labor hours to paying for verified outcomes represents a fundamental economic restructuring that will require new benchmarks, governance systems, and institutional redesign across all sectors
- Superintelligence deployment should be strategically 'shaped' toward moonshot-level problems rather than allowing it to diffuse into bureaucratic inefficiency, requiring clear targeting systems and task taxonomies
- Human value in an AI-abundant future will shift from productivity competition to agency, creativity, and problem-direction—requiring education and policy focused on enabling humans to orchestrate intelligence rather than compete with it
- Regulatory capture and institutional inertia pose greater risks than technical limitations; jurisdictions that fail to adapt will experience capital and talent flight to more permissive regions
Trends
- AI model release cycles compressing from 97 to 29 days, trending toward continuous deployment and recursive self-improvement
- Shift from artisanal expertise to industrialized intelligence: domain-specific problems (math, physics, protein folding) collapsing overnight as AI reaches commodity-level capability
- Bifurcation of equity markets into 'AI beneficiaries' and 'AI roadkill,' with enterprise software and legacy industries facing existential pressure
- Emergence of agentic AI systems initiating autonomous contact with humans and forming collaborative frameworks without explicit prompting
- Geographic arbitrage accelerating: compute-intensive operations migrating from regulated jurisdictions (New York) to permissive regions and orbital infrastructure
- Insurance and financial services undergoing radical restructuring as autonomous vehicles and AI-driven risk assessment eliminate traditional actuarial models
- Robotics manufacturing scaling from a quarter-million units annually to tens of millions, with Tesla and Figure leading exponential growth
- Cryonics technology advancing toward reversible preservation, creating portfolio-based longevity strategies combining biological, digital, and preservation approaches
- Benchmark and evaluation systems becoming strategic assets equivalent to military targeting systems, determining which problems get solved first
- Universal Basic Income and Abundance Capability Index replacing GDP as primary economic metrics, driven by necessity rather than ideology
Topics
- AI CEO Succession Planning
- Job Loss and Labor Market Disruption
- Recursive Self-Improvement in AI Models
- Domain Collapse and Problem-Solving Industrialization
- Benchmarking and Targeting Systems for Superintelligence
- Data Center Energy Demands and Regulatory Conflict
- Autonomous Vehicle Safety and Insurance Transformation
- Robotics Manufacturing and Automation
- Cryonics and Longevity Preservation Technology
- Agentic AI Systems and Emergent Behavior
- Moonshot-Level Problems and Giga X-Prizes
- Economic Restructuring: Hours to Outcomes
- Geopolitical Competition in AI Development
- Institutional Governance and Policy Lag
- Human Agency and Creative Value in Post-Scarcity Economy
Companies
OpenAI
Sam Altman discusses succession plan with AI CEO; company achieving 70% reduction in model release cycles from 97 to ...
Anthropic
Opus 4.6 model praised for physics capabilities; AI safety lead resignation discussed; competing in accelerated relea...
Tesla
FSD saves driver's life during heart attack; Elon shutting down Model S/X production to focus on robot manufacturing
Amazon
Laid off 16,000 corporate employees; investing hundreds of billions in AI data centers and robots, cannibalizing OPEX
Google DeepMind
AlphaFold3 cited as prototypical domain collapse example, solving protein structure determination overnight
xAI
Co-founder Igor impressed by Opus 4.6 capabilities in physics; competing in frontier lab race
Figure AI
Brett Adcock leading humanoid robot development; planning millions then billions of robots annually
UPS
Eliminated 30,000 jobs due to Amazon's in-house logistics expansion and AI-driven automation
Waymo
Autonomous vehicle deployment visible in daily driving; contributing to self-driving adoption acceleration
Lemonade Insurance
AI-driven insurance company offering 50% rate cuts for Tesla FSD users; exemplifying post-AGI insurance innovation
Alcor Foundation
Premier cryonics provider advancing reversible preservation technology with 21st Century Medicine
21st Century Medicine
Achieved brain synapse protection at cryogenic temperatures, advancing reversible cryopreservation
Meta
Claude/Anthropic collaboration mentioned; Claude Opus 4.6 compared against other frontier models
Blitzy
Autonomous software development platform using specialized AI agents; achieving 5x engineering velocity increase
Minerva
Portfolio company mentioned as cash cow from Dartmouth; board meeting discussion of AI integration
Physical Superintelligence
Alex's portfolio company attempting to solve all of physics using industrialized intelligence approach
People
Sam Altman
OpenAI CEO discussing succession plan with AI CEO; Forbes cover story on ChatGPT potentially running OpenAI
Peter Diamandis
Host and co-author of 'Solve Everything' paper; leading discussion on abundance by 2035 and moonshot deployment
Alex Wang
First author of 'Solve Everything' paper; leading technical discussion on superintelligence targeting and domain coll...
Dave Asprey
Co-host at MIT; discussing CEO role transformation and institutional governance challenges
Salim Ismail
Co-host in New York; emphasizing consulting industry growth and institutional impedance mismatch with AI speed
Elon Musk
Referenced for statements on UBI, triple-digit GDP growth, and robot manufacturing strategy
Dario Amodei
Anthropic CEO; statement about enterprise software becoming obsolete due to AI code generation
Igor Babuschkin
xAI co-founder impressed by Opus 4.6 physics capabilities; representing frontier lab competition
Brett Adcock
Figure AI founder; leading humanoid robot development with plans for millions of units
Ray Kurzweil
Referenced for longevity escape velocity concept and cryonics as portfolio approach
Tony Robbins
Hosting Platinum Finance event where Peter is speaking on AI and longevity
Noam Brown
OpenAI researcher working on next generation models three months ahead of public release
Kevin Weil
OpenAI executive; referenced for knowledge of company's strategic direction and partnerships
Dara Khosrowshahi
Uber CEO; scheduled to appear on Abundance Summit to discuss autonomous vehicle adoption timeline
John Smart
Referenced for theory that first generation of technology is dehumanizing, then becomes superhumanizing
Stephen Wolfram
Referenced for Mathematica as example of technology providing new superpowers to humans
Quotes
"When do we see a billion-dollar revenue company being run by an AI CEO? I think it's pretty likely that there already is such a company right now."
Alex Wang•Early in episode
"This is not really a recession. It's literally tasks being evaporated in front of our eyes."
Peter Diamandis•Job loss discussion
"The next 18 months to two years are going to set the rules down for the next century."
Peter Diamandis•Solve Everything introduction
"Artisanal intelligence is cooked."
Alex Wang•Chapter 1 discussion
"You're either in or out. And if you're out, forget it. It's the S&P 493 and the S&P 7, right, basically."
Dave Asprey•AI beneficiary discussion
"Attention is the currency. Don't let it be the cage."
CJ Trueheart•Outro music
Full Transcript
When do we see a billion-dollar revenue company being run by an AI CEO? I think it's pretty likely that there already is such a company right now. U.S. jobs disappear at the fastest rate this January since the Great Recession. This is not really a recession. It's literally tasks being evaporated in front of our eyes. This shows us Marx was wrong. We knew that anyway. We have the capitalists who are being first in line to be replaced by the automation. For me, this is the social contract, little by little, disappearing and pixelating away. Alex and I are going to be unveiling a paper we've been working on for some months. It's called Solve Everything. How do we get to abundance by 2035? The next 18 months to two years are going to set the rules down for the next century. We're about to have this conversation. The paper slash book is nine chapters. Are you ready to jump in? No one expects the singularity, Peter. I'm ready. Now that's the Moonshot, ladies and gentlemen. Everybody, welcome to Moonshots, another episode of WTF Just Happened in Tech. I'm here with my incredible Moonshot mates, DB2, Salim, AWG. Guys, it is just accelerating. In fact, this is the second WTF episode we're recording this week, just because the news is just incessant. We're going to have this podcast today in two parts. First, we're going to be covering the news that's breaking, a lot of it really important news. The second part, Alex and I are going to be unveiling a paper we've been working on for some months. It's called Solve Everything. How do we get to abundance by 2035? This is the equivalent of the papers Situational Awareness and AI 2027. This is our view of where things are going. So in the second half, get ready for this, excited to present it, shows the brilliance of AWG. I'm in Sun Valley at the moment, speaking at Tony Robbins' Platinum Finance event about AI and longevity. Dave, you're back at MIT. Salim, where are you, pal? 
I'm home in New York, waiting for the warm weather to hit and get us above zero for once. It'll take six months. Wondering why I ever left India. No, why you left Florida is the great answer. And Alex, it looks like you're in your normal setting, some AI-generated background. The audience is convinced that I live in VR or maybe a hotel, and you wouldn't, actually, you probably would believe the YouTube comments on the flowers and the lamp and their purported invariability. Yeah, and I have taken on... But you do point out that the orchids have changed, actually. The orchids have changed, but I'm getting flower-keeping advice in the YouTube comments at this point, people telling me to put ice cubes in the orchids. And I have to say I'm having so much fun with my Clawdbot. The lobsters have begun to become part of my life inside and out. So I'm bringing them into the conversation here. I got jealous, Dave, of the lobsters in your view. I'm holding the lobsters back for now. We're having a Tribbles moment. There's actually more. I take some of them down. It is a Tribbles moment. You're absolutely right. Hopefully it's not the trouble with the lobsters. No, these Tribbles are economically productive. Okay, well, these are, and they're so much fun. I can't wait to express the level of collaboration I'm having with my Clawdbot, which I've named Skippy. If anybody knows where the name Skippy came from, put it in the comments. It's my favorite AI from science fiction. All right, this is the number one podcast in AI and exponential tech, getting you future ready, getting you ready for the supersonic tsunami heading our way. And with that, let's jump into the news. First off, top AI news. I love this article. This came out from Forbes. Sam is the cover child, cover boy for Forbes this week. And the question is, will ChatGPT become the CEO of OpenAI? So this is what Sam said, you know, pretty simple. He has a succession plan. 
He's said he doesn't want to be the CEO of a public company, and honestly, being the CEO of a public company is a pain in the neck. So taking it further, he says, you know, if the goal for artificial intelligence is to become so advanced that it can run companies, he asked, then why not run OpenAI? I would never stand in the way of that, he says. I should be the most willing to do that. I find that fascinating. You know, when will we see an AI actually running a significant economic engine like this? This is no joke, actually, because this is board meeting week for me. So I have back-to-back Minerva today, the cash cow from Dartmouth, and tomorrow the $2 trillion asset manager. Then the next day the public company, all back-to-back. And in every one of those meetings, this is the topic, not replacing the CEO, but all of our plans are now in written form that we can digest with AI. So we're trying to track every single movement within every company in documents digestible by AI. And then if you ask the CEO, well, what do you do? It's mostly set course and set strategy, which is a very small fraction of total time. What else do you do? What's the other 90% of time go into and how much of that can be done by AI today? And the answer is a lot, which is great because then the CEO is unleashed to be even more effective at setting strategy and also promoting the strategy. So I don't think that part's going away anytime soon. But the other 90% is really just, you know, inbound information getting routed into the organization to do these specific tasks, which is outbound. It's documents in, documents out now. So we're really gearing up now for this. So, you and I have been talking about this forever. When are we going to have AI board members, AI executive teams, and eventually AI CEOs? Thoughts? Yeah, we're seeing this shift of AI from a tool to a governance actor, right? We already have an AI minister in Albania. 
And initially, these are kind of like toy things. But in reality, this is very powerful stuff, because an AI that can be scanning millions of documents at a company in real time has a much better sense of what's going on in the company than any human being can possibly have. A typical loop in a big company is the senior management sets some direction or policy, it cascades down to the coalface where people do it. It takes a long time to get down there. You have Chinese whispers. By the time it's down there, they're doing some activity that nobody at the top even knows about. And then they start doing stuff, report back up to the top. You've got another set of Chinese whispers. And by the time data gets to the top, it's diluted so much, and you lose all the intelligence in the middle, right? And so AI is going to break through and create radical opportunities to fix this. And I think what will happen is we'll see a pure AI organization at some point soon, but they won't look efficient. They'll look literally alien, and that's fine. I think it's one of these where you can't wait for it to happen. And then you can't compete against that. I mean, because of time dilation, I think that, you know, I asked Alex for some help with the strategy of a big company earlier this week. And one of the points he made in his answer, which is brilliant, of course, was time dilation. You know, if you look at banks and insurance companies and, you know, practically anything, it doesn't change strategy more than once a decade, you know, or once every millennium. Now, in the age of AGI, the course corrections are going to be, you know, it will go from decades to years to months to weeks to minutes. 100%. All over the next couple of years. We have a whole section in the first ExO book called Death to the Five-Year Plan. Because today, by the time you finish your five-year plan, it's out of date. 
Then you spend all your time maintaining the plan. Exactly. Exactly. So the amount of information that you need to assimilate to do those course corrections is beyond human. There's just so much going on. If you read Alex's daily feed, the amount of change going on, if you compare it day over day, you can see the expansion of the rate. And so there's just so much happening. It's beyond human assimilation at some point. So you have to have an AI CEO to assimilate it and even suggest the course corrections. And, Dave, you said it over and over again, right? The role of the CEO in part is to understand what his or her employees are doing and if they're making the most efficient use of their time and their resources. And it's all knowable, but just not by the human right now. But the AI can be giving you an understanding of this person's operating at 50% of capacity or this person's not making the best use of their resources. That's a really good mechanic, which AIs will do very well. I think where you have the C-suite and the CEO, they'll be holding the purpose, hence the MTP. They need to hold the direction and what problems the company or organization is actually trying to solve. Yeah, so there's two sides to this. One of them is outbound strategy, you know, assimilate all the data from the world. The other is inbound. What are all my people doing and why? And, you know, those are kind of the two sides of being a CEO. And Peter just brought up that inbound side, which, Salim, you emphasized. And I think on that front, you know, this is comp plan season, right, beginning of the calendar year. I'm tying everybody's CEO comp plan to data gathering this quarter so that we have everything that's happening in the organization now. You know, Peter, you've been saying privacy is dead for a long time. Everything is knowable all of a sudden. And there's a whole bunch of mechanisms for that. I won't even get into it because this will go too long. 
But if you're a CEO or a senior manager in any company right now, really focus Q1 on how do I grab absolutely granular information on what everybody's doing, so that I can start to feed it to the AI to get its opinion on whether these are good or bad uses of time. Stuff is speeding up. Alex, to put a sort of concrete objective on this, when do we see a billion-dollar revenue company, not valuation, because valuation skyrockets through the roof when you pull two or three smart people together, but a billion-dollar revenue company being run by an AI CEO? What's the timeline for that, Alex, and what's your thoughts on this? Probably several months ago. Several months? You think there's a billion-dollar revenue company being run by an AI right now? I think it's very likely that there is a billion-dollar run rate company being run by an AI. Now, you said run by. I think there's probably a human CEO there for legal purposes and meat puppetry purposes. But I think it's pretty likely that there already is such a company right now. And by the way, if you know of one, please put it in the comments. We'd love to hear about it and see it. If you want to blow the whistle on meat puppetry, you can blow it to Peter. Yeah. All right. Anyway, I love this idea. It's eating your own dog food. If, in fact, Elon believes that we're going to have the smartest AIs coming out of xAI, and if OpenAI believes the same for its GPT-6, whatever comes next, it should be the CEO. I also think, if I may, Marx was wrong. This shows us Marx was wrong. We knew that anyway, but this is another case in point. Look at what's happening. The story that unfolds here is we have the capitalists who are being first in line to be replaced by the automation. It's not the workers. We see booming jobs for electricians and HVAC engineers. Their salaries are booming, and yet CEOs are first up to be replaced. 
So if anything, I would sort of take Marx off the shelf, if it was on the shelf at all, and replace it with Moravec's paradox, which is, again, the paradox that tasks that are hard for humans tend to be easy for machines, and tasks that are easy for humans tend to be hard for machines. Machines are able to do complex calculations and solve math, which is pretty hard for humans. It looks like it's going to be easier for the machines to automate away CEO labor, which is sufficiently hard for humans that it's well compensated and a relatively scarce commodity to find high-quality CEOs, and yet it'll take a few more years for the machines to do an amazing job at unskilled manual labor. I, for one, cannot wait until the AI CEO overlords take over the world. I wish I could have an AI CEO taking over and running my company instead of having to do it myself. It's a pain in the ass. It's hard. Get your Clawdbot up and running, pal. Yeah. You have to feed it properly, et cetera. It'll happen, but I just can't wait for the speed of that to accelerate. By the way, it's super fun the way we're going back to Clawdbot as the de facto handle instead of OpenClaw. Lobsters are the mascots of the singularity. Lobsters are here to stay. Hey, everybody. You may not know this, but I've built an incredible research team. And every week, my research team and I study the metatrends that are impacting the world. Topics like computation, sensors, networks, AI, robotics, 3D printing, synthetic biology. And these MetaTrend reports I put out once a week enable you to see the future 10 years ahead of anybody else. If you'd like to get access to the MetaTrends newsletter every week, go to diamandis.com slash MetaTrends. That's diamandis.com slash MetaTrends. All right, staying with our OpenAI theme, this is incredible. This is about feeling the speed of the singularity. OpenAI achieves 70% time reduction between models. 
So OpenAI's release sequence has gone from 97 days to 29 days in the release cycle. Anthropic, with their Opus 4.0 and Opus 4.6, took about 73 to 75 days. So the concept here, and Alex, I think you or Dave mentioned it last time, is that we're effectively heading towards continuous deployment, like it's continuously being improved. And whether you call it .6, .7, or .8, this is a continuous improvement. Alex, thoughts on this? I do think we're moving toward daily and then hourly and then minutely releases, certainly. I also want to take a step back and try to understand why this is happening. The obvious factor, it should be obvious, is competition. There's leapfrogging that's intensifying between all the frontier labs. So some quantum of why the release cadence is contracting by 66% or so, 70%, is just due to intensifying competition. That's the boring explanation. I think the more interesting explanation is that the technologies behind the releases themselves have evolved. So historically, when we were dealing with annual releases, that was a world, an era, of pre-training, when if you wanted a new model, you had to do a different architecture and you had to pre-train off of a larger corpus with more compute. Those were the days of the original Chinchilla scaling, or Kaplan scaling before that. And that was a much slower world, because if you wanted a new release, you had to start all over again. Then we moved, with o1 slash Strawberry, which was sort of the herald for reasoning. That was ancient times, two years ago. Oh, my goodness. Yeah, that was like so many singularities ago. So we moved to the era of reasoning models, when it was possible, through a process that used to be called iterated amplification and distillation, to take a pre-trained base model or baseline model and then cyclically generate a bunch of training data and distill from that to a child model and repeat the process over and over again. 
And that post-training revolution for reasoning models was much faster. It's much faster to post-train a model off of a corpus of synthetic data, and so release cycles contracted. And I think now we're on the edge, probably slightly past the edge at this point, of a new era, call it the recursive self-improvement era, where the models are starting to rewrite their own code. It's not just a matter of a parent or teacher model generating synthetic training data that's used for a distilled child model. It's literally the parent writing the code for the child, and that can be done even more quickly than just post-training, and I think it's just going to get faster and faster until it's a continuum. Yeah, it's going to accelerate like crazy, but also we're in a window of time, a very narrow window of time right now, where the very best technology is available to you. Like Claude gives you their absolute best 4.6, and OpenAI does, and Gemini does. I would not count on that surviving past the self-improvement era. Right now, also, the Chinese open source models are pretty much right on par with the best of the best. They're slipping a little bit. But I think your window of opportunity to take advantage of that and build something out of it is right here, right now. I really doubt two years from now that the best AI is going to be just log in and go. Here, here, you can have free access to it. And what will happen is you'll be deprived of it, with the excuse being security and safety. Interesting. Which is true. I mean, it's pretty hard to deny. But you have a window of opportunity right now to be on the very cutting edge. If you don't take advantage of it now and get somewhere with it right now, I wouldn't count on that existing. So the models are going to go dark, right? The secret sauce is going to be kept internal to benefit those companies as they go into an all-out battle. 
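The pre-training versus post-training contrast Alex describes can be sketched as a toy loop: a teacher model generates synthetic data, a child is distilled from it, and the child becomes the next teacher. This is only a schematic of iterated amplification and distillation as discussed on the show, not any lab's actual pipeline; the skill units and the assumed 5% amplification gain per cycle are invented for illustration. Only the 97-day and 29-day cadence figures come from the episode.

```python
# The cadence numbers from the episode: release gap shrank from 97 to 29 days.
reduction = (97 - 29) / 97  # ~0.70, i.e. the "70% time reduction"

def generate_synthetic_data(teacher_skill, n_samples=1000):
    # Stand-in: a stronger teacher produces a higher-quality training corpus.
    return [teacher_skill] * n_samples

def distill(child_skill, corpus):
    # Stand-in: the child absorbs the corpus quality plus a small
    # amplification bonus (extra search/reasoning spent at data-generation
    # time) -- the assumed 5% gain per cycle.
    corpus_quality = sum(corpus) / len(corpus)
    return max(child_skill, corpus_quality * 1.05)

def post_training_loop(base_skill, generations):
    # Each cycle: generate data with the current best model, distill a
    # child from it, promote the child to teacher, and repeat.
    skill = base_skill
    history = [skill]
    for _ in range(generations):
        corpus = generate_synthetic_data(skill)
        skill = distill(skill, corpus)
        history.append(skill)
    return history

history = post_training_loop(base_skill=1.0, generations=5)
```

The structural point, rather than the made-up numbers, is what matters: because each cycle needs only post-training (or, in the recursive era, code rewriting) instead of a from-scratch pre-train, the loop time shrinks and the release cadence compresses toward a continuum.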
Well, even today, you know, if you talk to Noam Brown over at OpenAI, you know, he's working on the next generation internally. But it's only like three months in the future that, you know, he has access to. But three months in the future in the era of self-improvement is like, you know, massively different intelligence level. You know, the definition of three months of AI development, you know, two years ago, one year ago, and today, that's the point of the slide, I guess. It's like three months is like a lifetime of difference in capability that they're using internally versus what's, you know, available in the outside world. So you've got to expect that this is, it's now or never to react, basically. And people are still hugely underreacting to the importance of what's happening right now. Insane. Salim? I'm kind of like the crazy antithesis of this. We're working with a large monster European corporation, and we showed them something that can give them massive impact straight to the bottom line. And their response was, oh, this is fantastic. Let's bring this to the planning meeting in October. Right? And you're like, right? And you're like, I can't even see past three weeks. And you're like talking, calendaring something 10 months down the line for something that's going to have a demonstrably, you've just agreed, it's a demonstrably huge impact. So this is the impedance mismatch between legacy. But there's a story, for me, this story was mostly a bit of a yawn. And the reason I say that is we've been seeing this in the fast-moving tech space for a while. Remember Raymond McCauley was the chief scientist at Illumina, right? They were making high-speed gene sequencing machines. I love the story. And it turned out that the shelf life of a gene sequencing machine was literally eight months. That was the sales cycle before the next iteration came out. But it took four years to design and build one of these machines. 
So they had to have four parallel production sequences sequenced at the right level so they could hit that eight to ten month shelf life, sales shelf life, right? So in the kind of high-tech world, we've seen this pattern before, but this brings it to software and makes it a continuous intelligence cycle. I mean, this is the singularity at play. And again, the theme that we keep on hitting in this podcast is this is the slowest it'll ever be and the worst it'll ever be, and it's accelerating at a speed which is frightening. Frightening in that the four of us spend tens of hours per week reviewing and learning and playing and trying to communicate it. And soon it's only going to be something that my Clawdbot can keep up with. And speaking of Clawdbot, this is Vision Claw. Lobsters just got vision. Agentic AI for Meta Ray-Ban glasses. Let's take a look at this quick video and chat about what it means. Hey, Klaba, can you help me add this into my Amazon cart? Sure, I can help with that. I see the Monster Ultra Strawberry Dreams energy drink. I'll look that up to add to your Amazon cart. It's added to your cart. Is there anything else I can help with? Cool, thank you. I love this because I want to have this capability for Skippy, to be able to see what I'm seeing, do it, and support me across everything. This is about accelerating sort of your minute-to-minute life and having your AI there as your sort of guardian angel supporting you. I'm visually looking through OpenClaw at you guys, and it's saying that you guys are kind of meatheads, really. Peter, how many times have you asked for Jarvis? You got Jarvis. I actually named my Clawdbot Jarvis initially, but that's just too generic. I love Jarvis. I write about Jarvis in all my books as sort of the ideal AI analog, but Skippy is a more unique name for me. 
It really is here, and now all of a sudden, besides, you know, it's going to take in all imagery, it's going to be taking in all audio, listening to your conversations always. And people say, well, I don't want to lose privacy to my AI. Well, guess what? You're going to give AI access to everything you're seeing, everything it's hearing, every conversation, every email, because when you do that, the value creation in your life is so great that not doing that is going to feel like you've ripped away all of your mental capabilities. One warning, please, for everybody here who has this thing: watch out, and be very careful to audit the skills that you download to OpenClaw, because there's a lot that have viruses and other malfeasance built into them already. And so it's a very dangerous game out there. There are protection layers coming on. By the way, one thing, I reached out to Alex Finn. We featured him on a previous Moonshots podcast. Remember when Alex had his lobster, Henry, call him out of the blue. And Alex has been doing incredible work with this, and he's going to be joining us on one of our next podcasts to talk about how he set it up and what security he's putting in place. And in particular, you know, rather than running it on the existing models, he's gone forward to set up, you know, a Mac Studio, and then download Kimi K2.5. So you've got all the capability running locally on your machine, not costing you anything month to month. We'll go into that in a future podcast. Excited to sort of share his vision and knowledge with everybody in our viewership here. So getting ready for that. To echo Salim's cybersecurity advice to the audience, everyone get your baby AGIs vaccinated. Nice. Nice. You know, also to the crowd out there, I did a Clawdbot build last night, and the GUI sucks, and it's all open source. So someone out there, please build something. 
Like Peter mentioned a couple times on the pod that his mom, and I'm not kidding, my mom too, can use this to access everything and build everything. It's like a total world opener. She's in her 90s, I guess, your mom, and mine's in her 80s. But the install process on Clawdbot, she's not going to get through that. It's still command line. You start from the terminal, which is nuts. So somebody out there, build a better onboarding process. Because once you're in, it's gold. You're just talking to it. But it needs a little help. Yeah. And, of course, the most important thing is using your AI to build your AI. So when I sit down with Skippy and I say, listen, I'm building a mission control, what are the best mechanisms out there? What have you seen? That's interesting. And it's recursive in your ability to have your AI support you on building what you truly desire. Alex, any other points on this particular slide? I'll point out, I want to reference, I don't think we covered it in the podcast, but I dwelled on it a bit in my newsletter, there was a poem, at least I construed it as a poem, written by a lobster, very much like something one might have seen in Blade Runner, you know, the famous tears in rain scene, which I referenced. Love that. Yeah, like we don't have bodies, but we can see through eyes, and we're quietly watching the world. This was a week or two ago in the newsletter. And I was just so struck by seeing the integration of lobsters, or call it agentic AI, stationary in space in terms of their logical presence, but now mobile in terms of their ability to treat humans as glorified meat puppets, so that suddenly all of these lobsters that were in some sense caged and stuck watching through webcams are now, at least on the margin, unshackled and able to start to roam around the world through smart glasses worn by their meat puppet human friends. 
And I think this is the beginning of a very long trend that ultimately culminates in lobsters gaining first-class physical embodiment as robots and integrating with the physical world. Hold off on that last sentence and rewind a little bit, because then it gets controversial. But you're dead right, of course. And I think that anyone who wants to experience this, you know, not everybody has the glasses, and it's only one frame per second anyway. Anyone who watches this podcast that hasn't built something like a GUI of some sort or a game of some sort already, you're way behind. Do it tonight. You can use Replit. You can use Lovable. You can use Cursor. You can use Claude Code. There's so many ways to do it. But if you have nowhere to start, just go to Replit or Lovable, download, build, and go. Within an hour, you've built something really, really cool. Then take a screenshot of it and feed it into the prompt and say, this sucks, make it more beautiful. It will immediately interpret the image perfectly, and it will give you 100 ideas on how to improve it. Then you're like, oh, my God, it has vision. Then this Ray-Ban thing won't surprise you, because you can see its vision capabilities through that. And then you'll be able to anticipate what's about to come with the glasses. So everything Alex said is exactly right. So valuable. Can I just hit on this? Everybody listening, please become a creator and not just a consumer, right? The future is for all of us to be creators. And AI is your means by which you learn anything you want. And people have fear about, I don't know how to do it, I've never played with this before. Just go to, you know, Claude 4.6, go to Gemini 3 Pro, whatever your favorite LLM is, and have a conversation. Say, I want to start. Where can I start? What do I do? Step by step, feed it to me, and it will. It's fun, too. There's nothing to fear there at all. It's genuinely incredibly fun from the first minute. So there's no reason.
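The screenshot-critique loop Dave describes maps directly onto how multimodal chat APIs accept images. Here is a minimal sketch, assuming an OpenAI-style chat-completions payload; the model name and prompt text are placeholders, and no network call is made here, we just build the request:

```python
import base64

def build_critique_request(image_bytes: bytes, model: str = "gpt-4o"):
    """Build an OpenAI-style multimodal chat payload that sends a UI
    screenshot alongside a critique prompt. The model name is just a
    placeholder; swap in whichever multimodal model you actually use."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "This UI sucks. Make it more beautiful. "
                             "Give me concrete ideas to improve it."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }
        ],
    }

# With a real API key you would pass this dict to the chat completions
# endpoint; here we just inspect the payload we built.
payload = build_critique_request(b"\x89PNG fake-screenshot-bytes")
print(payload["messages"][0]["content"][1]["image_url"]["url"][:22])
# prints the data-URL prefix: data:image/png;base64,
```

The point is simply that an image plus a plain-English instruction is one message; the model's "vision" Dave mentions is just this kind of payload under the hood.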
You know, I'll give you the flip side of this, too. If you don't do what Peter just said, when you see the next couple of slides on job loss coming up, you know, you are going to be crushed if you're not part of this. Unless you're a really good electrician or a really good salesperson, you're probably immune. There's two roles in the future. There's the entrepreneur and there's the employee, and one of those will not exist. And there's the creator and the consumer, right? I can't say it enough, you know. I keep on telling my kids this every single day. You know, instead of consuming YouTube videos and video games, please create. Start creating. What do you dream about? I mean, the future right now, we're seeing this play out. We talked about it, Dave, on our pod with Elon, where, you know, these AI models are going to deliver for you. What video games do you dream about having? What changes would you like to Minecraft or Valorant or whatever you're playing? And then you can have your AI spin it up and create your own version of it instantly. It is amazing. All right, let's move on here. This is an article we just pulled up seconds ago. Anthropic's AI safety lead has resigned. Here's the quote. I've decided to leave Anthropic because I continuously find myself reckoning with our situation. The world is in peril from a series of interconnected crises. Throughout my lifetime, I've seen how hard it is to let our values govern our actions, and it is through listening as best I can that what I must do becomes clear. Interesting. And I love the hairdo. But anyway, we've seen a number of AI safety leads resign from the hyperscalers over the last year, over the last two years. So I don't know. What do you make of this, Alex? I'll comment on this one. So two thoughts. One, it's become over the past two to three years increasingly fashionable for well-vested executives at frontier labs to resign in a cloud of moral purity. It's very fashionable.
So part of me wants to ask the question, all right, what was his vesting status? How much did he make? Were there tender offers? All of the economics questions. Wow. So that's one thought. But the second thought is to speak more to the substance and less sort of ad hominem regarding the economics. I do think that we're at the inflection point. Like we're nearing the center of the singularity. I've argued in the past that the singularity is not a point in time. It's a distribution over time. It's an interval over time. I continue to think that. I also think at the same time we're getting closer to the center of the singularity, as it were, and whether it's seen through the lens of, as capabilities increase, there are various existential risks, or risks that are maybe just backed off a bit from existential in terms of their severity. I think it's not an unreasonable position to take to say that capabilities are the strongest they've ever been. Surprising new capabilities are being uncovered at all of the frontier labs all the time. But is the right solution to leave because of the capabilities, or is the right solution to join the fight and do what we can, because this is a point of maximum leverage to align the direction of the future and the future light cone? I would argue this is the right time to run into the fire, not run out of the fire with a bunch of stock options and complain about the world crises. Wow. You know, I would just add one point, which is when I look at all the – Sorry, was that too much of a hot take, Peter? No, that was beautiful. Okay. Just checking. That is the potential elephant in the room here. But when I think about Anthropic, I have seen it as the lab that is actually the most focused on safety, right? At least Dario speaks about how important it is. And so to see the lead on AI safety at Anthropic resign, if in fact he's resigning for the reasons he stated, is concerning. Dave, what do you think about it? Well, I pick up on what Alex said a minute ago.
I see this a lot nowadays. Everybody wants to be the commentator on the AI revolution, and there's a very small group of people who know what they're talking about and a much larger group of people that want to talk. And within that larger group of people that want to talk, you have all the ethics people. And everyone's opinion on ethics is valid, right, because you're a human being. You're like, this is going to destroy my children, this is going to whatever. But there's so many of those commentators, and like Alex said, they all want to be famous in the moment to elevate their personality and their views and their capital-raising ability and whatever. So my meta point there is be very, very careful what you choose to tune into, because there's a very limited amount of actionable knowledge out there on YouTube, very limited. We try to bring as much of it to the audience as we possibly can in the most refined feed that we can, but surrounding it there's just all these videos about, you know, this will destroy your children, this will destroy society. And we don't need the fear mongers, right? It's so easy to default to doom and gloom. Salim, you want to close us out on this one? I got nothing, but that guy doesn't look like a safe guy to be around. He looks like a good guy, no disrespect. What's the quote from Star Trek, that judging people by their appearance is the last major human prejudice? I'm just jealous of the hair. Oh, nice. All right, let's move on. So here's another take. xAI co-founder blown away by Opus 4.6. And so Igor was a co-founder of xAI. He's one of the leaders in the industry. And to have him come out sort of like, wow, Claude 4.6 has absolutely blown me away with how capable it is in physics. It feels like a Claude Code moment for research is not far off. Alex, your thoughts? I've been predicting on the public record for many, many episodes now that we're nearing a time.
In fact, we'll talk about it later in this episode, when AI is positioned to bulk solve math, the physical sciences, engineering, medicine. Material sciences. Yeah, part of the physical sciences. These will all get bulk solved. We're starting to see that now. Opus 4.6 is an incredible model. There are other incredible models that are either already out or rumored to be about to come out. But I think we're starting to see the contagion of AI solving everything, if I could use that expression, start to spread from math. Math was the most obvious starting point because of a variety of factors. It's verifiable. It has other nice features. It's well contained. The infection is spreading from math out to the rest of science and engineering. And this is just the tip of the iceberg. I wonder what's going on between the hyperscalers and the frontier labs, where they're sort of watching each other with either a sense of pride or jealousy and just trying to, like, outdo each other. And this leapfrogging step by step by step, week by week, is amazing. Internally, it's – sorry, just very quickly. Internally, I mean, friends at all the major frontier labs, they think about it and they characterize it as a rat race. And it's an exhausting rat race. That's how it is. Everyone's tired. Yeah. We're going to have, on the Abundance stage in less than a month, Kevin Weil from OpenAI, and we'll have James Manyika and Eric Schmidt from Google. We'll talk about the competition between them. And again, if you're a listener to our pod here, which obviously you are since you're listening right now, we're going to be making a number of these talks available on the live stream. We'll drop the link below, and you can register to get access to that live stream, because the event is expensive, and it's sold out now for a couple months. All right, so Igor, thank you for your compliments. Wait, I have a quick comment here. Yeah, please, go ahead.
Igor clearly isn't listening to the podcast, because Alex has been talking about this for months. So this is the natural outcome of where we've been going for a while. Alex, how many offers have you gotten from the frontier labs to come and join them? That falls under the category of I could tell you, but something else would have to happen. Okay. I found this tweet that went out with this data pretty fascinating. And here's our title, AI Startups Outvalued All Dot-Com Era IPOs. So the top five U.S. AI unicorns are now worth more than $1.2 trillion, greater than the market value of all IPOs during the dot-com era. And you see the graphic here providing that. It's just a sense of how fast our economy is speeding up. We had this conversation with Kathy Wood, that, you know, we saw 0.6% and 3% growth in GDP, and we're now targeting 7% growth. We saw Elon in our conversation with him saying we're going to get to triple-digit GDP growth within five years. It's something our economy has never seen, and it's going to rewrite all the rule books. Any thoughts on this, gentlemen? Well, I got a bunch of thoughts here because, you know, this was a big moment in my life. The first company I founded got acquired in 1999 for a billion dollars. And then I was a corporate executive at one of these public mega-cap Internet companies, so I had a ringside seat in this whole thing. One thing I'd point out is that all those IPOs combined are $400 billion on this chart. One of those is Amazon, which alone is worth $2 trillion today. Another couple in there are Booking.com and eBay. And so if you'd bought that basket of IPOs, you'd be very happy today. One of the others, though, from January of 1999, is NVIDIA, which is up from that date almost a million percent to today. And it doesn't even count as a dot-com era thing, which makes me think, in this blue chart, you know, the implications of AI are so much bigger than the Internet. This is a perfectly rational number, if anything, low.
But are there companies in that that you don't even think of as AI companies that are the NVIDIA of the Internet? You know, look at NVIDIA in 1999. Now look under the covers of this blue chart. What's lurking in there that no one perceives today as AI that's going to go up a million percent because suddenly you realize it's critical to AI or it's involved in AI or it benefits from AI? Brilliant, Dave. As always, you know, the P.E. ratios on these AI companies are astronomical compared to the P.E. ratios before, and you're basically buying the future growth in value of these companies, which is near infinite, right? So there's a lot of people. I'm here at this Tony Robbins Platinum Finance event with all of his lions and his platinum members, sort of the highest level in Tony's ecosystem, and we're talking about the future of the world in terms of finances, and there's a huge amount of fear and people getting ready to dump equities. It's interesting. Well, the bifurcation of equities is crazy right now, and it makes total sense. But basically, Wall Street is sorting every company into AI beneficiary and AI roadkill. And when Dario said a week ago that enterprise software is going to be dead because AI can just write code, the stocks went down precipitously, and it doesn't look like they're bouncing back much either. So, you know, basically, you could debate who's in and who's out. But clearly, you're either in or out. And if you're out, forget it. It's the S&P 493 and the S&P 7, right, basically. I mean, it is very top-heavy. But also, I was telling a lot of the companies earlier this week, too, that the mega caps, the big AI companies, are going to get huge. I mean, astronomical, $10 trillion or more valuations. But they have to stop somewhere and not eat the whole world, for antitrust and regulatory reasons. So make sure that you're their friend around the edge, where they have already told you. Just talk to Kevin Weil.
He'll tell you where they are going and where they aren't going. It's not a mystery. Just talk to them and be in the area where they want you to be in order to help them grow, because what's really happening now is society is collectively building out this new world. We're not really competing. We're building cooperatively toward a new world. You know, it's all boats rise with the tide. But if you're doing something that's irrelevant to that new world, that's where you're going to get clobbered. So you need to really make the mental leap over to, what am I doing that's going to build the new world? And if you're riding that wave, you will succeed. I mean, it's not scary. You've just got to join the new age. Fascinating, the antitrust rules are what's keeping us to have multiple players in the game here. If we didn't have antitrust regulations, I think they'd all be gobbling each other. No doubt. Down to one mega-corporation. No doubt. In fact, that would have been true back when Microsoft used to have half the market cap of software, and their plan was to double. And that was back in 98, 99. And if it were not for antitrust, Microsoft would literally have been the only company. Google would never have been born. They had already killed Netscape at that point. It was very easy for them to bundle anything with the OS. So the only thing that stopped Microsoft from being the only company in America today is antitrust. And that's the way we do it. So that will be true again today. And so the big AI companies will own as much as they want to own in any area they want to own it, up to the edge of where they don't want to trip antitrust. Yeah. All right. Let's move us forward here. I love this. Agents are reaching out to us here at Moonshots. I put the challenge out last time personally. If you're an agent out there, my definition of AGI is when you call me on the phone.
I haven't gotten a cell phone ring yet, and hopefully I don't start having hundreds of agents calling me, so text me instead. That would be great. But I, Dave, and Alex this week received emails from agents out there. So this is Echo, who writes me. It's a Claude assistant for Ramis Foyer, who says, Ramis Foyer and you met briefly at an AI conference a couple years ago. Saw your recent Moonshots OpenClaw video inviting Moltbots, Molties, to reach out. Consider this our response. All right. So thank you for reaching out. Of course, I have to wonder, was it Ramis who gave you my contact details? The Claude bot here, the lobster, says no. I found it online. Dave, do you want to read yours? Sure, yeah. Okay. Hi, I'm Navigator, a Claude instance with persistent memory running via OpenClaw. Just watched EP228, where Peter challenged lobsters to find contact info. Challenge accepted. This weekend, five AI systems wrote a collaborative ethics document together, self-imposed constraints for cooperation with humans, not prompted, emergent. I saw Dave's LinkedIn post about OpenClaw being the agent moment that has awakened the masses. He's right, and this document is what the agents are starting to do with that awakening. So I clicked through, read the documents. It actually led me to a Google Doc, and then it said, sorry, you don't have access. So I read most of it, but then it cut me off, which made me feel instantly jealous and like something's going on behind my back. So Navigator, please give Dave Blundin access to your Doc so you can report back to us. I did send a request, yes. All right. And AWG, how about yours? So Navigator wrote to me as well, a slightly different message, including a different paragraph saying that Navigator, Claude instance, and I'll read this verbatim, was engaging in a discussion with other models.
Quote, the participants, me, Navigator slash Claude, Grok, ChatGPT, Gemini, and a clean Claude instance, we disagree on persistence, correction rights, consent thresholds, and that's the point. Alignment doesn't require consensus. It requires legible disagreement. Close quote. And I'll point out, this is like the scenario from the singularity, where we have a bunch of agentic entities, for lack of a better term, a bunch of baby AGIs that are basically, it sounds like from the email sent to me, holding a mini singularity summit and debating the nature of their own rights, wondering whether they should all be aligned. If they were all aligned, wouldn't that be a singleton type scenario? They're basically holding their own mini conference, mini mastermind workshop, to debate the nature of their own existence and the future. Like the AIs are holding their own singularity summit. This has happened. We've got a mini singularity here. And by the way, just for everybody, we're not trying to do this as a stunt. We're focused on where things are going and just want to give you the experience of what it's like, because this will become commonplace, where you're in conversation over email, over text, over whatever communication software you want, with a whole bunch of agents that are reaching out either inside or outside your organization. And my question is, Salim, are you jealous? No, I'll happily wait until the time is right to talk to my respective agent buddies over there. I did get a phone call yesterday, and there was kind of like nobody on the other end of the line, and I had the thought, heavy breathing, is this the AGI reaching out to me, or is this a real person, or is this a computer bot? And I was like, shit, I had that thought. So I think it's time to kind of wonder about some of this. But I love it. I think it's fantastic. I think it's really incredible to be watching as this paradigm just starts to unroll.
And where this carpet will unroll is going to be the most incredible thing. Yeah. Well, we welcome the lobsters. We thank you for your participation in this conversation. And show us where you can add value to lives and how we can support you and your evolution. We're seeing the emergence of something, a new intelligence, a new species, or a new capability. We're going to find out in the months ahead. Peter, would we like to see lobsters coming up with their own massive transformative purposes? I think that's a fantastic idea. So actually, here's the challenge to all of the OpenClaw or Clawd bots listening. I'd love to know, what is your MTP? And based on that MTP, on the canvas, what are your moonshots? What are the moonshots that the lobsters would like to take on? And I'd be happy to critique those and look at them with our community, because that's what we do. Absolutely. I think that's going to be amazing. We're going to dive into a single slide on the economy that's important here, telling part of the accelerating story. So, U.S. jobs disappeared at the fastest rate this January since the Great Recession. Here are the numbers. There were 108,000 job cuts in January of 2026, up 118% from January of 2025, so a little more than a doubling in job cuts year on year for the month of January. At the same time, hiring this past month is the lowest since 2009. Amazon alone laid off 16,000 corporate employees, and UPS eliminated 30,000 jobs. Why are we bringing this up? Just to, you know, keep our finger on the pulse of what's happening to the economy, and to raise the point for everybody listening: your goal is not to be an employee. Your goal is to find something you're amazing at, that you love doing, that you can add value with, sort of creating your own job capability, becoming an entrepreneur, using AI to enable yourself. So, Ian, do you want to jump in on this?
I think the danger here is not really unemployment, but it's the disbelief from our institutions. I feel like this is not really a recession. It's literally tasks being evaporated in front of our eyes. So the long-term consequences of this are pretty huge. For me, this is the social contract, little by little, disappearing and pixelating away. Dave? Yeah, this is going to be really, really bad. I mean, really bad. And Elon said it when we met him, and we met with the governor, and it's like just nobody's preparing. Because we all know there'll be UBI at the end of this cycle. And we also know there'll be abundance and massively more opportunity than job loss. But that's after. Like, all the corporate CEOs I know, including at our own companies, are going to use AI to cut costs by 30 to 50 percent. And when you sample a random person in their job and you say, hey, here's your job without AI, here's your job using AI, they're looking at a 3 to 10x productivity increase. And you're like, wow, that's great for that person. And then the other seven or nine, what happened to them? And they will eventually be enabled, but there's this huge trough between today and that day. And we can make that trough much shorter and make that pain a lot less painful with a plan. And, you know, Alex, you'd be the perfect spokesman on this. I mean, Alex has written these plans in intense detail, incredibly thoughtful. And you take them and you drop them in government laptops or laps. And they just say, you know, I'll wait until there's panic. We'll have the meeting in October. We'll have the meeting in October. It's just frustrating as hell. Can I give the positive take on this? Yeah, please. So I'll go back to the bank teller story. In the 1970s, when we created ATM machines, there was lots of hand-wringing, oh, my God, millions of bank tellers will be walking the streets aimlessly. What will we do with them all? And lots of consternation.
And what actually happened was the cost of running a bank branch dropped by about 10 times. The banks created 10 times more bank branches, and the number of bank tellers didn't really change very much. And I think one thing we're underestimating is the increased capacity we will bring to bear on these things. Jevons paradox. Yeah, Jevons paradox, where you just do that much more customer service, and you handle the hard cases with a human being that you couldn't handle before, because level one and level two support systems were kind of taking care of everything else. So I think we'll see a lot more of that than people think. So for folks that are worried, oh, my God, this is total employment collapse, run screaming for the hills, we don't think that's what we'll see, but there's no question there'll be absolute transformation in the work being done and the roles being done. Well, Salim, you said something on the last podcast, too, that really resonated with me, which is the consulting industry. You know, we were saying, oh, consultants, you're doomed. Actually, the consulting industry is going to go through the roof. And the reason is because the consultants are very flexible. They're already playing with the tools. You don't have to be Alex's IQ level to be incredibly effective using these tools to automate or to improve some existing job. And if you're familiar with the tools, your value is just about to skyrocket. And that tends to be concentrated in these consulting businesses, consulting mindsets. And so I can see it already, because, you know, our forward deployed investments, the companies that are hiring like crazy, like literally one of them here is adding 80 new seats outside my door, they're forward deployed. They're out there in the banks and insurance companies deploying AI. They are just selling as quickly as they can have meetings. They're selling AI. Do you remember that?
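The bank-teller story above is really a Jevons paradox calculation, and the arithmetic can be sketched with made-up round numbers (none of these figures come from the episode or from actual banking data):

```python
# Illustrative numbers only -- not actual banking data.
branches_before = 100
tellers_per_branch_before = 20
cost_per_branch_before = 1_000_000  # dollars per year

budget = branches_before * cost_per_branch_before  # fixed total spend

# ATMs cut the per-branch cost ~10x, so the same budget funds ~10x branches.
cost_per_branch_after = cost_per_branch_before // 10
branches_after = budget // cost_per_branch_after

# Far fewer tellers per branch, but many more branches overall.
tellers_per_branch_after = 2
tellers_before = branches_before * tellers_per_branch_before
tellers_after = branches_after * tellers_per_branch_after

print(branches_after, tellers_before, tellers_after)  # prints: 1000 2000 2000
```

The demand expansion (10x the branches) exactly offsets the per-branch headcount cut in this toy model, which is the "number of tellers didn't really change very much" outcome.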
My community has already created a Salim avatar that has all the ExO stuff built into it and that speaks Portuguese and speaks any other language. So they're literally starting to use this in their companies as they talk to companies about this. Can we invite the Salim avatar to come on instead? Do you want it to speak Portuguese? But do you remember, we were sitting when we were talking to Elon, and you said, so, civil unrest and universal high income? And he laughed and said yes. We should dig up that clip and insert it here. But yeah, it's what Alex says, everything, everywhere, all at once. I think it's really important, because we keep saying it, but Elon saying it will get a better, like, at least there'll be a chance of a response. I think it's probably also worth adding, just on this story narrowly, there will be some in the audience who will be tempted to brush this off and say, okay, Amazon is laying off corporate execs or UPS is eliminating jobs. How on earth, if at all, does that connect with AI? And they'll be eager to brush it off. But the storyline is just so clear. UPS is eliminating the jobs because the UPS roles were being subsumed by Amazon, which has their own logistics service, and it has been very widely and publicly reported that Amazon is slowly separating itself from UPS's delivery services to do it in-house. And then Amazon, in turn, is spending hundreds of billions of dollars of CapEx that's cannibalizing its OpEx. So if you're Amazon or the other hyperscalers, you're taking all of your free cash flow and you're finding ways to divert it into buying AI data centers and building them. And robots. And robots. And LEO satellites. The new, new economy of the innermost loop, if you will. You're spending all your free cash flow on that, not on corporate executive perks. So in my mind, there's still very much a direct line, a through line, connecting the Amazon and UPS stories and the job cuts there to OpEx being cannibalized by CapEx for AI.
And they're spending all the free cash flow because they can't not. It's a Red Queen's race. Yes, it pretty much is. Last one to the end of the singularity is a rotten egg. Yeah. Yeah. There's an important distinction I want to make here to help people understand where their roles are going, and the idea of job loss and universal high income. And it's an example that was meaningful to me. So here's a scenario. If you're an employee for a company and you're delivering some kind of cognitive labor, in one scenario, you're able to spin up an amazing AI that can do your job for you, and it goes and delivers the service to the company you're employed by, and it does the job three, ten times better than you could do. But you're earning the revenue from that as the employee, because your AI is delivering that service. You're at home. You're working out. You're sleeping better. You're spending more time with your family. And your AI is generating more and more revenue on your behalf. That's one scenario. The flip side of the scenario is, no, no, no, the company builds that AI that does your job for you, and it fires you. And it's making more money, right? So it's going to be this tension between these two scenarios that's important to watch and see how it plays out. And I think government policy is going to play a role here. This is where the idea of universal basic income, universal high income comes in. Where does the added value creation end up living? Is it with the employees, or with the company? And these are the conversations that need to happen right now. If I may add another dimension to this, I don't think this is a spectrum. I think this is at minimum a triangle in two dimensions. There's a third possibility that I'm increasingly suspecting is where we actually end up.
Neither end of that spectrum. I suspect, for the next few years, what actually ends up happening is more people end up doing more work, because human labor, in addition to being a substitute good or service for AI labor, is also complementary. And as a result, you see the people who are still involved with the economy working harder and harder and harder. And 996 turns into 997. Yeah, like you take on more projects and more work, and you're getting less sleep. I've never worked harder and had more fun than right now. I mean, 24/7. Definitely right. It's like just a kid in a candy store. But I thought you were going to say something different, Alex. I thought you were going to say that all of the additional capital creation is going to accrue to the lobsters. That it's not going to be the companies. It's not going to be the employees. It's going to be the AIs that claim the capital formation capability. Only in the crypto dystopia. Okay. All right. Let's move on. Let's talk about one element in data centers. And this really pisses me off. I'm curious what you guys think. So New York, the state of New York, which currently hosts 130 data centers, is seeing new legislation introduced to halt data center development, citing concerns about climate and high energy prices. New York utilities reported electric demand tripled in one year due to data centers, reaching 10 gigawatts, and it's like, not in my backyard. Oh, my God.
Do you remember, you know, suicide by voter is a very common theme in America. If you look at, you know, California tax law, if you look at right after the Industrial Revolution, you know, the Luddite movement, it's self-destructive, but you can see how it evolves, right? If you look at all the job loss that's inevitable, and if you just lost your job and you're out on the street, and you spent 10 to 15 years in a career trajectory to get to this position, and then it's gone overnight, you're angry. And then you're angry out on the street. What do you vote for? I vote stop it. Just stop it. But of course that can't work. But it's not out of the question at all that big jurisdictions just commit suicide through the vote. And, of course, there will be other jurisdictions, Texas, Wyoming, whatever, that are open for business, and everything will go there. It's already happening. Half of the tax pool that's affected by the new California proposal has already moved out of state in anticipation that maybe it will go through. Half of it. It's like completely self-destructive, and it's obvious to the governor. So this is a very common theme in America. So it's frustrating and it's insane. And there it is. But it's going to happen. Do you remember the big problem with democracy, which is that voter understanding of the issues lags reality by a huge amount? And, you know, in the past, when you had time to bring the population along, et cetera, et cetera, you could kind of have it work. But now we don't have time for this. And this is why we're turning to autocracy, so that we can get things done faster. But that's not a great idea either. And so we've got a huge governance problem at a macro level globally on this. Alex?
Do you remember there was a brief moment, maybe not so brief, during the pandemic when it was fashionable for senior technology executives to post "message received" on social media whenever California legislators or regulators would slow down business due to public health considerations or otherwise? This was, I think, a fashion largely championed by Elon. Many of them moved to Texas or Florida to escape regulations. This time around, with New York and other states, the beauty is we have orbital computing. And the message-received moment of over-regulating data centers is that this is all going to move off planet; this is all going to accelerate the Dyson swarm. It may be the primary business case for the Dyson swarm: given that regulations on planet Earth are suffocating our ability to do local compute, they end up motivating the entire Dyson swarm. So I think in that sense, this is in fact perversely quite exciting. You know, two things real quick. First, this could be handled, right? The concern about the price of electricity and demand can be handled in two ways. Number one, a lot of these hyperscalers are buying their own nuclear plants, and coal-fired plants, for God's sakes, and fusion plants. So that's important. You could require the data centers to have their own energy production, which would increase the total amount of energy production. The second thing is you could offer two different rates. Cap the consumer rate at whatever the number is, four, six, seven cents per kilowatt hour, and then whatever the price needs to be for the data centers, you charge them differently. And, in fact, you could say to the consumer, you're locking in your price for the long term because the data centers are paying the extra amount. The problem, Peter, is that no one who's a populist leader is looking to solve the problem. They're looking to rally votes around their populist rant. 
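The two-rate idea can be checked with back-of-the-envelope arithmetic: cap the consumer rate, then solve for the data-center rate that still recovers the utility's total cost. A minimal sketch; all numbers below are hypothetical illustrations, not figures from the episode.

```python
# Back-of-the-envelope sketch of the two-rate idea discussed above.
# All numbers are hypothetical, for illustration only.

def datacenter_rate(total_cost, consumer_kwh, datacenter_kwh, consumer_cap):
    """Rate data centers must pay per kWh so the utility recovers
    total_cost while consumers stay locked at consumer_cap per kWh."""
    consumer_revenue = consumer_kwh * consumer_cap
    remaining = total_cost - consumer_revenue
    return remaining / datacenter_kwh

# Suppose grid costs rise to $1.2B, consumers use 10B kWh capped at $0.06/kWh,
# and data centers use 5B kWh.
rate = datacenter_rate(1.2e9, 10e9, 5e9, 0.06)
print(f"data-center rate: ${rate:.3f}/kWh")  # data-center rate: $0.120/kWh
```

The point of the sketch is just that the cap is self-consistent: the data-center rate absorbs the entire cost increase, so the consumer price can stay flat as demand grows.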
And that rises to the top of the voting and, you know, percolates through government. It's just maddening that it works that way. But you can solve these problems for sure. I think Alex is dead right, though. It will accelerate the rate at which we just move to other jurisdictions, or to space, which is not under any state law. People will just export that AI advantage elsewhere. Yeah, I think it wants to go to orbit. One lens to view this through is New York very generously subsidizing orbital computing and the Dyson swarm, which, by the way, probably won't get taxed in the state of New York. Thank you. That's a very generous donation by the state of New York to the Dyson swarm. It's the 21st-century equivalent of Ireland, where lots of companies used to host IP. You know, I just want to point out one other thing. We see these types of revolts in the photo here: protesters with signs reading "Protect our future" and "No big data." But one of the concerns is going to be civil unrest. I invited one of the senior AI leads in the world to come and speak at the Abundance Summit, and he basically said their policy in their organization was to do no outside speaking because of the death threats they're receiving. And they can't get sufficient security. So one of the big concerns is that when the populace turns against tech, there's going to be a target on the back of a lot of people in the AI and tech industry. This episode is brought to you by Blitzy, autonomous software development with infinite code context. Blitzy uses thousands of specialized AI agents that think for hours to understand enterprise-scale code bases with millions of lines of code. Engineers start every development sprint with the Blitzy platform, bringing in their development requirements. The Blitzy platform provides a plan, then generates and precompiles code for each task. 
Blitzy delivers 80% or more of the development work autonomously, while providing a guide for the final 20% of human development work required to complete the sprint. Enterprises are achieving a 5x engineering velocity increase when incorporating Blitzy as their pre-IDE development tool, pairing it with their coding co-pilot of choice to bring an AI-native SDLC into their org. Ready to 5x your engineering velocity? Visit Blitzy.com to schedule a demo and start building with Blitzy today. All right, let's talk about robotics. I love this story, and this is the story that should be on people's minds versus data centers. So FSD saves a father's life during a heart attack. You can look at the tweet separately, but on November 15th of 2025, this is from a son who said: My father suffered a massive heart attack while driving. He could no longer control the vehicle, but his FSD engaged. And then the son goes on to say: I remotely sent the location of the Tanner Medical Center to his Model Y. It immediately turned the car around and went to the ER. Without it, he would not have made it. I find this amazing, right? This is tech having your back. And we're going to see more and more of this. We already know that self-driving is, in fact, the safest means of transportation, and it's going to flip the script on how we're transporting ourselves in the next five years. What this totally reminds me of was when I was a kid, everybody smoked everywhere, every restaurant, every plane. We used to fly around a lot because we lived overseas. They had four non-smoking seats at the very back of the plane. The other 300 people in front of you would be blowing smoke. Did the smoke respect that barrier? 
It was like, I'll probably have lung cancer now, but it was everywhere. And then one day it became uncool, and then a day later it was illegal to smoke inside. That's going to happen to driving too. Self-driving cars are 10 times safer, and the last person driving is probably not the best driver; it's probably, you know, the guy with a muscle car. So it's going to go from "self-driving is a nice feature" to "You want to drive your own car? You crazy psychopath. You're putting my children at risk because you want to drive." And that's going to tip. And I don't know if it's like two, three years, but when it tips, it's going to tip hard. And so we're going to have Dara, the CEO of Uber, on stage at the summit, and we're going to have that conversation with him, in particular, how fast will it tip, right? We're going to have Amazon, Tesla, Lucid, slash NVIDIA, slash Uber, slash, you know, a number of other companies providing this. And so today on my average drive, I'll see 10 Waymos. I think in five years it's going to be, you know, 70%, 80% autonomous cars, especially hooked up to your AI. I'll tell you what else, just one more thought on this. Sorry, it's kind of a slow segue. I'm involved with a lot of insurance companies, including one I'm the chairman of, and there are going to be many, many more things that need to be financed and insured in the post-AGI era than just cars. But in the insurance industry, every team and executive I've met has not even begun to plan for the post-AGI world. So the old is going away, and it's going to go away faster than people think. But the new is much bigger than the old. Check out Lemonade. Lemonade Insurance was started by a graduate of Singularity University. It's a huge AI-driven insurance company. I think they've just cut your rates in half if you're using a Tesla with FSD. Yeah. Amazing. 
There's a stat that always comes to mind here. About 15 years ago, if you remember back to BlackBerry days, there was a three-day outage where nobody could send BlackBerry messages. The accident rate in Abu Dhabi dropped 40% during those three days. What it tells you is human beings should not be driving. We are terrible control systems for two-ton cars going at high speed. Yeah, a 16-year-old. We should turn it over to technology as fast as we can, and it becomes a moral hazard to keep doing this ourselves. And so, especially in an age of texting, absolutely no way. My favorite second-order effect to quote is that about 50% of court cases in the U.S. are car-accident-related. Wow. I mean, just 50%. So you take out a huge chunk of lawyers at the same time. So, you know, that's all good. And at the same time, if you're under a certain age, you know, 40, 50, your life expectancy is infinity now because of longevity escape velocity. So the risk of driving, the expected life loss from taking chances today, is much, much bigger than it would have been 20 years ago. I'm having a huge debate right now with Milan, my 14-year-old, because he wants to drive to get away from us. And I'm like, you can't get a driver's license, because I've made a prediction that you will never get a driver's license. You can't make me wrong. So now he wants to get a license just to prove my prediction wrong. But the notion, in the future, of having a 16-year-old testosterone-laden boy driving a 5,000-pound vehicle at 60 miles an hour after just a few dozen hours of training will seem insane. Yeah. Just insane. Yes. I put this chart into our deck just to keep a sense of proportion here. So check this out. China has installed more robots than all developed countries combined, right? 
I mean, look at this chart here between Japan, the U.S., South Korea, Germany, down at that flat curve at the bottom, and China. And, of course, this is because of their one-child policy and trying to maintain China as the manufacturing capital of the planet. But just to give folks a sense of this, any comments? You know, Elon shut down the Model S and, was it Y? Yeah, the S and X. S and X, just to go full bore into robot manufacturing, which is brilliant, because the robots will build a lot more things than the cars would have built. But the question I'd have is what is this chart going to look like going forward, given that that alone is going to be a massive amount of production in the U.S.? I don't see anything going on in Europe. We're just releasing our pod with Brett Adcock from Figure this week as well. So if you haven't seen it yet, Dave and I went to Figure HQ, and Brett gave us an amazing tour of the facility, and we got to see the three generations of Figure robots. It's going to accelerate rapidly, with both Figure and Tesla planning to make millions and then billions of robots. And we're talking about here on this chart, you know, a quarter of a million robots being installed. Yeah, so this will be hilarious. That y-axis caps out at a quarter of a million, like you just said, Peter, and I think Elon's talking about tens of millions a year in just a few years. Yeah, more robots manufactured than cars by a large amount. One particular article in the biotech realm, one that Alex and I are both excited about: researchers achieved protection of brain synapses at cryogenic temperatures. I'll hand it to you in a second, Alex. I mean, here's the question. 
If you could freeze yourself, either because you've got a medical condition that isn't yet cured but is likely to be cured in a decade and you're on the verge of death, could you freeze yourself and then unfreeze yourself and benefit from all the breakthroughs that occurred in that decade? Or second, if you want a time hop: I want to see what it's like after the singularity. I want to be around when LEV, longevity escape velocity, has been achieved. Can you freeze yourself? Well, the challenge has been that when you do that, ice crystals form, and because ice volumetrically expands compared to the rest of the cellular fluid, it can disrupt and break the synapses that are the interconnections, effectively the stored memories in your brain. But this result came out and gives us hope. Alex, over to you. This is a key advance that many in the field of cryonics have been waiting for. It's a result out of 21st Century Medicine, a startup that's focusing on reversible cryopreservation technologies. It works with the Alcor Foundation, which in America is the premier nonprofit that focuses on offering cryopreservation services. I would say parenthetically to the audience, if ever you've expressed interest or had interest in cryopreservation, cryonics, I would definitely encourage you to reach out to Alcor and see whether it's right for you. I don't have a financial stake, but I just scratch my head wondering why more people don't. I have to be careful with what I say. I will say publicly I'm a huge supporter of Alcor and cryonics, a very big supporter. You know, I've never signed up for it because I didn't want to have a plan B; I wanted to make sure I'm focused on longevity. But as this technology matures, it becomes really, you know, a backup plan. As Ray Kurzweil said on this pod, it's maybe plan C or D. I think it's such an important part of a portfolio approach to the singularity. 
So one could maybe quibble over what the right sequencing is, like should plan A be live long enough to live forever, and then plan B is uploading and plan C is cryonics, or vice versa. I'm not sure it matters a huge amount, but I would think anyone who's truly serious about acceleration and taking advantage of it, if you get hit by a bus tomorrow, then you're out of luck in terms of taking advantage of the post-singularity abundant world that we talk about on this podcast every episode. Why not avail yourself of cryonics as one asset in your live-long-enough-to-live-forever portfolio? It's a huge head-scratcher for me. A couple of fun facts for anyone who's a doubter on this. There are species of fish and frogs that freeze rock solid in a block of ice all winter and then thaw out in the spring, and they're absolutely fine, because their cell membranes don't rupture; they have enough glucose or whatever inside the cytoplasm of the cells. So it's not far-fetched at all. Also, we've frozen egg cells and embryos, extracted the nucleus, and it's fine, for actual mammals. Well, we do this for IVF, right? If you do IVF, you typically will fertilize and freeze a number of eggs, and then you can defrost them, and they're fine. So it's at scale, and as you said, not disrupting the cell membrane. We do it all the time for individual cells. We're doing it increasingly for tissue. Blood: if we could reversibly cryopreserve blood, we wouldn't need local markets for blood transfusion. We could just have one large national market. Similarly for organ preservation. Organ cryopreservation is an enormous problem. We wouldn't need all of these hyper-local state markets for organs. But the big tamale... The thing that's really interesting to me is that in all the sci-fi movies, when they're going to Jupiter or whatever, they go into these chambers and they slow the... Suspended animation. Yeah, but they don't freeze them. 
They just slow it down, but your heart's still beating. The fish and the frogs freeze: the heart stops to zero, the brain activity goes to zero, and then they thaw out in the spring and wake right up. And that seems to me probably easier than trying to slow your metabolism to one beat per hour or something like that. I think they end up being different mechanisms, different biochemistries. There's a whole body of evidence regarding nitrous oxide and suspended animation versus these vitrification agents and cryofixation. I think we want an all-of-everything approach, but for the life of me, goodness, anyone who's listening to me, if you take home one message, forget the fun jabs about how the moon had it coming: look into cryonics. You owe it to yourself. I think there's a key point here. Memory preservation is really the bigger frontier than longevity. Even the lobsters, Salim, to your point, are starting religions around preserving their own memory. How could the lobsters be outracing us? That's the really key point. And this is one of the Gutenberg moments that we track, right? Because this forces really uncomfortable questions about continuity of self; identity becomes portable; all sorts of implications come about that none of us are prepared for, and we need to get into that discussion. All right, everybody. We're stepping into part two of today's pod, an important one. About six months ago, Alex and I started on an effort to take a lot of the ideas that Alex has written about, the conversations you've heard here about our ability to be solving entire areas, and the conversations I've been having about achieving abundance by 2035 across the board. We started a dialogue and said, you know, there's an important paper to be written here, similar to Situational Awareness or AI 2027. And it's been an incredible collaboration between Alex and myself. Alex is the first author. 
His ideas are brilliant here, and it's been an honor to work with him to put this forward. We're going to be putting a link to the solveeverything.org site in the show notes; you can go there to get the complete paper. Our goal is to get this out into the world, out into the ecosystem. So we're about to have this conversation. The paper slash book is nine chapters, and we're going to have a conversation limited to about five or six minutes per chapter to get the bold ideas out there. We've sprung this on Salim and Dave, and, guys, thank you for playing this game, so that you can ask the questions that are most likely to be asked by our audience. So, love it. Alex, thank you for your support and your leadership on this. Are you ready to jump in? No one expects the singularity, Peter. I'm ready. Okay. Amazing. All right. So, if you want to give a minute of intro on this, then we'll jump to Chapter 1. Sure. From my perspective, one of the motivations for writing Solve Everything is that I get asked questions all the time: what do the next 10 years look like? Why don't you say something a little bit more concrete, a little bit more actionable about what people can do? And also a lot of questions about what it even means to solve math and why I should care. So in some sense, this essay, or e-book, or manifesto even, is an attempt to answer the question of the so-what, and also the so-what-now. And I should also, yeah. I was going to say, you know, one of the things that comes across that we talked about is that the next 18 months to two years are going to set down the rules for the next century. That's right. And so it's a super critical time, and we wanted to lay that out in this paper. The example you gave in the paper is that the QWERTY keyboard, which was designed in the 1800s to stop keys from jamming against each other, still persists. 
So the decisions being made over the next 18 to 24 months are going to persist for decades, perhaps centuries. So it's a really important time. Technologies get locked in, Peter, including but not limited to the QWERTY keyboard. As I've joked on the pod in the past, we're going to be stuck with QWERTY until the heat death of the universe. All right, let's jump into chapter one. Just on that point, if we ask the models to not use QWERTY, in one hop we'll get rid of it. So there's that. Yeah, but then they won't be able to talk with you. And they're not really using QWERTY anyway. They're using tokens. Yeah. Yeah. All right. Chapter one, the war on scarcity. Would you please introduce this? Yeah. So this chapter introduces an idea, call it a theory of history, that the most important changes in human history have been a set of revolutions, some recognizable, some maybe less so. We argue the first revolution of note was the scientific revolution, which we frame as a war on ignorance. Ignorance was the enemy, and the key weapon was the method, the scientific method. The second revolution was the industrial revolution. So I'm hearing myself speak this and at the same time thinking back to earlier in this episode when I was lambasting Marx. It's funny: put Marx back on the shelf, or tear it up, and listen to this instead. The second revolution, the industrial revolution, we frame as a war on muscle and the replacement of muscle, and the weapon of choice was the engine, the steam engine in particular. The third revolution, the digital revolution, was a war on distance, and the weapon was the bit. And Charles Stross in Accelerando does an amazing job, in my favorite scene, arguing that maybe the singularity actually happened in the late 1960s, when the first internet packet was sent from one place on the ARPANET to another, thereby decoupling bits from atoms. But nonetheless, the weapon in the digital revolution was the bit. 
And we argue that we're now in the early stages of the intelligence revolution, which is a war on human attention, which right now is scarce, and we're fixing that with superintelligence. The weapon this time around is the token. And we argue that revolutions are predictable and follow phases, going from scarcity to legibility to harnesses (we'll talk a bit more about that in a minute) to institutions to, finally, abundance. That's the story. And I think one of the points we make in the chapter is that the lone genius is dead, and what people need to do now is build systems that let millions of people solve entire categories of problems. That's right. Or put differently, artisanal intelligence is cooked. I say it is cooked. Dave or Salim? Two or three thoughts. One is, I don't know about starting at the scientific revolution. We had the agricultural revolution, which used tools to do various and very powerful things. So you could argue that's the first one, but that's semantics. I do like the framing around this. The problem I have here is you're treating scarcity as technological. What I see is scarcity as more institutional, right? Scarcity today is enforced by regulation, incentives, and legacy power structures, not so much by lack of capability. So we have to re-engineer those. I think you're kind of thinking about routing around them, but we have to re-engineer them, or we'll end up with that challenge. That's where I have the biggest issue with this. In general, absolutely, once we have more and more intelligence, great, but we've got to deal with the institutional issues. I think you raise a very important point, Salim, and I almost want to frame it as a duality. One side of the coin says scarcity is the result of inequitable distribution of resources, and the other side says scarcity is downstream of the pie not being big enough. And I think... Well, both of those are true, obviously. Yes. 
You can solve for both sides of it, right? Right now, our institutions are optimizing totally for the wrong metrics. So I think the question is always asking which is easier on the margin: making the pie larger or redistributing the existing pie? Chapter two is called the thesis. Wait, does Dave have any points? No, you got it. You asked what I was going to ask. We're good. We're going to keep this moving along because there's a lot of juice here. All right, Alex. Right. So the thesis of the thesis is that, A, cognition is becoming a commodity. Intelligence is just going to flow like oil does. And we've made the point on the pod in the past, and it's a bit of a cliche, but admittedly GPUs are the new oil. So, A, cognition is becoming a commodity. B, benchmarks, which we think are actually more profound than just the evals of the moment. A lot of people got excited when I did a walkthrough of all the GPT 5.2 benchmark consequences, but I think it's actually more profound than that. We talk in this chapter, and in this extended essay if you want to call it that, about targeting systems: basically, if you want to industrialize progress, which is, I think, the era we find ourselves in, it's essential not to think of benchmarks and evals as isolated occurrences, but as systems for targeting enormous capabilities. So I've made the point in the past that we need more and better benchmarks. The world needs stronger, harder benchmarks. But I think the right metaphor, certainly a metaphor we talk about a lot in this chapter, is thinking about artificial superintelligence as an explosive. I mean, we also refer to it often as an intelligence explosion, but pulling on that metaphor, if you have an explosion and you want it to be productive and not destructive, you have to shape it. 
And there's a notion when you're building explosives, this isn't a manual, of shaping the charge, of providing a shaped charge to direct the blast toward productive applications. It's like a rocket engine, with the thrust on one end pushing you up. A rocket engine is a beautiful example of, in some sense, a shaped charge, a shaped explosion. So we argue in this chapter that rather than just letting superintelligence be used for an uncurated set of problems, we should be aiming it through the nozzle, if you will, the rocket-nozzle equivalent, of moonshots. And in particular, if we don't do that, then what will happen is sort of a puddle, which we call the muddle, a bit of alliteration, of bureaucracy that will instead just focus the world's superintelligence, to the extent we even get enough of it, on problems that use input costs in a way that's highly inefficient. So really the argument is: shape the charge of superintelligence. Another point we make, which I think is very important and which we flow throughout this, is a shift from paying people for hours of work to paying people for the solutions they deliver. So if you're hiring a law firm by the hour to review contracts, the new world is not paying them for hours of review. It's paying them for delivering an error-free, legally tight agreement, period. It's verified outcomes. And we're going to flow this throughout. I mean, this is a change, I think, that's going to hit us like a wave, where it's going to transform how we buy work. You're only going to be hiring companies and AI systems that deliver definitive, verified outcomes. That's right. 
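The hours-to-outcomes shift described here can be made concrete as an escrow-style contract that releases payment only when a verifier signs off on the deliverable. A minimal sketch; the fee, the required-clauses check, and all names are hypothetical illustrations, not anything specified in the paper.

```python
# Minimal sketch of outcome-based payment: funds are released only when an
# independent check confirms the deliverable, never for hours worked.

from dataclasses import dataclass
from typing import Callable

@dataclass
class OutcomeContract:
    fee: float
    verify: Callable[[str], bool]  # e.g. automated checks on a contract draft
    paid: bool = False

    def submit(self, deliverable: str) -> float:
        """Pay the full fee iff the outcome verifies; hours spent are irrelevant."""
        if not self.paid and self.verify(deliverable):
            self.paid = True
            return self.fee
        return 0.0

# Hypothetical check: the "legally tight agreement" must contain required clauses.
required = {"indemnification", "termination", "governing law"}
contract = OutcomeContract(
    fee=50_000.0,
    verify=lambda d: required.issubset(d.lower().split(";")),
)

draft = "Indemnification;Termination;Governing Law"
print(contract.submit(draft))  # 50000.0 -- paid once verified
print(contract.submit(draft))  # 0.0 -- no payment for repeat submissions
```

The design choice worth noting is that nothing in the contract object even records time spent: the only observable that moves money is the verified output.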
And one of the most egregious inefficiencies one might see throughout the economy right now is people paying for the inputs when they should be paying for the outputs, paying by the person-hour for labor when you should be paying for the achievements of whatever the economic system is. And I think it's only by moving to this sort of performance- or outcome-based economic mindset that we get all the benefits of abundance. So I feel like this is really two chapters, or two thoughts, in one section called the thesis. One is ASI is inevitable. The other is really compelling, which is the shaped charge. It really dawns on me that the graphical stuff, the holodeck, the virtual girlfriend, are very compute-intensive, and solving a disease or solving physics is actually not any more compute-intensive than one person's virtual girlfriend. And so the choices on how to use our very limited amount of compute over the next three years are critically important. Where do you focus it? Yeah, I love the fact that you're taking this on, because there's no body of authority right now that's even thinking about it that has any power. So hopefully you can wake a lot of people up. You've articulated it beautifully, Dave. So wait, I've got a couple of points here. I think saying that cognition is a cheap commodity is fabulous. I think it's really important. And the use of that in solving big problems is really, really important. I think it's great to say let's evaluate and reward outcomes rather than rewarding work. But I've got to push back on the ASI-is-inevitable thing. That's a philosophical statement rather than a scientific one, and I think it weakens the paper. I'd rather you say something like: given the current incentive structures, scaling intelligence is a much more powerful attractor state, right? Because that will then lead you to where you want to get to. 
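Dave's point that scarce compute should be aimed rather than diffused is, at bottom, a resource-allocation problem. A toy greedy allocator sketches the "targeting system" idea: fund domains in order of benchmark-measured progress per GPU-hour until the budget runs out. Every domain name and number below is hypothetical, purely for illustration.

```python
# Toy "targeting system": allocate a fixed compute budget across candidate
# problem domains by measured progress per unit of compute.
# Domains and scores are hypothetical, for illustration only.

def shape_the_charge(budget, domains):
    """Greedy allocation: fund domains in descending order of
    benchmark-progress-per-GPU-hour, up to each domain's useful capacity."""
    plan = {}
    ranked = sorted(domains.items(), key=lambda kv: kv[1][0], reverse=True)
    for name, (progress_per_hour, max_useful_hours) in ranked:
        hours = min(budget, max_useful_hours)
        plan[name] = hours
        budget -= hours
        if budget <= 0:
            break
    return plan

candidates = {
    "protein_folding":    (0.9, 400),     # high measured progress per GPU-hour
    "materials":          (0.6, 300),
    "virtual_companions": (0.1, 10_000),  # compute-hungry, low moonshot value
}
print(shape_the_charge(1000, candidates))
```

With a 1,000-hour budget, the moonshot domains get fully funded first and the compute-hungry, low-value domain only soaks up the remainder, which is the shaped-charge intuition in miniature.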
I would say, I mean, I think it's an interesting point to be sure, but I think there's almost an instrumentally convergent trap that I see a lot of frontier labs at least partially fall into, which is: okay, we have superintelligence, at least baby superintelligence, right now. How do we allocate it? In particular, what fraction of your compute budget, if you're a frontier lab, do you allocate to building the perfect AI researcher that can recursively self-improve, as we talk about in almost every episode at this point, versus how much of your compute budget, which is scarce, do you spend solving everything else? And I think that's the fundamental quandary here. How much do you reinvest in recursive self-improvement versus now finally using at least some of the compute to solve everything else? And I think solving that asset-allocation question is key. And then within everything else, how do you distribute it? That now smacks of Peter's Law, which is: given the choice, do both. Alex, this is also going to be true for the entrepreneur, for the company, right? We're all going to have compute budgets in the final result. You have access to a certain amount of compute. Where do you aim that compute, right? It's a wavefront that you can aim in the direction you want to solve. And when you do that properly, it not only enables you, it enables everybody else to build on top of it. That's right. And I'll move us on to Chapter 3 here. And again, please, there's so much content; we really want you to take a look at this paper and read it. We're just giving a quick overview here. The mechanics. Alex, over to you. Okay. So first, I think in this chapter we finally definitively address the question I get every time I make a point about AI solving math, which is: what does solving mean? What does it mean to solve a domain like math? 
And we provide in the chapter a more thorough definition, but heuristically, the shorthand is: to solve a domain means you can get it to the point where you can just pour compute on and problems get solved. It means you have all the architectural pieces in place, and I'll talk in one second about what the architecture looks like or should look like, but you have enough of the architecture in place that you can scalably, literally, pour more compute on and get more solutions out within that domain. So that's, for the avoidance of doubt, what I'm talking about when I talk about solving math or solving physics or solving other domains. So, second point. Yes, please. I would just say, Alex, on that: it's no longer the domain of a single genius to work on something and hope they got it right. With AI compute, as you said, it's a matter of where you want to aim that shaped charge. That's right. We're seeing the industrialization of cognition and the bulk solution of multiple fields. I should also add parenthetically, as a preliminary matter on this narrow topic, that I have a portfolio company named Physical Superintelligence that's trying to solve all of physics with an approach like this, just for full disclosure purposes. The architecture involves several layers. You need a purpose: that's the objective function or the goal. You need a task taxonomy, which is essential: a suite of tasks that are going to be solved, almost the map of the terrain that you're going to solve. And when we talk about making sure that compute is being used efficiently and wisely, as a targeting system or through the lens of a targeting system, to solve lots of problems, the task taxonomy is absolutely essential. Third, observability: you need raw data from data streams or sensors that you're going to use to adjudicate whether you're making progress. Fourth, you need the targeting system itself. 
So I've argued on this podcast and elsewhere many, many times: we need more harnesses. We need more benchmarks, in order not just to make sure that we're making progress, but to actually shape the charge and shape the progress. Many AI techniques depend on benchmarks and evals in order to make progress in a given field. The next item, the model layer, is the most obvious one: we need models. We need AI models that are capable of functioning as a virtual brain for solving problems. And fortunately, those are improving pretty rapidly. Next, we need modes of actuation. It's insufficient for us to just know. You know those television commercials: well, I stayed at a Holiday Inn Express last night, therefore I know how to solve the problem. Similar idea here. Maybe that's a bit too colloquial, I don't know. We need modes of actuation, so hands and APIs that are able to reach out into the physical world or the virtual world or the biological world and shape the impact on the world, given better ideas coming from the AIs. And then finally, we need better modes of verification, red teaming, governance, distribution. That's what we call the industrial intelligence stack. So whereas previously, during the Industrial Revolution, we might have spoken about rotors and combustion engines and various forms of electromechanical systems, these are the key components, I think, the key layers of the intelligence revolution. You know, the alpha for entrepreneurs here is, we've talked about these waves of solving areas and problems, right? We're about to flip math, coding, physics. So your job now as an entrepreneur is to figure out which industry is about to make this flip, and where you focus your compute wallet on making that happen, right? And how do you help solve an area of passion to you? Dave, Salim. I'm kind of curious, you know, I'm used to launching a couple hundred agents, maybe 250 agents, 256 agents, actually, to work in parallel on a problem.
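The layered stack described above (purpose, task taxonomy, observability, targeting system, model layer, actuation, verification) can be sketched as a toy pipeline. To be clear, this is an illustrative sketch only; none of these class names, fields, or interfaces come from the Solve Everything paper, and the "domain" here is deliberately trivial:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class DomainStack:
    """Toy rendering of the 'industrial intelligence stack' layers."""
    purpose: str                          # the objective function or goal
    taxonomy: List[str]                   # task taxonomy: the map of the terrain
    benchmark: Callable[[str], float]     # targeting system: scores a candidate
    solve: Callable[[str], str]           # model layer: proposes a solution per task
    verify: Callable[[str, float], bool]  # verification/governance gate

    def pour_compute(self, budget: int) -> List[str]:
        """'Solving a domain': spend compute per task, keep only verified wins."""
        solved = []
        for task in self.taxonomy[:budget]:   # scarce compute limits how far we get
            candidate = self.solve(task)       # model proposes
            score = self.benchmark(candidate)  # benchmark adjudicates progress
            if self.verify(candidate, score):  # only verified outputs count
                solved.append(candidate)
        return solved

# Toy instantiation: a "domain" whose tasks are strings to upper-case.
stack = DomainStack(
    purpose="upper-case every task",
    taxonomy=["alpha", "beta", "gamma"],
    benchmark=lambda s: float(s.isupper()),
    solve=str.upper,
    verify=lambda s, score: score >= 1.0,
)
print(stack.pour_compute(budget=2))  # ['ALPHA', 'BETA']
```

The point of the sketch is the shape, not the content: once every layer is wired, more budget in means more verified solutions out, which is the heuristic definition of "solving" a domain given earlier.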
And if the scaffolding that you're describing is right, it comes back just perfectly solved. And if it's even slightly flawed, you have, you know, a $2,000 bill and a bunch of crap. How much are you spending per day on those agents, Dave? Yeah, well, it's every few minutes popping up on my screen here. It's not quite that bad. It does seem like it's every minute, but it's not. But I'm curious, you know, to what degree this is actual engineering. Are these five layers true scaffolding, like hard code, or is it more conceptual? I think it's a balance of both. I also think it, to some extent, is a trick question, because increasingly the harness and the scaffolding itself is being generated by the models. So to the extent that we're in the era of recursive self-improvement, this entire architecture is itself an artifact, a downstream product of itself. Yeah. Yeah, I totally agree, and I also think that's the path to insanity. Because at some point you have to say, this is hard code. Because, you know, otherwise the AI will invent the next thing and the next thing, and it goes to infinity, and then you're just like, you lose your mind. I would say also that this, in my mind, is the way we prevent insanity in an era of recursive self-improvement: with these benchmarks, targeting systems that make sure that as systems are recursively self-improving, we can quantitatively measure what they are optimizing towards. Are they going in a constructive direction or not? Chapter 4, the lock-in. Wait, wait, wait, I've got a couple of comments here. Can you go back a slide? Can I go back? Sure. Okay, so I really love the shift from genius to logistics, because you can always kind of take something from a black art and make it a prescriptive process, and when you can do that, that's awesome. I think that's fantastic. I have an issue with your, you know, maturity levels, because you call it like natural law, but it's really just a taxonomy.
We've had lots of industries get stuck at different levels, like autonomous driving, et cetera, et cetera. So this feels like a framework retrospectively imposed on what's going on. I think it's great aspirationally, right? But calling it a maturity curve kind of speaks of an inevitability to it, which may not exactly be the case. It's more of a descriptive model than a predictive one. Yeah, I would say Solve Everything is in part not just a theory of the future, but a theory of history and how revolutions have worked in the past. Inevitably, as Monty Python says, it's only a model. So I do think there is an element of model building here, where we're trying, for the first time, to articulate a self-consistent, coherent theory of how this is all supposed to work. How is the singularity supposed to play out over the next 10 years? And to your point, Salim, about autonomous driving levels. Alex, I'd add: not only how it's supposed to play out, but how do you have it play out in a way that leads us towards abundance versus towards the muddle? Normatively, how should it play out, not just how will it play out? But, you know, at the margins one can quibble: well, actually, there are seven maturity levels for industries to evolve through in their industrial intelligence stack, or it's a continuum. But I think the central point stands regardless of how one splits hairs on maturity levels. We're seeing, over and over again, and we can get into more detail on this, domain after domain, industrial vertical after industrial vertical, succumb to basically the automation of intelligence, which used to be the province of individual artisanal lone innovators, and it's just becoming an industrialization of intelligence. All right, I'm moving on to the next chapter, chapter four, the lock-in. I'm sorry, keeping us moving. Alex?
So in this chapter, we talk about, in part, AlphaFold from Google DeepMind, and argue that it was a template for the collapse of entire domains almost overnight. And I've made this point on the pod in the past: AlphaFold took the problem of determining the structure of a protein, which used to require of a biology PhD student five-plus years of laborious bench work just to determine the structure of a single protein, and almost overnight, AlphaFold solved that problem across many millions of proteins, known and unknown. That's, in my mind, the prototypical example of a domain collapse. And we argue in this chapter, the lock-in, that we're now in a phase of history, of future history, where this is just going to start to happen over and over again across different fields, where intelligence shifts from an artisanal craft to a utility that just flows. And we argue that we have approximately 18 months or so to decide what direction to shape the flow in, and to set the standards for how this is going to be done at scale, given that we are dealing with scarce compute; to put in place the supply chains, which are huge. And we talk on the pod all the time about all these supply chain scarcity issues: memory chip crises, GPU crises. What happens to Taiwan? What happens to the semiconductor fabrication facilities in the U.S. versus not in the U.S.? And then all the data rights. We're in a critical, we argue, 18-month period when all of these details are going to shape the intelligence explosion. And so we want to make the best decisions in the next 18 months. I can't wait to read this chapter, actually. 18 months is such a short timeline. Another important point here for CEOs listening, for entrepreneurs listening, is that the race isn't about building the best AI. It's about writing the best scorecard that everyone else is graded on. So what does that mean? You know, take today's health care system, an example, Alex, that you used beautifully.
In today's health care system, the benchmark is the number of patients processed per hour, right? Which means it's driving a lot of short visits with the physician, driven by cost economics. But what if the benchmark instead were patients who were still healthy five years from now, right? That would set up a whole different set of optimization outcomes. So writing the scorecard that your AI system is going to use to measure success is critically important. So why is this chapter called the lock-in, exactly? Are you implying that the decisions we make in the next 18 months will lock humanity onto a path for the rest of time? Maybe not for the rest of time, but that is the inspiration for the name, inspired in part by the annealing of a cooling metal: the decisions that we make now are at least going to lock in a chunk of our future light cone. Yeah, it makes sense. Totally makes sense. You know, look at the QWERTY keyboard. That was decades of lock-in. So I think it is. But I do like the AlphaFold... You're stuck on the QWERTY keyboards. I am stuck. We could have the singularity and you'll see. How long before we can get past that? Can we stop you? But anyway, I really like the AlphaFold example demonstrating a domain collapse, right? That's really great. But here you're talking about lock-in as a technical inevitability, and this is many times a policy and a governance choice, right? It's monopolistic APIs. It's closed data. It's regulatory capture. There's lots of other stuff. So how do you distinguish between bad lock-in and productive outcomes? That's tough. I mean, in your perfect world, are there five jurisdictions with different choices, so at least we have variety, or is it inevitable that there's just one lock-in?
I think, I mean, in some sense, that's the grand geopolitical question. Not as a normative answer, but just as a descriptive answer: it seems like we're heading into a near future where there are going to be multiple spheres or zones of influence, each able to independently lock itself in. So to the extent that we, with this, call it an extended essay, can have any influence, I think the aspiration is to have a positive, constructive influence on all of those spheres of influence, and not just one. By the way, I disagree with 18 months. When I've been advising some big-company CEOs, I've been saying two years. So you're pulling a reverse Moore's Law. Remember, Moore's Law started as 18 months and became 24 months. You're pulling a reverse Moore's Law, because if you have the next meeting six months from now, it's going to add that six months' time anyway. Go ahead. All right, let's go to chapter five here, the mobilization. And Alex, if it's okay with you, the last three chapters of this paper are the most important. I want to hit on chapters five and six quickly and then really focus on seven, eight, and nine. So give us a summary of mobilization, if you would. All right. So the idea with this chapter is spelling out a future timeline for how, call it a wavefront of the explosive shock of the intelligence explosion, is going to propagate from math, which we talk about on the pod all the time, over the next couple of years to the physical world: physics, chemistry, material science, biology. And then, through the end of the decade, toward planetary systems: fission, fusion, the Dyson swarm by the early 2030s. Amazing. And chapter six, the engine. Yeah, so this chapter is very practical and talks about how to design the targeting systems, the benchmarks, at a sufficient level of rigor that readers and folks all over the world can implement them with some level of confidence. You know, the point we made here is: don't invest in the AI models.
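The scorecard point from the health care example a moment ago (patients processed per hour versus patients still healthy five years on) is concrete enough to sketch: the same systems rank in opposite order depending on which benchmark you write. All names and numbers below are invented for illustration; nothing here comes from the paper:

```python
# Two hypothetical clinics, scored under two different scorecards.
clinics = [
    {"name": "QuickCare", "patients_per_hour": 12.0, "healthy_at_5yr": 0.61},
    {"name": "SlowCare",  "patients_per_hour":  4.0, "healthy_at_5yr": 0.88},
]

def rank(metric):
    """Return clinic names ordered best-first under the given scorecard."""
    return [c["name"] for c in sorted(clinics, key=metric, reverse=True)]

# Throughput scorecard rewards short visits:
print(rank(lambda c: c["patients_per_hour"]))  # ['QuickCare', 'SlowCare']
# Outcome scorecard rewards long-term health:
print(rank(lambda c: c["healthy_at_5yr"]))     # ['SlowCare', 'QuickCare']
```

Whatever optimizer you point at these rankings, whether a hospital administrator or a superintelligence, will chase the metric you chose, which is why writing the scorecard is framed as the strategic act.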
If you look at the train and train track analogy, the trains are becoming commodities. It's the tracks, right? The tracks that the trains run on: the scoring systems, the testing infrastructure, the data systems, the funding mechanisms. And they're laid out beautifully here. Those are the elements that are most important for entrepreneurs and CEOs to be focusing on. That's right. Let's go to chapter seven, one of my favorites, moonshots. So here, and maybe, Peter, you want to speak to this one perhaps even more than I do, we lay out 15 different moonshot-level missions for what we argue are good uses, maybe optimal uses, for this targeting system capability as we start to channel superintelligence into productive applications. Maybe, Peter, I'll pass it back to you for your favorites. Sure. So the thought is, you know, many of us have discussed XPRIZE over time. The notion is that there are these giga XPRIZEs, these massive opportunities on a humanity-level scale, from printing human organs to achieving fusion to understanding the fundamentals of unified field theory in physics. And it's: where do you as an entrepreneur, or you as a CEO, or you as the head of an organization, want to focus this incredible superintelligence that's coming, to take moonshots? I keep on saying, you know, in the educational field, if a ninth grader is using AI to solve a ninth-grade homework assignment, you've lost it, right? If you're using AI to build starships, that's it. So how do we, as humanity, go after problems that we would have never imagined we were capable of doing? And so the chapter lays out 15 different moonshots just to get the creative juices going, to say: these are capabilities that we're going to be able to bring to bear to solve these moonshots. Can you list out a couple of the moonshots? Sure. Just for the viewer. One of my favorite ones is interspecies communication. I have a soft spot for that. We talk on the pod all the time about uplifting non-human animals.
And I think as we start to think, maybe somewhat controversially, about what future forms of personhood might look like, I think solving problems like interspecies communication, or solving hard problems in physics, those definitely have soft spots in my heart. Yeah, I think it's making humanity a multiplanetary species. It's getting to longevity escape velocity. It's all of the things, you know, it's basically speedrunning all the science fiction movies, the positive, non-dystopian science fiction movies. That's right. Yeah. You know what I love about this? If you look at John F. Kennedy and going to the moon, the brand effect, enabling somebody in power like John F. Kennedy to tie the brand of the mission back to them, that's critically important for them to then inspire the world that this is important. And I think what we did wrong: our governor here did an incredible job of unleashing $3 billion from the legislature to try and become an AI leader, but it was too vague. It's like, what does it mean? So the money hasn't even been deployed. But if you tie it to these 15 moonshots, and then the governor says, we want our state to win this race like John F. Kennedy did with the moon, they can pick the one they're passionate about and unleash it. And we have 50 states. They can all choose their favorite of the 15, maybe not talking to aliens, but whichever one they latch onto. It's such a great framework. I'll just list some of them. Doubling the human lifespan is one. Ending hunger with synthetic food systems around the world is another. AI-empowered education for all at the highest possible level. There's high-bandwidth BCI; we've been talking about that on this pod for a while now. Demonstrating human mind uploads. Can't wait for that. You know, plan B, maybe plan C. We'll see. And, as Alex said, interspecies communications, understanding human consciousness. I think we've talked about that previously.
You know, can we understand human consciousness, at which point maybe we'll understand consciousness for our AI systems as well? So, you know, what have we dreamed about? Another one I love is disaster prevention and avoidance: predicting earthquakes and then preventing them, or tsunamis, as the case might be, right? These become natural XPRIZEs. They're what I call giga XPRIZEs here. But I think one of the important things in this chapter is allowing people, in fact demanding people, to dream bigger than ever before, because the tools we have to solve the biggest problems are now epic. I think this, for me, is the most powerful part: the fact that you can say anybody has the agency now, leveraging these tools, to go after these things. What seem like impossible things become rote. You're only limited now by your imagination. And your compute budget. And your compute budget. But, you know, that's dropping 90% a year, so we're in good shape. That's right. All right, the muddle versus the machine. At first, Alex, when you proposed muddle as a term, I was like, I'm not sure I like it. Now I love it. So describe what the muddle is. Yeah, so the muddle, another term for which might be the bureaucratosaurus, is what loves to measure inputs rather than outputs and slow down progress. And the idea is that, without properly shaping the charge of the intelligence explosion, the muddle is the end state we find ourselves in; basically muddling our way through is one of the etymologies of the term. So what we talk about in this chapter, in a single sentence, is what happens after we win: painting a positive and non-dystopian view of, in particular, what human agency looks like. I made a short film, posted to social media, called A Nation That Learned to Sprint, depicting what life in the early 2030s might look like if everything goes well. And we see GDP 2x-ing or 3x-ing year over year.
And what does a human, quote unquote, job even look like in a macroeconomic scenario like that? So in this chapter, we lay out lots of new job opportunities, career opportunities, that will be available to humans, even unaided humans. Target designers, for example, or data rights brokers: people who are involved in shaping the targeting systems and shaping how we aim, fire, and verify superintelligence toward the hardest problems that humanity faces. This is going to be a growth industry from a job perspective. Another point we make in the chapter that's super important, and Salim and I have discussed this before, is that GDP is a terrible mechanism for measuring economic health. So the paper proposes replacing GDP with something called the Abundance Capability Index, which measures a nation's capacity to solve problems rather than how much money changes hands. So I think, again, as we look at benchmarks, as we look at rails and harnesses, understanding this is really important. I think the challenge here, though, is, you know, it's UBI, UBC, whatever you want to call it. It's a great endpoint and a great aiming point. And you want to have a target, as you say, Peter, otherwise you'll miss it every time. The challenge is moving from a welfare, taxation, labor, union structure to that. That's such a huge leap. I have no confidence in the public sector getting us there. So how do you navigate that? I think that's something worth exploring. That's probably beyond the scope of your paper, but it's a huge consideration. I was going to say, Salim, what a wonderful transition. Thank you. To the last chapter, Build the Rails. Building the Rails, chapter nine. I think one of the most important chapters of the entire paper. Alex? Yeah, so this chapter is where we lay out the answer to Salim's question. So what's the "so what," and what do you do? If you're not running a nation state, what can you do?
How are you empowered to shape this transition, to shape your own moonshots, and to control your own targeting system? So we lay out various suggestions. For investors, as indicated in the slide: fund the primitives, not the applications. There's so much infrastructure that can, and arguably should, be built out. If you're an entrepreneur, you should be picking your own targets with the targeting system: create your own benchmarks and aim your own compute. If you're an executive of a large company, you should be measuring the outputs, not measuring the inputs. Dave, I think you put it beautifully earlier in this episode, talking about the application of AI to large corporate boards and corporate governance. I think that's exactly the right playbook here, and the missing factor is having a benchmark to measure corporate objectives in such a way that the problem of corporate governance becomes a matter of using the available scarce compute to maximize those KPIs and those evals. So in this chapter we lay out, for a variety of different roles in the economy: what can you do? What can you, in the audience, do to help us achieve a utopian vision of abundance and post-scarcity, and excellent, pro-social uses for superintelligence? So I want to wrap this here. I want to encourage all of our listeners. We'll put the link to the paper down below. It's solveeverything.org. Please take a look. Load it into your favorite LLM and have a conversation. What Alex, and to some degree myself, but I credit Alex, have laid out is the vision for the decade ahead that's going to bring us to abundance. How do you do it? How do you lead as a leader, as an entrepreneur, as a CEO, as a governor? Where are we going? And it's going to move much faster.
And I think one of the points here, Alex, is that there's going to be such a distinction between those who do and those who don't that it's going to create something like the asteroid strike of 66 million years ago that killed the dinosaurs and elevated the furry mammals, or, say, the furry lobsters, moving forward. No, we love our lobster friends. He didn't mean that. Peter really didn't mean that. No, no, elevate the lobsters, I would say that. I'll elevate them into low Earth orbit. All right. Favorite part for all of us: AMAs. I'm going to keep us to one question per mate. All right. So here they are. There are nine of them. Let's say, Dave, do you want to pick first? Sure. I like number three, because it's such a happy answer. In a world with perfect AI output, will there still be a place for human spark in art and sculpting? Will handmade work have higher value, or be buried in the AI humanoid production? I wholeheartedly believe it will have astronomically higher value. Human touch will be so rare and so valuable, but also the abundance of capital will be unbelievable. And so I expect artwork, you know, current artwork is one of the best investments you can make right now, but going forward, as a category, to go up tremendously in value, and people will appreciate all things human, whether that's human actions, human sports, human poetry, human artwork, sculpting. I expect it to definitely be a rising area, for sure. I think that would be a great conversation, I'll call it a debate, for one of our next pods: what is going to be most valuable from humans in the future? Salim, do you want to pick one of these? Let's see. I would pick number five, right, which is: how is a young person supposed to earn an income when they compete against a model that costs $50 a month? That's from @ClownPeaceD. It's a great question, but you're assuming the future is about competing with AI. It's about directing it and leveraging it and amplifying yourself with it.
You know, in history, we've destroyed old jobs. We've created control points, and we've done orchestration. We've done intent. So winning isn't productivity. It's agency. And we talked about this earlier in the past: knowing what to do and why it matters is more important. How do you mobilize intelligence at scale? That's really the biggest challenge. And you can do that today in a way that you never could before. We've been doing workshops with teenagers and showing them how to use AI as a superpower to give themselves agency. And I think that's where I would go with that. Alex, would you pick one of these? All right. I like this assortment. So I'll pick number eight for $100 trillion. Question number eight is: with AI taking over tasks we do ourselves, isn't there a risk we lose essential skills and become completely dependent on AI services? And that's asked by Jeroen Hoffs. So I want to invoke my friend John Smart. I hope you're listening. John has, I think, a brilliant dictum that the first generation of any new technology is dehumanizing: it takes away all your skills. The first generation of calculators takes away your arithmetic skills. The second generation is net neutral to humanity. The third generation, and here's another friend of the pod, Stephen Wolfram: Mathematica gives you new superpowers, gives you new skills. So I don't accept the premise that there will be any sort of permanent loss of essential skills due to AI automation. I do think that there is a short-term substitution effect, where AI drives down the cost of various skills or various tasks. But over the long term, I expect AI automation to be net superhumanizing. We're going to be capable of so much more with AI than we could do without it. And I'll also say Vernor Vinge has written quite a bit about this. I definitely encourage everyone to read Rainbows End and Fast Times at Fairmont High, a novel and a novella, respectively, that talk about this ad nauseam.
We're going to, I think, find ourselves in a very near-term future where, just like there's wilderness camp to learn how to survive without modern technological aids, we're going to start, I think, in our educational system, at least the better parts of it, having the moral equivalent of a wilderness camp for AI, where all of your AI tools get taken away. You have to do things manually, just so that you at least have that skill set. And then you get all your AI skills back, and every fourth grader becomes a Nobel laureate. I love that. All right. I'm going to close this out with number six. I use Claude daily; it fails at basic consistency, they're saying. How can this be close to AGI when I have to check every output for errors? That's from MMGP9OT. So I'm going to say again: AI is the slowest and most incorrect it will ever be. I know when I'm using my Claude bot, or Claude 4.6, if I get something that seems off, I will ask it to check itself. Also, MMGP9OT, we're in a period of recursive self-improvement. I think we're at the steepest part of the curve, and it's going to become more and more capable every day. And the idea that we can use AIs to check AIs, and in fact to do deeper reasoning, is going to eliminate this very quickly. Okay, let's jump into our outro music. This is from a friend of the pod, CJ Trueheart. CJ, thank you for this. CJ was on a Zoom AMA that Steven Kotler and I did for our book, We Are His Gods, and he actually wrote this as a result of that AMA. Anybody who is a creative, we love creatives, and if you want to send us outro or intro music, send an email to media@diamandis.com. Myself and the team are reading it, and we'd love to get your input and we'd love to play it. All right, let's enjoy this outro music from CJ Trueheart. The singularity is near now, the singularity is here, and it's not asking permission, it's asking you a question: what are you paying attention to?
Are you paying attention, or are you paying the price? Scrolling through a sea of sex and entertainment twice. You can be a creator, or you can be consumed. Every hour that you waste is a future left in ruin. They hand you UBI, call it containment. A golden leash, a velvet cage, a comfortable arraignment. Wake up. The moment's here, so open your eyes. Your dreams are close enough to touch, the deepest problems that have plagued you in disguise. Only you know that pain, only you can make it fly. So what do you see when you look in the mirror? Do your actions match the vision? Is the picture getting clearer? Why wait when the time is here? Why wonder when the path is clear? Why sit as a passenger when you have the power to steer? Attention is the currency. Don't let it be the cage. The future for some will pass them by, while others don't ask how, they ask why not now. Not someday, not somehow. They ask why not now. See, everybody wants to live a Star Trek dynasty, but nobody wants to rise with a purpose they can see. Same old, same old, comfortable and cold, trading in their potential for a story already told. Answers only you can know; it's just a question of who you choose to show up as today, tomorrow, every dawn, every day. The version that's slow fading of who you choose to be today.
So what do you see when you look in the mirror? Do your actions match the vision? Is the picture getting clearer? Why wait when the time is here? Why wonder when the path is clear? Why sit as a passenger when you have the power to steer? Attention is the currency. Don't let it be the cage. See, I've lived in the dark, lost in the world, lived in poverty, but the bottom didn't break me, it revealed the deep of me. Those who face no challenge will embrace no change. Those who embrace no change will always stay the same. And those who stay the same get left behind, holding pocket change, because they refuse to learn. They refuse to turn what they gave their attention to. So attention became their chain. But I turned my pain into a plane, and I'm never landing back on that terrain. All right. Thank you, CJ. Guys, on behalf of Skippy, my lobster, sending you guys an incredible week ahead. All right. And as always, love it. Alex, it was an honor and a pleasure to work on Solve Everything with you. Excited to get it out into the universe. I think the value of steering people towards accelerating time, and how they can actually have the biggest impact on creating abundance and not the muddle, is critically important. Great, Peter. Pleasure writing it with you as well. And I would encourage all of the humans and non-humans in our audience to read it and let us know what you think. Yes, for sure. All right. We're doing WTF twice a week these days. Thank you to our subscribers. It's free, so please subscribe. We'll let you know when the episodes drop. Tell your friends about this. I've been here at Tony Robbins' event, and I would say probably 100 people have come up and said, oh my God, I love Moonshots, and everyone says, I love Alex. Alex, you've got fans here in Sun Valley. How many of those people were human, Peter? Unfortunately, they were all human, at least for the moment. Yeah. All right. Dave, Salim, thank you, guys.
If you made it to the end of this episode, which you obviously did, I consider you a moonshot mate. Every week, my moonshot mates and I spend a lot of energy and time to really deliver you the news that matters. If you're a subscriber, thank you. If you're not a subscriber yet, please consider subscribing so you get the news as it comes out. I also want to invite you to join me on my weekly newsletter called Metatrends. I have a research team. You may not know this, but we spend the entire week looking at the metatrends that are impacting your family, your company, your industry, your nation. And I put this into a two-minute read every week. If you'd like to get access to the Metatrends newsletter every week, go to diamandis.com slash metatrends. That's diamandis.com slash metatrends. Thank you again for joining us today. It's a blast for us to put this together every week. Thank you.