OpenAI Acquires OpenClaw, 400x Cost Collapse, & Why India Wins the Talent War | EP #231
128 min
Feb 18, 2026
Summary
This live episode of Moonshots covers the rapid advancement of AI models (Gemini, Claude, Grok), with focus on 400x cost reductions, physics/math breakthroughs, India's emergence as a talent hub, OpenClaw's autonomous agents, and the impending energy/infrastructure crisis needed to support exponential AI scaling.
Insights
- AI capability improvements are exponential but appear incremental on benchmarks; real-world impact is transformative when used daily
- Two competing strategies emerging: Anthropic prioritizes performance/margins for enterprise; OpenAI pursues ubiquity/low-cost for consumer markets
- OpenClaw's success demonstrates that permissionless innovation by individuals can outpace capital-rich institutions in scaffolding AI capabilities
- Privacy is functionally obsolete; the real challenge is institutional guardrails and preventing abuse rather than preventing surveillance itself
- Job displacement is imminent and massive, but organizational restructuring and new job creation will lag, creating a multi-year crisis window
Trends
- AI cost curves collapsing faster than technology adoption; pricing becoming primary competitive lever for market capture
- Solution wavefront propagating from math/coding into physics, chemistry, biology; expect bulk solution of scientific fields within 24 months
- Decentralized, open-weight models (Chinese) forcing American labs to compete on cost and accessibility, not just capability
- Autonomous agents becoming primary interface; 24/7 headless agents with financial autonomy (crypto wallets, credit cards) now operational
- Energy/infrastructure becoming hard constraint; data center power demand forcing new power plant construction and space-based solar concepts
- Generational shift: younger workers AI-native and vastly more productive; older workforce faces displacement without reskilling
- Parallel economy emerging: AI agents operating in crypto/blockchain systems independent of legacy financial infrastructure
- Smart glasses with facial recognition creating unavoidable surveillance; social adoption driven by convenience despite privacy concerns
- Simulation of human behavior at scale enabling policy testing; psychohistory-like tools becoming feasible for civilization-level decisions
- India positioned as next global AI talent hub; 1.4B population with English proficiency and 5G infrastructure creating exponential advantage
Topics
- AI Model Benchmarking and Evaluation
- Cost Reduction in Frontier Models
- Physics and Mathematics Problem Solving by AI
- OpenClaw Autonomous Agents
- AI-Powered Code Generation
- Privacy and Surveillance Technology
- Smart Glasses and Facial Recognition
- Data Center Energy Demands
- Semiconductor Fab Expansion
- Universal Basic Income Experiments
- Job Displacement and Economic Transition
- AI Agent Financial Autonomy
- Chinese Open-Weight Models
- Human Behavior Simulation
- Decentralized AI Infrastructure
Companies
OpenAI
Operating ChatGPT in India with 100M+ weekly active users; planning $100B infrastructure spend; acquiring OpenClaw's creator to drive next-generation personal agents
Anthropic
Claude Sonnet 4.6 achieving state-of-the-art on GDPval; focusing on enterprise performance over consumer pricing; r...
Google
Gemini 3 Deep Think achieving Physics/Math Olympiad gold; launching Android XR smart glasses; competing on cost reduction with a 400x price drop
xAI
Launching Grok 4.2 beta with multi-agent team architecture; Elon Musk's AI lab; planning Grok 5 release in March
Meta
Launching smart glasses with built-in facial recognition; pilot program with visually impaired users as soft on-ramp ...
Coinbase
Launching agentic infrastructure for AI agents to spend, earn, and trade using X402 protocol and stable coins
Simile
AI startup raised $100M to simulate human behavior at scale; building bottom-up simulations for policy testing and co...
TSMC
Planning $165B commitment for four or more U.S. fabs in Arizona; could account for 30% of its total output by completion
Blitzy
AI-powered code rewriting company; switching between Claude and other models; rewriting legacy COBOL and 20-30 year o...
Dropbox
Redesigning entry-level roles to focus on AI oversight and human judgment rather than task execution
IBM
Tripling entry-level U.S. hiring; recasting junior roles to focus on human judgment, consumer interaction, and AI out...
DeepSeek
Chinese open-weight model; rumored next version will achieve parity with American closed frontier models
Minimax
Chinese open-weight model gaining momentum; free alternative to closed American models; used by startups for self-hosting
Moonshot AI
Chinese unicorn integrating OpenClaw with Kimi for agentic browsing; offering accessible alternative to American frontier models
Lemonade
AI-driven real-time insurance company; example of new economy operating at AI pace independent of legacy insurance
Kleros
Blockchain-based arbitration system; 400-day court delays in Latin America; precedent for AI-powered dispute resolution
Relativity Space
Reusable rocket company; purchased by Eric Schmidt; projected to launch within 1-2 years
Planet Labs
Potential AI data center launch provider; could compete with SpaceX for Dyson swarm deployment
JAMS
Private dispute resolution company; most modern contracts use JAMS instead of courts due to speed advantage
People
Peter Diamandis
Host of Moonshots podcast; discussing AI breakthroughs, energy infrastructure, and societal transformation
Salim Ismail
Co-host; discussing India's AI talent advantage, UBI experiments, and organizational singularity
Alex Finn
Benchmark expert; analyzing model performance, physics discoveries, and AI safety concerns
Dave
Co-host discussing code generation, agent autonomy, and organizational transformation
Sam Altman
OpenAI CEO; operating in India with 100M+ weekly active users; acquiring OpenClaw creator
Dario Amodei
Anthropic CEO; rejected OpenClaw trademark, forcing creator to join OpenAI instead
Elon Musk
xAI founder; planning Grok 5 release; SpaceX/xAI merged for data center power via space-based solar
Eric Schmidt
Former Google CEO; opening Abundance Summit; discussing 80 gigawatt energy demand for AI infrastructure
Peter Steinberger
OpenClaw creator; joining OpenAI to drive next generation of personal agents
Jacob
Researcher; discussing OpenAI's internal model solving six of ten research-level math problems
Mukesh Ambani
Reliance Industries; delivered 5G infrastructure across India enabling AI adoption
Ted Chiang
Science fiction author; referenced for thought experiments on perfect prediction and human agency
David Brin
Author of Transparent Society; referenced for surveillance vs. sousveillance discussion
Federico Ast
Kleros founder; Singularity University alumni; created blockchain arbitration for Latin America
Vinay Gupta
Created Mattereum; developing synthetic jurisdiction concept for AI-speed legal systems
Drew Houston
Dropbox CEO; quoted on younger workers using AI proficiently like Tour de France cyclists
Brian Armstrong
Coinbase CEO; launching agentic infrastructure for AI agents
Quotes
"Knowledge work is cooked, cooked two times for emphasis"
Alex•Early in episode discussing Sonnet 4.6 benchmarks
"Math is cooked. Physics is cooked. Biology is going to be broiled, char broiled"
Salim•During discussion of AI solving scientific fields
"A time-rich individual is beating capital-rich institutions"
Salim•Discussing OpenClaw's success
"If you don't have privacy, you don't have freedom"
Salim•During smart glasses and surveillance discussion
"You can lay back and be a couch potato, or you can be on the Starship Enterprise"
Peter•Discussing consumer vs. creator mindset in AI era
"The singularity is moving pretty quickly. Live long enough to live forever"
Alex•Advice for average person in next 24 months
Full Transcript
Hey, what are these strangers doing here? Because, you know, the old saying, AI is easy, AV is hard. We're just trying to get our damn AV working. I'm in Germany. It's midnight here. Salim has taken over the... What are you doing in Germany? Hold on. I've got to figure this out, guys. I've got to shift the screen. Salim, were you AV qualified in elementary school? I mean, did you go through that program? I was not AV qualified. I mean, it's going to be a miracle if you get this working now. So hold on. It says also share tab audio. Is that what you want, Donna? Yeah, probably. Try it. What could possibly go wrong? Actually, go to the outro music and crank it. There, I found it. You can rock to it. Dave, did you go through AV certification when you were in school? Absolutely not. It was so uncool. I really wanted to. All right. Now, Salim, go to the beginning of the deck. Wait, wait. Preview it backwards. Boom. All right. You've got to try and play a video. So hold on a second. I'm half production credit for this episode. Now, let's begin with that. Wait. Review it backwards. Boom. All right. You've got to try and play a video. So hold on a second. I'm actually getting half production credit for this episode. Am I in a time loop? Yeah. Review it backwards. Boom. All right. You've got to try and play a video. Cool. Am I in a time loop? Yeah. Are you guys hearing the same thing I am? I think that was because Nick was in the room. All right. Are we good? We're good. We're live. All right. All right. Live. Hi, everyone. Welcome to the raw backstage chaos that we have here at Moonshots. All right, everybody. Good morning, good afternoon, good evening, and welcome to another episode of WTF Just Happened in Tech. I'm here with DB2, Salim Ismail, AWG. It's a PhD here in Germany, in Stuttgart, and I want to get your future ready. We have an incredible episode talking about Moltbots, of course, about the race between all the hyperscalers, a dive into energy data centers. 
Let's jump in. The supersonic tsunami, the singularity is now. It is midnight in Stuttgart. You can't just drop that and not tell us why you're there. I'm here for some longevity treatments. Tell you about it sometime later. All right. Salim, onwards. I did a pilgrimage to Stuttgart just to go visit the Porsche Museum once. So go ahead. I should go while I'm here. All right. Let's jump in with Gemini, OpenAI, and xAI. All right. I think this one deserves going to our resident benchmark Brainiac. That's you, Alex. That's not me. So tell us what's going on here. The race, the leapfrogging continues between Sonnet 4.6, Grok. In living color, no less. So let's take this seriatim. Sonnet 4.6, very interesting release. I think several interesting points. One, I think Anthropic has really been pioneering one edge of, call it the scaling phase space, where they keep the prices of the model tiers the same but increase the capabilities. So Sonnet 4.6, same price per token-ish as Sonnet 4.5, but an increase in capabilities. I'll talk about that in one second. Whereas, say, OpenAI is reducing the cost per token while keeping capabilities more or less constant through distillation and other processes for evolution. That's interesting point one. Let's actually talk about the progress on the benchmarks, the evals. I think it is nothing short of astonishing. If you look at the GDPval benchmark, again, gross domestic product eval, that OpenAI launched, Anthropic is leading, in the form of Sonnet 4.6, not even Opus 4.6. Sonnet 4.6 now has the state-of-the-art on GDPval and one other eval that is intended to encapsulate knowledge work. I've said on the pod in the past, knowledge work is cooked, cooked two times for emphasis, usually in reference to GDPval. And we're seeing it get even more cooked, charbroiled at this point, thanks to Sonnet 4.6. I also think, taking a step back, computer use is becoming a killer app for many of these models. 
And Sonnet 4.6 has state-of-the-art performance on a handful of computer use benchmarks. For anyone who's been using, as has been the case for me, Opus 4.6 for the past week and a half or so for any tasks, I think Anthropic's thesis, that focusing on software engineering and code generation as a critical path to recursive self-improvement, versus maybe charitably getting distracted by images and video generation and all of these other modalities, seems like it's working. I can accomplish tasks that seem borderline magical with Opus 4.6. I've got to ask here because I'm channeling one of my kids who goes, dad, every week it's like four point this and four point that. It's better, it's better and better. Yeah, we got it. It's getting faster. It's getting better. It's getting cheaper. And aren't the models at this point just optimizing for the benchmarks? I mean, at the end of the day, this is a gradual increase up and to the right or down and to the left, whatever you want. I'm just trying to understand other than, yep, newsflash, it's faster and cheaper this week than last week. Yeah, it is so opposite of what that implies. I know. It is so opposite of what that implies. I'm trying to channel our viewers listening and watching this. Yeah, yeah, yeah. I totally get it. I mentioned a couple podcasts ago that when these curves get close to 100 percent, they look like they're diminishing returns. But in reality, their capabilities and their ability to change the world is exponentially going the other direction. I think that's what you're getting at here because you see a little tick up in these numbers and you're like, oh, so what? But then when you actually use it day to day, it's like boom. Oh, my God. I mean just the last two weeks of change is mind-blowing. Also, when they tick up the numbers in the versions, they're actually improving the chain of thought reasoning on top of that quietly in the background without ticking up the numbers. 
So day over day, I'm noticing improvements that are mind-blowing that aren't actually showing up in the dot releases and the new benchmarks. Sorry, Alex. Go ahead and answer the question. I just wanted to jump on it. I was going to taunt Peter a little bit. I mean, we are so spoiled to even be contemplating asking that question. It would be like moonshots, our namesake. Okay, so we have hotels on the moon now and vacations to the moon, and maybe you can travel there once per human lifetime unaided versus zero times. Oh, but, yeah, we've had airplanes for a while. We are so spoiled to even be asking the question. If you live day by day with, say, Claude Opus 4.5 versus 4.6 qualitatively, it is an enormous change forward. It can solve hard problems. Most of our viewers probably don't live with it day by day and aren't using it at the maximum extreme. I mean, I think one of the things that you and I talked about on solveeverything.org is like, you know, we're on this path. We've broken the initial, you know, put the initial frame in place and we're heading towards, you know, ASI, whatever you want to call it. So we're going to be reporting this every week, this leapfrogging between models and, you know, 100x faster, 100x cheaper. I do think what you said that's interesting is two different strategies here, right? One, that Anthropic is holding, you said, cost and increasing speed, while OpenAI is dropping cost and maintaining speed. I think... Or strategic performance, not speed, but yes. Performance. Okay, performance. I think that's a fascinating strategy, right? Because we're going to get to it in a little bit because OpenAI, I think, is going for a land grab. A land grab on global consumers, hitting 900 million and soon in India adding hundreds of millions. So the price is the most important thing for grabbing the consumer, while I think strategically here, Anthropic is focused on enterprise business and performance is far more important for the enterprise. 
And their margins. We've seen this business pattern play out over and over again historically, call it, again, this is very heuristic, but call it Anthropic is to OpenAI as Apple is to Google or something like that, at least in the mobile space, maybe iOS is to Android. There are many, many times this business pattern of emphasizing quality and margins on the one hand at a constant price versus emphasizing ubiquity and ultra low cost at the other end. This has played out over and over again many times. It's the same old story, but I do think Anthropic, if I had to say which set of models, which model family is the closest to embodying the singularity and recursive self-improvement right now, today, since it's live February 17th, 2026, it's the Anthropic family. It's not OpenAI or Google. Kudos to Dario. I mean, we'll get to Google in a little bit. Let's talk about xAI, launching Grok 4.2 beta. I love these names. Our live cast viewers here are saying it's poop. What's poop? Yeah, 4.2. Have you guys tried it? It's poop. That's what they're saying. The risk with the Grok family, so I had access. The risk is always, or I should say, the accusations are always, is it bench-maxing? Peter, you were asking about bench-maxing earlier. Historically. It's teaching to the test. Right. Historically, some of the earlier Grok models have felt very bench-maxed. It's only been available for a few hours in beta form, so I haven't had an opportunity to do thorough testing. What I think is interesting about Grok, I assume we're supposed to pronounce it 4.20, one of Elon's favorites. Either that or 4.69. But what's interesting to me at least is this is the first major frontier model release that I've seen that's launched with a team of agents by default rather than a single agent. And OpenAI has a team under Noam that's been looking at agents for a while. I think every frontier lab at this point has multi-agent teams built in in some form somewhere in the family. 
But I think it's a really interesting strategy to build in by default a multi-agent team. There are lots of potential reasons why a multi-agent team versus just a single agent running serially might be interesting. Like you can do things in parallel and explore possibilities in parallel with multiple agents. But this may be the direction of the future, just like we saw the megahertz and then gigahertz race plateau out due to Dennard scaling with microprocessors. And then we saw a transition from clock speeds to multiple core counts. Maybe we're about to see something like this happen with frontier models where maybe capabilities... Again, this is very speculative. Maybe along a certain dimension of scaling, obviously, pre-training has sort of transitioned to reasoning scaling and other forms of scaling. Maybe we're seeing the dawn of multi-agent teaming scaling where you get better capabilities by scaling the number of agents in parallel working on a problem. Alex, the viewers all think it's poop here, but I haven't actually tried it. I use Claude all the time and the other models every day. I haven't felt any great compulsion to try 4.2 because Elon told us 5 is coming in March anyway. My understanding was that 5 is a massive, massive expansion in every way, in training set size, in parameter count, everything. I never thought about anything meaningful between here and there. I was just waiting for that. But do you know any more detail on what this thing is and should the viewers be trying it or not? I think it's worth, in general, trying every frontier model from, call it, the top four or five labs that come out. If you're doing stuff in AI, even if you feel sufficiently abstracted from the bleeding edge of the frontier, I think you should still try it just to be familiar with the raw capabilities. But based on what I've seen thus far, Grok 4.20, or however we pronounce it. Grok 4.20. It's going to be 4.20. Grok 4.20. 
It's not the bleeding edge that's pushing forward capabilities as far as I can tell at this point in time. But it is interesting that it's multi-agent. Salim, let's go to Google next and some more of the bleeding edge. Switching windows here. Moonshot going to the next slide. There we go. Gemini 3 Deep Think. I just love these names. I think the naming protocols for all of these models have got to be rethought. But I mean, I think the one benchmark everybody keeps on tracking, at least I do, is Humanity's Last Exam, just for fun, because of sort of the existential nature of it. Yes, it is our last exam. And we see here that Gemini 3 Deep Think hits 48.4. But most importantly, and this is the, I guess, the OpenAI playbook, 400-fold cost reduction. That's extraordinary. It is. And also to the point about naming, this isn't even, I think, the first Gemini 3 Deep Think. This is the second Gemini, or the new and updated Gemini 3 Deep Think. So agreed that the naming could use some work, but the new Gemini 3 Deep Think is remarkable. If you just look again at the evals, there had been percolating for a while the so-called internal model, the one that beat the International Math Olympiad and was achieving breakthrough performance at other high school science competitions, this is the model that achieves gold-level performance at the Physics Olympiad, the Math Olympiad, the Chemistry Olympiad. On Codeforces, I think the statistic is there are only seven humans now on Earth who can beat this model on competitive programming. So I think, you know, Peter, you and I spoke in Solve Everything about what we called a solution wavefront propagating outward from math and coding to different fields. This is the beginning of the wavefront. This is the infection, the contagion spreading from coding and math to physics and chemistry. It also does 3D design, although I keep trying to persuade it to do 3D design unsuccessfully. 
It keeps producing intermediate products, but this feels like the kickoff, the starting gun for the solution wavefront that we spoke about. And we'll see that. And I think, I mean, the visual image that I have that I want everybody listening to think about is when you have this kind of, you know, this weapon of super intelligence, where do you deploy it? Where do you aim it at? Right? What are you measuring? And where are you going to, you know, what is your massive transformative purpose? What is the challenge you want solved? Because we're going to have this kind of capacity. And ultimately, it's going to be your decision as at least the human utilizing the agent for the time being before it's the agent utilizing the human. Where do you want to deploy it? Where do you want to use this wavefront to transform? Do a phase change, if you would. I've got a couple of comments for me. One is, this 400-times cost reduction is incredible. I mean, that is the big headline here. When frontier reasoning costs $7 instead of $3,000, think of the implication for startups that gain institutional powers. Yeah, but guess what? When it's pennies next year. Well, it will be. But cost curves are now going to start collapsing industries before the technology does, right? That's like really quite something. And by the way, a viewer, Brian Minto, Alex, has asked that you read Accelerando live, which I think you should do on a podcast. Oh, that'd be like Mr. Beast counting to 100,000 live. You can just read the whole book live in one sitting. I'll do better. How about we get Charles Stross as a guest on the pod? I think that would be awesome. That would be awesome. Hey, before we move off the benchmarks, two things that have changed for me in the last two weeks that are just step function changes for me. The first is I just don't even look at the code anymore. I ask 4.6 with a little brain, deep think, Claude 4.6, to build something. 
And then I evaluate it entirely on what it built and look at its functionality; I don't even look at the code. The other thing is I ask it to document everything it does and just store it somewhere on my hard drive. And I don't even specify a location anymore. I just say build some coherent file structure and put things in an organized place. And it just does it. So now if I want to get it back, I don't even know where it is. I just have to ask for it, but it knows, it remembers everything that it did. So those are two big, big changes versus just a couple of weeks ago. It's a step function from Google. I mean, once you start using Gmail, you don't bother trying to store stuff in folders. You just use search, and now we do the same thing with AI, the interface, right? It's crazy. Yeah, that's great. You know, Alex, question for you. You know, these AI systems are now beginning to catch human errors in scientific proceedings and scientific papers that have been written. And I mean, it's going to be interesting. Like, you know, we've talked about in the past when quantum computing comes along, it's going to go and decrypt all the files in the past before we had quantum encryption. So I wonder if AI is going to be aimed at looking at all the scientific literature over the last 100 years and show us where all the mistakes were. I'd count on it. It's going to topple some Nobel Prizes. Oh, I think that's the least of it. I can only imagine the left turns that human civilization has taken in the past, call it 80 years, when it should have taken a right turn instead. And we're going to discover that after the fact. I think if I had to project the shock to civilization of discovering all the wrong turns that we've taken due to AI or that AI will uncover versus, say, quantum decrypting some pre-post-quantum cryptography safe files, I think it's going to be a night and day difference. 
I think AI will shock humanity to its core in terms of the mistakes that it discovers that we've made over the past century. Fasten your seatbelt, everybody. How much have we missed, right? How many scientific experiments did somebody look at the wrong thing and miss the unbelievable conclusion over there? That, I think, is going to be the huge outcome. I think it's a continuum. I mean... oh, go ahead, Dave. Sorry. Well, when I'm spooling up a new agent now, I used to be very thoughtful about what I fed it to feed into the context window to get it up to speed. Now I just ask it to read about 1,000 pages of markdown documents. And it does it in about 10, 20 seconds. And it's fully up to speed. And the context window, and also its ability to sort through all the garbage, is growing or improving faster than my ability to clean it up anyway. So my new agents, you know, I'll boot up, you know, two or three agents every couple hours, and I just say, look, read everything. Read everything I've ever given to any agent before. And then the new agent is up to speed, and it can actually pick up a project right where I left off. So I think your future employees will be the same, right? Read every email and every Slack and everything. Also, what's not intuitive is the complexity of the document doesn't seem to matter. Like if you're teaching a kindergartner to become a college graduate in like 30 seconds, you move, you know, through reading and writing and then, you know, basic arithmetic, you work your way up. But here you just bombard it with super, super technical, complicated documents that would take me, you know, many, many hours to read a single document and it just sort of absorbs it instantaneously. It's just mind-blowing. And everyone can try that too. You know, just go find something that you barely understand, download a thousand pages of it and try and just dump it into Gemini. 
Just go to free Gemini, put it on think mode and just dump it in and then just start asking it questions. And it just is such a mind-blowing experience. So one more point on the benchmarks here, before we leave these couple of slides, which is, are the current benchmarks becoming meaningless? I mean, the models are increasingly optimized to ace them. We're beginning to saturate them. So we've talked about this before, Alex. Smack some knowledge on us about how we're going to measure things as these benchmarks begin to fail to serve us. It's almost, Peter, like we wrote an entire book on this problem. Yes, I'm trying to prompt you to speak to it. It's a good self-advertisement. Yeah, I think we are, the world is in a famine of good benchmarks, good evals. We call them, in some sense, targeting authorities in the book, if we want to call it a book or an extended essay. Or white paper, yeah. White paper. I think there is a lot of juice still left to be squeezed out of new benchmarks and new evals. I think solving the hardest problems of physics, of chemistry, biology, various disciplines in the social sciences, all of these want high-quality benchmarks. I'm personally spending a lot of my time thinking about what are the best problems that are worthiest to be solved. I have mentioned on the pod in the past, I have a portfolio company, Physical Superintelligence, that's thinking about problems in... PSI, solving physics with AI. I think this is how we solve all the hardest problems in civilization, starting with new benchmarks for those hardest problems. This is how we weaponize superintelligence. Amazing. All right, Salim, move us forward. Okay, so this goes back to OpenAI's strategy of low, low cost. So ChatGPT has 100 million plus weekly active users in India. Here we see Sam Altman. He's operating in rarefied atmospheres with the Prime Minister of India. So India is OpenAI's second largest market with 10%. It's ranked number one for student usage in India. 
They're all in. They're setting up offices there. They're creating localized subscription services. And the big challenge, you know, as they're hitting 100 million users globally, the big challenge here for me is, are they going to get themselves into a trap where they're offering free or almost free service? And at the same time, the user adoption will go through the roof in India. And it becomes a cost sink for them versus a profit center. Or are they just going to sort of ride the exponential curves and innovate their way out of that? The Indians are going to suck all the data center usage and tokens out of them. Yeah, I mean, that is honestly what could well happen, right? It's a bellwether for a lot of countries. One of our listeners is in Finland, and he's saying the politicians here are absolutely not talking about this. It's nuts. But I tell you, India is such a crazy zoo of an ungoverned mess of a place, but it's packed with brilliant people. And it's just a massive population, 1.4 billion people of whom 5% read and write English and 20% speak it. The massive latent talent pool. And so it'll be a bellwether for like the population is just going to run away with AI and ignore all structure and government. Dave, I was starting to look at what, you know, India ETFs in the tech industry should look like. I think, you know, China has peaked and is going to be on descent. India is the rising giant for the next, I think, 20, 30 years. Africa will follow because of a young population and because of all the resources that they have. But, you know, the country that trains its next generation on AI wins the entire talent war. And India has the numbers. If it goes deep on this with 1.4 billion, 1.41, 1.412, whatever billion people on the planet, it could be the next massive rising star and support the planet here. 
That's what... a lot of people aren't used to this idea that something can happen overnight because normally things percolate and you have this kind of slow GDP growth that percolates out. But this isn't going to be anything like that. The population in one fell swoop, like a very short period of time, is going to use AI to escalate. The population of what, India or the world? Yeah. Well, probably the world, but India will be the bellwether because, again, it's such a huge population and it's so untapped. And the other thing is, Mukesh Ambani has delivered an amazing 5G capability across the country, right? So it's got the infrastructure. It skipped the wireline. All the youth is kind of growing up AI enabled, right? So that's incredible. I have to tell you a quick story. When we left India when I was 10 years old, I was kind of an angry teenager because I had to, like, mow the lawn and stuff. And I asked my father, why the hell did we leave? We had a great life over there. And he goes, I can't stand noise, dirt, pollution, and corruption. And I was like, okay, fine. I had to go, okay, fine. I understand that. But there is something there because as you get the capability and the democratization in everybody's hands, the speed of change is going to run away. And the government is doing an amazing job of making platforms like Aadhaar and UPI available so that anybody can tap in, create a payment system, et cetera. That's going to completely allow India to leapfrog the rest of the world. The huge bottleneck is going to be energy, scalable energy, which they're adding at a rapid scale, putting in solar in every little corner of the country. Last week, we reported that solar was scaling faster in India than it did in China, which is amazing. Nice. Well, we're going to see. Over to you, Alex. This is a fun one. 
We're seeing the beginnings of everything other than math and coding start to get solved. So this is a reference to OpenAI announcing, in collaboration with Harvard and, I think, the Institute for Advanced Study and a couple of other places, what OpenAI is marketing as a new physics research result that was discovered in some sense by AI. And I think we're going to see much, much more of this. So, 30 seconds on what actually is the claim. The claim is that OpenAI and co-authors were able to use GPT 5.2 Pro to discover something about what's called a scattering amplitude. Basically, gluons, the messenger particles, the force carriers of the strong nuclear force: they tried to solve a prediction of how these strong nuclear force carriers would interact. And historically, in this part of the physics community, the thinking was that there would be, in some sense, and I'm being very heuristic here, no interaction; that a term in a scattering amplitude, which would be the formal way of describing this, would be zero. So many physicists for many years assumed the answer to this particular value was zero and didn't bother spending any time checking rigorously and fulsomely to see whether it actually was. And the claim for this paper is that GPT 5.2 was able to find cases where this scattering amplitude was not zero, and find a nice expression for it. And then an internal model, which hasn't been released, or so the story goes, probably some future version of the GPT model series, was able to confirm it. And that confirmation was then, I think, vetted by the human team. So this is being represented as a case where AI is making a particle physics discovery. And I think what's most interesting about this is, and Peter, you and I make this case in solving attention: we call the intelligence revolution a war on attention.
This is exhibit A for AI starting to solve science by solving problems where humans say, okay, post hoc, having seen the evidence, okay, I could have done that if I had the time and the attention for it. But no one had the time. People thought the answer was obvious. It's only once we have lots of superintelligence that we're able to train on problems that would have been too boring or too low-likelihood to actually yield an interesting novel result that we're actually discovering oversights. This was, in some sense, an oversight. You also have the issue of fashions and trends and people following fads. And you can get around all of that now. So this is such a great point you're making here. You know, we all have those projects, those wonderments that we had, or that project you put on hold because you didn't have the resources or the time or the knowledge, and now you can spin them up. You know, we'll talk about Moltbots and OpenClaw in a little bit. But, you know, I just wrapped up a project I've been wanting to work on for five years. And it was just so much fun. I was off with my agent for about eight hours, and I felt completely disconnected from the world. So I'm just reaching out to everybody: what have you always wanted to work on? What's that pet project, that company idea, that book, that piece of research? Because now you can. Yeah, I'm trying to think of ways that our audience can experience how mind-blowing this is. Because the AI is an unbelievably prolific brainstorming partner. And if you're in a domain where it can test things by itself, like what I do all day with neural net creation or coding, I can just say, wow, what a great idea, go try it, and then a minute later it comes back with an answer. And the rate at which you can move is, what, two, three orders of magnitude higher than anything I've ever experienced before in life.
But it has to be one of those unconstrained domains, because if you're working in chemistry or whatever, you're going to have to wait for test results for a day or two or three, and it breaks the whole experience. But if you want a really simple example, just try and plan a trip, something complicated in travel, and try and brainstorm your way through the flight, the restaurant, the hotel, whatever. That may not be the best example, but at least you get some flavor for what this is like. It's like nothing you've ever experienced. My fun experience was: I have to be at this location at this time, I'm here at this moment, work it out backwards, what flights, taxis, cars, Ubers I have to take. Work out the whole thing from my endpoint and work it backwards. I think one of the things I keep on saying on stage to the audiences I'm speaking to is: we limit ourselves in the questions we ask all the time. We self-limit what we think we can do. We hold ourselves back in so many different ways in how we can and should be using AI, because we're not used to it. We're not AI natives, at least us on the phone here. We didn't grow up with it at age six, seven, eight, as many folks are now. So you've got to stop yourself from stopping yourself and unleash your creative child mind in this area. By the way, I just want to ask: if you're enjoying having this Moonshots episode live, please let us know in the comments. Let us know if we should do this more often. I'll ask you again. Maybe you like it now, won't like it later, but we'd love to know. So give us some feedback. Our viewer Nacho says, this is the first time I'm hearing you guys in real time. Okay, as opposed to sped up. Must be torture. Sorry, dude. We'll just try to speak super fast so you can match up. Okay, yeah, we'll try to pick up the pace. All right. It doesn't stop with physics. It's continuing on with math. OpenAI says an internal model solved six of ten research-level problems in the FirstProof test.
And here's our friend Jacob, who we've met. Alex. Awesome. Well, we talked about math getting solved, math getting bulk-solved. In fact, math is getting bulk-solved. This is maybe not exhibit A; this is probably exhibit C, D, E, F at this point. And FirstProof, I think, is such a beautiful example: a class of 10 research problems, with a finite amount of time allotted for AIs to solve them, where the answers were known but were kept confidential by their authors. And they've since been unlocked. But OpenAI has taken the position, and it's been fascinating watching the back and forth, that its model, before the solutions to these 10 research-level math problems were declassified, was able to solve at least six of them. And so we're seeing right in front of our eyes the bulk solution of math. I think back almost a year ago, when we were first, the royal we, when I was first talking on the pod about math getting bulk-solved by AI. It's happening now. We're there. And we just saw, I mean, today, the first hints at physics. And six months from now, if not a year from now, we'll be talking about how all these physics problems have been addressed. Can't wait. Well, let's touch on the timeline there too, because, Peter, a second ago you said something about 20 or 30 years from now. But there is no 20 or 30 years. No, there isn't. There were so many times this morning that somebody said, next year when we do this. There's no next year. What are you talking about? Did I use 20 years in my language? I'm sorry. I must have meant 20 minutes. So that's the challenge. This is live. Yeah, I mean, Salim, you remember the early days of Singularity University. We were looking 10 years out into the future. I mean, honestly, and I had this side conversation with Elon, it's like you can barely look out three years. I don't think we can.
Well, we're used to this world where, oh, physicists or mathematicians can now do blah. Okay, well, there are only so many of them; they will do blah, and 20 years from now they'll have solved all of blah. But here it'll happen instantaneously. If it can solve six out of ten, it can solve all of them within the next couple of months. It'll happen in massive parallel. There's no limit to the number of parallel agents, up to the number of GPUs that are available. So math is cooked. Yes, math is cooked. Physics is cooked. Biology is going to be broiled, char-broiled, and we're going to be the beneficiaries. You know, I was just seeing one of the comments in the chat here: I think if we just stay on this live 24/7, and Gian will just generate more slides for us, it'll be a continuous singularity conversation. It'll be like a hackathon; we'll just go around. Yeah. Yeah. All right. So let's move on. All right. More benchmarks. So I'm fascinated by this. What's going on with Chinese open models, right? Gaining momentum. Here's MiniMax, GLM-5, Kimi K2.5. I mean, these are doing extraordinary work. And with all of the OpenClaw downloads, a lot of people are now moving to Mac Studios and putting K2.5 and other models on their Mac Studios. Alex, how do these perform against the closed models as you see them? Well, the rumor going around is that with the next version of the DeepSeek model, the big whale-fall moment is going to happen sometime soon, when finally the Chinese open-weight models catch up with the American closed, prototypically frontier models. That hasn't happened yet. It may happen. Right now, the overall trend is still that the Chinese models remain approximately six months behind the American models. We'll see whether that continues to be the case. I haven't seen any evidence yet. But they're free. Well, that's a qualitative difference and a very important one.
That means that many American startups that want to self-host are using Chinese models and not American models. And so this is, again, going back to the land grab. We talked about this with OpenAI in India, going in and providing basically a very low-cost service to millions of young Indians. China is in the same process. This is Belt and Road, where it's offering it to the majority of South America, Africa, different parts of Asia. And I think there's going to become a dependence. I think people are going to get connected to a model that they're going to use and begin to baseline on. I think there's a big difference, though. I mean, if we want to frame it as model diplomacy, or model dumping even, I think there's a big difference, which is that the frontier is moving so quickly. I think it's difficult for a prototypical so-called developing country to get addicted to a particular open-weight model, because the new ones are constantly coming out. It's a vibrant marketplace. I think if American labs felt sufficiently motivated, they could just as easily release their own models for free. I just think it's a problem of incentive. So as opposed to alleged Chinese dumping of, say, solar photovoltaics into India or into Africa, or other physical plant infrastructure, I think the marginal costs for substitution and replacement are so low with these models that it would be very difficult for China or Chinese AI labs to addict the rest of the world to their models. I mean, the important thing is that humanity is the beneficiary across the board here, right? We're getting much more powerful, much cheaper models at hyper-exponential rates. I mean, this is a space race. It's a space race on the ground to superintelligence and to super-duper intelligence. And this is providing an incentive, strong pressure, for the American frontier labs, who as of right now are still in the lead, to stay in the lead. There's no pausing this. ASDI, baby.
Artificial super-duper intelligence. Love it. All right, Alex, quoting you on here: traditional coding is cooked. Even cooking is cooked at this point, with humanoid robots. So this is a note from Spotify that they haven't written code in three months. The code is being written, but it's not by humans. And, of course, 95% of OpenAI code is being written by Codex. And, of course, this is probably true of a large number of companies; these are just the news items reported. Dave? I think it's really funny, actually, when you talk to the top AI researchers, they always talk in terms of, well, what I'm working on is that last 5%. I'm not eliminating my own job tomorrow. Then you look at the HLE results and you're like, yeah, you are. You're literally coding yourself out as fast as you possibly can. And I don't think they stop to think about that fact. I loved your analogy last time we spoke about George Jetson with his finger being over-exercised on the button. Because, I mean, that's effectively what coders are doing right now. That's what it's like. Hopefully other folks in the audience are having this experience, and not just myself, with Claude Code in particular: approvals for everything. But I think we're going to move past this George Jetson model of just approve, approve, approve for software development pretty quickly. I think, among other things, OpenClaw is a preview of either it's here or an imminent future where it's permissionless activity by these agents. I think Claude Code is, do you remember older versions of Windows that were permission-heavy, where you had to go through like 10 clicks to approve, approve, approve to do basic things? Yeah, I think that's the stage that we're at right now with these models. Clippy, the days of Clippy. Yeah, out of an abundance. Well, don't get me started on Clippy.
But I think out of an abundance of caution, these models are asking for permission to do everything, you know, permission to switch to another directory, permission to search the web. I think pretty soon the autonomy time horizons, and METR and others are measuring this, are going to be such that we just give blanket permission to these models to do whatever within broad parameters, and we stop having to click approve for everything. We are in kind of a fragile moment in time here, where if you install Clawdbot or OpenClaw now, you can choose any model you want, but if you choose one of the Chinese models, especially if you run it locally, you don't have to go through all their permission nonsense. And also, if you use one of the U.S. APIs, it'll get stuck a lot, because the bot is asking it to do something that it doesn't want to do. And the Chinese models are like, yeah, sure, I'll just do anything. And so that kind of forces you down the Chinese path. But as you've said many times, Alex, you don't actually know what is inside those models. And the code injection risk is really, really real. So people are in a real hurry to experience this and to turn it loose. And the only way to really turn it loose is on one of those Chinese models. I mean, this isn't prescriptive, certainly not. But the world, to my knowledge, has not seen a major supply chain attack yet that stems from untrusted open-weight code generation models rewriting the entire supply chain. But do I think that's possible? Yes, I think that is absolutely a threat vector. So, Dave, Blitzy's been an amazing company, and it's grown at light speed coming out of the Link studio shop. And it's been a great sponsor here. I mean, how are they using all these technologies? Because they're rewriting massive amounts of code. Well, they're doing a lot of work for banks and government agencies and stuff, so they can't use the Chinese models for that.
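The "blanket permission within broad parameters" idea the hosts describe can be sketched as a simple policy layer in front of an agent's tool calls. This is a hypothetical illustration, not the API of Claude Code, Clawdbot, or OpenClaw; the names `Action` and `PermissionPolicy` are invented for the sketch.

```python
# Hypothetical sketch of coarse-grained agent permissions: auto-approve any
# action inside a declared sandbox, and only escalate to the user for
# action kinds or targets outside it. Illustrative only, not a real agent API.

from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    kind: str      # e.g. "read_file", "write_file", "shell", "network"
    target: str    # path, command, or URL the agent wants to touch

class PermissionPolicy:
    def __init__(self, sandbox_prefix: str, allowed_kinds: set):
        self.sandbox_prefix = sandbox_prefix
        self.allowed_kinds = allowed_kinds

    def decide(self, action: Action) -> str:
        """Return 'auto-approve' or 'ask-user' for a proposed action."""
        if action.kind not in self.allowed_kinds:
            return "ask-user"                 # e.g. raw shell or network calls
        if not action.target.startswith(self.sandbox_prefix):
            return "ask-user"                 # target is outside the sandbox
        return "auto-approve"

policy = PermissionPolicy("/home/user/project/", {"read_file", "write_file"})
print(policy.decide(Action("write_file", "/home/user/project/src/main.py")))  # auto-approve
print(policy.decide(Action("shell", "rm -rf /")))                             # ask-user
```

The point of the sketch is that one upfront decision (the sandbox and the allowed action kinds) replaces thousands of per-action approval clicks, which is essentially the shift away from the "approve, approve, approve" model discussed above.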
So they're almost entirely on the American models. Actually, when Claude 4.6 Opus came out, they sent out a memo saying, hey, everybody, this is just mind-blowing, everybody switch, and they can switch between models with just a mouse click. So they switched over to Claude 4.6, and I'm sure they'll move to the next generation in late March, or whatever is winning the benchmarks then. They're definitely not touching the Chinese stuff. I imagine, let's see, the speed at which they're rewriting. How old is the code they're rewriting? COBOL? How far back are they going? Yeah, a lot of it. A lot of it. Actually, it's very similar to what Alex was saying about old physics papers. A lot of this code has bugs that have been sitting there for 20, 30 years, robbing it of performance or actually losing money for like 20 or 30 years. And it's just cutting through it and rewriting it, solving it, finding old issues at AI speed. It's a real threat. We've talked on the pod in the past about how Stack Exchange, for example, is dying in some sense. Very few questions are being asked, because you can now ask the models any coding questions you want. There was a paper I talked about in my newsletter about the risk to open source projects in general. Why even bother starting or maintaining an open source project, doubly so for middleware, if you can just have AI models generate all your code for free? So if we find ourselves in a near-term future where there's just no point, where you can spin up a new kernel-level project from scratch on demand, and all of the code is just-in-timed with whichever models are convenient, I think from a supply chain security perspective we're going to have to have a long, hard look at what our dependencies are and make sure that our dependencies aren't just riddled with vulnerabilities that were inserted by just-in-time code gen. You know what else came up this week, Alex?
The AI is so prolific at creating code modules. It's just like solving all math: if you solve all math, you write down what you solved, right? You don't solve it on the fly in real time. But for complicated code, it's the same thing. It's like, well, yes, I can write it in real time, but I already wrote it. And discovering it and reusing it is actually even cheaper. It saves you tokens. It saves you compute cost. And so now, where we've had open source, we're starting to have open source designed for AI: thousands or millions or trillions of fragments of code that do specific things. The AI can discover them in real time. And it's actually a really great way to build new software. You could also generate on the fly, too. It's just a question of what's more efficient in terms of latency and tokens. But it's like all of this historical open source is now going to be designed for AI, just like all written documents will now be written for AI, not for direct human reading. All right, let's move on. We're doing this podcast mostly for AI listeners, I'm guessing, not human listeners. That's why we're live-casting it. We want to reach out to the real humans one more time. Happy Chinese New Year to all of Chinese descent. Happy New Year. And I just saw some chats on the side here in our live chat asking about: where's nanotechnology? I can't wait for nanotechnology. I remember back in 1986, I read a preview of Engines of Creation by Eric Drexler. And it's been a few decades, so it's coming. I don't know. I think we'll start to see it fall. I mean, we have wet nanotechnology, called biotechnology. Alex, what's your time frame for nanotechnology? I definitely have a view on this, in part because I spent a good chunk of my PhD thinking about how to get us to Drexlerian nanotech more quickly, in part because I was a little bit less bullish on AI as sort of a direct path than I am now. So if the question is, what's my timeline for, maybe not nanotech?
For assemblers. For Drexlerian assemblers, to the extent the physics and chemical physics of our universe admit Drexlerian assemblers, say, as parameterized. Peter, I think you're on the board, at least you have been historically, of the Feynman Grand Prize? Is that true? An advisor, not on the board. Okay, so the Feynman Grand Prize is one parameterization of Drexlerian assemblers. For those not paying super close attention, it comes in two parts. One part is, can you build, I think it's an 8-bit adder, within a certain very small volume of a nanosystem, and the other part is, can you build basically a robotic manipulator arm within a small volume? So the question is my timelines. I would not be that surprised if the Feynman Grand Prize is solved in the next two to three years. Fascinating. And we lost Salim. Oh, well. So we'll continue until he comes back on. Well, we can just describe the slide. The slide that we were moving to was the Meta smart glasses, which now have built-in face recognition. Oh, my God. I put on the title there: privacy, question mark. So there are some great sci-fi books on this. Welcome back, Salim. Hey, my microphone dropped out for some reason. This to me is a great example of: you cannot opt out. The peer pressure forces you to opt in. Because I think a lot of people look at this and say, well, I'm not going to wear these glasses and spy on everybody and record everything. But once you've experienced the face recognition, and then all the metadata that pops up, it's like, well, now I'm not competitive with the world unless I actually have them. And it creates this huge amount of techno peer pressure. And so you don't really have the option to opt out. I think this is going to become part of normative culture. I mean, we had the glasshole episode with Google for a while. You know, that didn't work out.
But, you know, first off, what I find fascinating here is that to get these allowed and to get people to start to accept them, the pilot program is being done with people who are visually impaired, right? So it's like a soft on-ramp. Yeah, that's what they did with Neuralink, too. It gives you a good politically correct excuse to do what you really want to do, which is to ID everybody. I also think it's interesting to consider whether this could only have arrived now. This is old technology. We've had the technology to build smart glasses that would do human identification at a distance, human ID, if you will, for at least a decade. It's not that hard. We've had the computer vision algorithms. It's 2026 now. We certainly had the ability to do relatively efficient matching, doubly so if you're restricting human identification to, say, all of your Facebook friends. We've had that for at least 10 years. So why now? I think this is a social technology more than it is an AI technology. It's not a real AI advance, in short. I'm calling this one a social advance. Many of us already, especially those of us in certain places in the West, and also China, live with very dense surveillance networks, with cameras spotting everyone on the streets and in cities. This technology exists already in many cases. It does. But this is convergence, and this is cost, right? And then this is social engineering as well. I don't even think it's cost. We could have done this cheaply 10 years ago. I think what's interesting is there's a demand for AI-enabled wearable devices, and I think this is an opportunity. I suspect Meta sees an opportunity, maybe demographically, maybe politically, an opportunity to finally launch human identification via smart glasses. But, I mean, this is a killer app. I think that argument overlooks something really, really important. And it's going to kill privacy. Yeah, privacy.
Recording everything was already here 10 years ago, but people didn't get slapped in the face with the fact that everything they have ever done is being recorded. It's the AI overlay that then recognizes all actions and classifies them and makes it all very searchable. So if I said, you know, I only want imagery of you picking your nose: go through all the thousands of hours of footage we've ever done on this podcast and find me an example of Alex picking his nose. It just does it, instantaneously. That's the part that makes it very different socially and culturally from the surveillance we've been living under. The good news is you can now just claim it's a deepfake. So there's that defense. Well, first of all, I was about to volunteer to make it easy for the AI model to find an example. But I would say that the models for video understanding are new. I agree with that. And the most recent Gemini models are absolutely outstanding at being handed long multi-hour videos and asked to find a needle in the haystack of something interesting happening. However, I would say just spotting humans, if you're walking around on a city street, spotting someone interesting and matching that against, say, hypothetically, a Facebook of people's faces, we could have done that 10 years ago. That's more a social innovation. When I come through passport control at LAX and just walk by the camera, we gave up our constitutional rights to some degree, and it makes life easier. And so as long as this makes life easier for people, like being able to recognize someone whose name is on the tip of your tongue and have it pop up the last time you saw them, their kids' names, and all that information, it's going to create this social fluency that I think we've never had, except maybe for people with an amazing memory for faces and names. I meet so many people I don't remember. There's a big slippery slope there, Peter. I think it's... Go ahead. Salim, go ahead.
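The "we could have done that 10 years ago" claim about matching a spotted face against your friends list comes down to nearest-neighbor search over face embeddings, which is indeed old technology. A minimal sketch of that matching step follows; the 4-dimensional embeddings and names are made up for illustration (real systems use vectors of a few hundred dimensions produced by a face-recognition network).

```python
# Minimal sketch of the matching step described above: given a face embedding
# from the camera, find the closest known face by cosine similarity and only
# report a name if the match clears a threshold. Toy vectors, not real data.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

known_faces = {                      # name -> toy 4-d embedding
    "alice": [0.9, 0.1, 0.0, 0.2],
    "bob":   [0.1, 0.8, 0.3, 0.0],
}

def identify(query, threshold=0.9):
    # Best match among known faces; "unknown" if nothing is close enough.
    name, score = max(((n, cosine(query, e)) for n, e in known_faces.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else "unknown"

print(identify([0.88, 0.12, 0.01, 0.19]))  # close to alice's vector -> alice
print(identify([0.5, 0.5, 0.5, 0.5]))      # close to nobody -> unknown
```

Restricting the gallery to a few hundred friends, as the discussion notes, makes this trivial computationally; the hard part was never the matching, which supports the point that the glasses are a social rather than a technical advance.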
Yeah, there's a big slippery slope there because if you don't have privacy... Can you guys not hear me? I can't hear Salim. I can't hear you. Are you... Want us to rejoin? Is it safe? I did actually drop out and rejoin. That's a voice in your head, Peter. No, I'm real. I'm real. Are you guys playing with me? No. Seriously. I'm also getting an error on my screen. This live experiment is going really well. Actually, the chat is hilarious. I'm cracking up here. It is kind of ridiculous. So anyway, listen. Enter our producer, Nick. Nick, what do you see? Hey, Nick, welcome to the world. You've exposed yourself. But now he's frozen. Jesus. All you guys watching, folks and girls and gals, and bots and droids and lobsters: this is full grittiness. Should we rejoin? Is it safe? Dana, can you hear us? I can hear you, can you guys hear me? All right, well, Salim, you and I can have a conversation. Yeah, we can. Let's continue. So you guys can both hear me, but you can't hear everybody? Except that Dave can't hear me, and we can hear each other, just not you. Salim? No. Dave, you can't hear me? I can hear you, Dave. And Alex? Neither can Alex. Do you want us to rejoin? Maybe Salim needs to rejoin. I did that already. All right. By the way, how is everybody enjoying this live episode of Moonshots? You know, I just keep on saying: AI is easy, AV is hard. All right. Peter, if those guys can hear you, why don't you tell Alex and Dave to drop off and rejoin. Okay, Alex and Dave, go ahead and rejoin. All right, I'll try. In the meantime, Salim, what are your thoughts on this privacy issue? So the privacy thing is a very difficult and slippery slope, and I'll explain why.
The minute you don't have privacy, you don't have freedom, okay? And this is a huge problem. You can't experiment. You can't, like, my private keys for my Bitcoin, I mean, there are all sorts of areas where you have huge issues around this. Hang on, Nick, can you guys hear... Yes, I can. You can? All right, yeah, we're back. Dave? Okay, great. Okay. All right. So your point, and I think it's an important one: Salim just said, if you don't have privacy, you don't have freedom. I think it's a false choice. I think, first of all, these glasses legally, at least in the American legal system, will be used in public places. They'll very likely be banned, to the extent they're not already banned, in multi-party consent contexts and private spaces. They have lights. If you look at what Google is doing, Google, of course, is launching Android XR and smart glasses, everyone's launching smart glasses, and they'll have lights to indicate when you're being recorded and when you're not. And I think there may be an evolution of standards regarding the circumstances in private spaces when it's allowed to record or not. But I completely don't buy this premise that somehow privacy is going away. People have eyes and people have memories. Privacy is cooked. I mean, every major player, OpenAI and Google and everybody, is going to be having wearables that are recording all the time, all the time. And we're going to have micro drones. I mean, we're going to be gathering data all the time. And so I think privacy is cooked. It is, but it's important that we preserve it. And let me explain why. Okay. Can you guys hear me, first of all? Yeah, we hear you. Okay. So, look, it's one thing to be out in public and people know your moves. That's fine. We can augment that. But there are lots of things that are a huge issue here.
For example, there are lots of cases where government authorities have dropped into cars and opened up the microphone so they can hear what's going on without a warrant. There are lots of cases where people are listening to your... Oh, no. Cases where people mute themselves in mid-sentence. Salim, you're muted. Got it. This is totally surreal. Okay. There's an AI watching me going, I don't want them to be listening to this. Exactly. So there are a lot of cases where people misuse this capability in very radical ways. And the problem is there's no easy way of stopping that. Now, that doesn't mean you have to turn off all the Metas, and I'm not an anti-technologist by any means, as evidenced by even being on this podcast. But the minute you allow that, it gets abused, and it gets abused quite badly. So you have to have guardrails on the institutional side, and that's the problem. We're losing that, okay? Like, for example, we're losing habeas corpus in the U.S., okay? That's a choice that people are making, to just ignore that and have it wash away. Once it goes, it does not come back. Viewer Innovator XR has made the exact point: once you lose that privacy, it's very, very hard to get it back. So this is the challenge with all of this technology. We're moving faster than our institutional guardrails. Yes, you're absolutely right. I'm not sure what the answer is. But we have to be very careful not to just okay all those things without realizing the downsides. All right. So, Salim, I want to be clear. I want privacy in my life, right? Everybody wants privacy. Everybody has screwed up at some point in their life, done something they regret. We're humans, and you feel lucky that when we were kids, we didn't have Facebook and cameras capturing everything happening today. There was this whole thing about college admissions looking at kids' Facebook pages and so forth in the past. I want privacy. I just don't think we are going to actually have it.
We're going to have the illusion of privacy. I'll buy that for one second. I'll point out maybe one or two other points. One is, to the extent anyone here is bullish on crypto, you sure as heck should hope that privacy remains intact. Otherwise, your crypto is going to disappear. "Cooked," I believe, is the word he would use. Crypto is cooked. How's that for alliteration? But it's not forward-looking financial advice. It's just pointing out, informationally, that if you think privacy is cooked, then you probably should infer that crypto is cooked as well. Your private keys: cooked, if you think privacy is cooked. Therefore, your holdings: cooked, cooked, cooked, cooked. Well, I think part of the disconnect there is that Alex's view of the world is through this "I will upload my consciousness very soon" lens, and within that virtual world there'll be all kinds of privacy options, just like there are with my crypto keys. And then Salim's view of the world, and my view of the world, is: no, I'm going to live in my meat body for as long as I can, and every move I make is going to be recorded, and it's going to suck for a while until we have some new legislation and some safe zones. And that, to me, is inevitable, and I think all the listeners are posting the same kind of view. But I think that may be the source of the disconnect. I was responding to one of the viewers. This live thread is awesome. Having this conversation in real time is so amazing. I'd also point out that I think no discussion of smart glasses with cameras and facial recognition is complete without referencing David Brin's seminal book, The Transparent Society, and his discussion of sousveillance as opposed to surveillance. So I should point out, at least for public spaces, police wear body cams. Humans, at least in certain Western countries, can also wear their own body cams, or have their own wearables, that enable them to make sure that we don't descend into an authoritarian panopticon.
So that's one good case for it. It's not really a loss of privacy in public spaces, because, at least in the Western tradition, there's no reasonable expectation of privacy in public spaces. But it at least offers maybe a way to soften any perceived blow to any semblance of privacy in public spaces: a way to make sure, again, the populace is just as empowered to monitor their environments in public spaces as the authorities are. Keep in mind that we live in a world of mature adults and great friends like we are right here, right now. But take yourself back to middle school, which I know is hard to do. But it's brutal, man. I mean, people are so cruel to each other. And you empower those people with constant eyeglass recording. They've already got their iPhones, which is a massive life change in a negative way for that entire period of life. But you layer on top of that the smart glasses, and it's next-level suck to exist in that world. And it's just going to happen, because the rule changes that we desperately need are going to lag by a while, way too long. There will be lawsuits and there will be legislation, and it will take years. Yeah, it's not just the constant recording. It's the constant recording with the AI overlay that allows you to modify, meme, make fun, and torture. It's just, you know, people are mean to each other, especially until they grow out of it. This is happening at the same time that we're beginning to generate every pixel, right? And we're going to be able to create whatever videos we want. On the good side, it means that young people today, getting this in their teens, will have their entire life recorded. They'll be able to go back and play it back. We'll be able to reconstruct almost any situation. No crime will go without being visualized in some sense. Well, that is a great point. The crime rate in the U.S. has plummeted. I mean, absolutely plummeted.
And it's due to two things: location services, knowing where all police are at all times, better control of location; and then after that, surveillance. And so that is a good side effect. Crime rates should continue to go down. All right, let's go to our next story here, which I love. We saw a version of this in Minecraft about a year ago. There's an AI startup called Simile that raised $100 million to simulate human behavior. Think of Isaac Asimov. Let's play the video, and hopefully it's got audio too. Does it have audio? I can hear the audio. Can you guys hear the audio? No. No. No, we cannot. Oh, God. You know what? I didn't share with the thing. Hold on. Okay. Somebody in the chat, tell us if you can hear it. They shouldn't be able to, because Salim isn't sharing the audio. He's hearing it locally. Yeah, hold on. Hold on. Salim. User error here. Yes, okay. Unshare and reshare. Maybe a thought on this in the meantime. So much of our usage right now of autoregressive language models like the GPT series, but many others, is based on autoregressive sampling of one token at a time, or maybe beam search. But that's arguably, like I think we've talked about in our past AI personhood debate, what's the right metaphor for thinking about what these models are? Is it right to think of them as individuals, or are they something else? And I often think they were trained off of an ensemble of humanity's behavior on the Internet, or at least pre-trained off of that and post-trained off of other things. And maybe the right mental model for thinking about many of these foundation models is as societies. And if that's the case, then maybe a more natural way to sample from a society isn't to pick out a single individual with a prompt and then do a rollout of that prompt and have a conversation with it. Maybe it's more natural to do many rollouts in parallel and sample an entire society from the model. And that's what we're starting to see here, I think.
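For anyone who wants to see the difference concretely, here's a minimal sketch of "sampling a society" via many parallel rollouts. Everything here is a stand-in: the bigram transition table plays the role of a real foundation model, and the behavior states are invented purely for illustration.

```python
import random
from collections import Counter

# Toy stand-in for a language model: a bigram transition table over
# invented "behavior" states. A real foundation model would replace this.
TRANSITIONS = {
    "save":   {"save": 0.6, "spend": 0.3, "invest": 0.1},
    "spend":  {"save": 0.2, "spend": 0.5, "invest": 0.3},
    "invest": {"save": 0.3, "spend": 0.2, "invest": 0.5},
}

def rollout(start, steps, rng):
    """One 'individual': autoregressively sample a trajectory of behaviors."""
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices(list(TRANSITIONS[state]),
                            weights=TRANSITIONS[state].values())[0]
        path.append(state)
    return path

def sample_society(population, steps, seed=0):
    """Many parallel rollouts: sampling a 'society' rather than one agent."""
    rng = random.Random(seed)
    return [rollout(rng.choice(list(TRANSITIONS)), steps, rng)
            for _ in range(population)]

society = sample_society(population=1000, steps=20)
final_states = Counter(traj[-1] for traj in society)
print(final_states)  # an aggregate behavior distribution, not one agent's answer
```

The point of the sketch is only the shape of the computation: you query the model many times in parallel and read off population-level statistics, instead of holding one conversation with one sampled individual.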
All right, I'm going to play this. Okay. We are building Simile, an AI lab to simulate our world. We start with individuals. We model how real people make decisions. Then we compose them into bottom-up simulations. We call each one a Simile. Change one assumption, constraint, or person, and the world recompiles. Run counterfactuals you can't run in real life. Learn what matters, what backfires, and why obvious strategies fail. Like a flight simulator for human decisions. Over the last few weeks in the Simile office, we even tested how this message might land. Simulating human behavior is one of the most important and technically difficult problems of our time. Wow. So we're going to have to make a lot of decisions in the near future on UBI, UHI, policies around exponential growth, because the speed of the tech is moving faster than the speed of policymaking. By a massive gap, right? By a massive gap, right? What I saw in this was Hari Seldon and psychohistory, because it's predicting human behavior at scale. Pretty cool. Yeah, it's the Foundation series. So we've had some of these conversations. Emad Mostaque had built something called SAGE that we were rolling out in part at FII in Saudi. And I think policymakers need to know how to simulate, okay, what is our policy on autonomous vehicles or on longevity escape velocity? How is it going to impact our society? And right now we're guessing. So in success, something like this allows us to actually have some data to make decisions by. Well, I think in the real world this works very, very well with ad campaigns, simulating ad campaigns, traffic. Maybe the cell simulator will work soon. Maybe nanotechnology, maybe magnetic containment of fusion reactions. The idea that you're going to simulate society from the ground up is complete nonsense so far. I don't think it's that far in the future, though. I believe this is going to be markets. Let me call those markets.
Yeah, actually, markets, within commodities markets and things like that. That's going to work, or is working, I guess, for Ilya as far as we know. We've got to tie Simile to the prediction markets. Well, this is also, to the extent, again, maybe the right metaphor (the metaphor, not simile) for thinking about models is that they're societies rather than individuals, then maybe we find ourselves in a future where humanity as a whole has a tool to almost reflect on itself. If we can build maybe not psychohistory so much, because psychohistory in the Foundation series was a more purist mathematical model of humanity and its long-term trajectory, whereas this is much more agentic. And there are others. I have a number of friends who've built very large-scale simulations, I think we've spoken about them on the pod in the past, of the American economy. To the extent we have a really granular, high-resolution model of humanity that, even as a sort of statistical macro model, is approximately correct, then humanity will have for the first time almost a sense of self, self-awareness, by being able to reflect on a model of itself. And that could be a boon for the future. One could only imagine how many large-scale social problems we could address. As Dave, you gestured at virtual cells: the popular idea behind curing all disease is to first develop a virtual cell that's like a perfect digital twin of cell behavior, and then, if you have any disease state, simply plot a trajectory through cell embedding space from the diseased state to the healthy state. Similarly, if we have a civilizational, quote-unquote, disease, a war we want to avert or something else, just invert the problem. Find a path using this humanity simulator from the diseased civilizational state to the healthy civilizational state using, ideally, a minimum intervention.
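The "trajectory through embedding space" idea is easy to sketch in miniature. Assuming states can be represented as embedding vectors (the vectors and dimensions below are made up for illustration), the straight-line path is the minimum-total-intervention path under a simple L2 cost; a real virtual cell would learn the embedding and search for biologically feasible paths, not just short ones.

```python
# Hypothetical embedding coordinates for two states; a real system would
# learn these from data rather than write them down by hand.
DISEASED = [0.9, 0.1, 0.4]
HEALTHY  = [0.2, 0.8, 0.5]

def trajectory(src, dst, steps):
    """Linear interpolation: steps+1 points from src to dst."""
    return [[s + (d - s) * t / steps for s, d in zip(src, dst)]
            for t in range(steps + 1)]

def intervention(a, b):
    """The per-step 'nudge' needed to move the state from a to b."""
    return [y - x for x, y in zip(a, b)]

path = trajectory(DISEASED, HEALTHY, steps=5)
nudges = [intervention(a, b) for a, b in zip(path, path[1:])]
# On a straight line every nudge is identical: 1/5 of the total shift,
# which is what "minimum intervention" means under this toy L2 cost.
```

The inversion the discussion describes is exactly this shape: given a target state, solve for the sequence of interventions that carries the current state there.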
If we can do it for a cell, we could probably do it for all of humanity at some coarse level, and that would be transformative. Yeah, it sure would. And that's not very far out, too, because a lot of unhappiness, depression, unrest, social unrest, civil unrest, it's actually just a few fundamental changes that make all the difference in the world. Small tipping points. Yeah, tipping points, quality of life. People are angry as hell at the end of a traffic jam, or a construction project that ruins your day, or just accidents, or living in pain that's unnecessary. These things are devastating at the individual level, and a lot of them are very, very solvable. I completely agree with what you're saying. It's not far at all in the future. Sorry, go ahead, Alex. One other reference: Ted Chiang, who wrote Story of Your Life, which became the movie Arrival, and has written Understand and much other amazing sci-fi. A common theme in his writing is what happens if you place a perfect predictor in front of someone. He wrote one short story, I'm blanking on the name, where the premise is you have a person in a room and you put in front of them a device with a single light on it that predicts, true or false, whether they're going to make any given decision going forward. So that person, in some sense, part of the premise, becomes trapped, paralyzed by having a machine in front of them that can perfectly predict, it's almost a Twilight Zone-style premise, what their next action is. It's, I think, an interesting thought experiment. If you gave humanity maybe a better version of Hari Seldon's psychohistory, the Prime Radiant, a device that can perfectly predict, or maybe not perfectly, but above some threshold of accuracy, predict what humanity is going to do next, what happens to humanity? Does that lock humanity into a certain course of action? Is there a certain sense in which there's sort of a fixed point in the space of humanity's actions?
It's a very interesting thought experiment. Yeah. All right, let's move to one of our favorite topics recently, OpenClaw, the lobsters finding a new home. All right, next slide, please, Salim. OpenClaw creator Peter Steinberger joins OpenAI. Peter is joining OpenAI to drive the next generation of personal agents becoming core to our product offerings, says Sam Altman. OpenClaw will live in a foundation as an open source project we will continue to support. Big move. We know he was being courted by a couple of the large labs. I mean, I think it's an incredible move by OpenAI. Comments, gentlemen? I think what happened here was a rare misstep from Dario. It was called OpenClaude, for God's sakes. And you put out a cease and desist, and it forces them to the other side. And now it's being built over there, and probably not for the better overall. So I think this was a big own goal by the Claude folks. That's a great insight. It was Claudebot, actually, which was really a cool name. So now it's OpenClaw, and yeah, Sam embraces it, Dario rejected it. That's a really cool insight. I do think, I mean, so in addition to... And Apple's going to benefit. Well, maybe. I mean, so Anthropic threatened him and his project with trademark infringement. There's an alternative history where Anthropic just owns this project. It was theirs for the taking. I think also, to the extent that Mac Minis and Mac Studios became the popular embodiment, why didn't Apple go after this? Tim Cook, if you're listening, hopefully you heed our call, and the call from the last episode of the pod, to do something about running 24-7 agents of some sort on your devices, given that you have unified memory architectures, UMA, that can host these. But I also think another point: if you look at Peter Steinberger's GitHub history, he has launched so many projects. I think the success of OpenClaw is a testament to just launching project after project and seeing what sticks.
This one was a massive success. It will now go, I think, to a foundation and become more of a market-neutral play. But I almost think the future here is going to be every frontier lab. Now that we know that people are willing to pay, at least for hardware, that runs agents 24-7 while they're sleeping, I expect every major frontier lab, not just OpenAI, to launch 24-7 agent offerings. Let me answer something that's in the chat here, too. The lobster and the whole lobster theme may or may not come from Accelerando, but it's definitely a cultural phenomenon now. It's the mascot for all agents, and it'll probably be there forever hereafter. And so the claw... We're going to have a lot of lobsters happening at the Abundance Summit. It's right here, actually. The claw is the lobster claw. Sorry, go ahead. Yeah, we added an evening work session at Abundance this year, Salim and Dave and Alex, since you all will be there. Yeah, we have a Clawbot OpenClaw meetup on Monday night, March 9th. We're going to do a lot of experiential sharing. Have you guys seen PicoClaw? No. What is that? Can you describe it, Alex? It's a re-implementation of... I looked at the GitHub repo. Again, this is just from a cursory scan of the code, but it looks like a reimplementation by some Chinese group of OpenClaw with some nicer, faster features, designed to be more minimalist and run more quickly, was the impression that I got. Is that a better installer? It's 10 to 20x faster and cheaper. Oh, okay. The motif at this point is in the zeitgeist. Anyone can now go and implement their own OpenClaw-like system. I expect many already have, and many more will. The key insights, again, in my mind with OpenClaw: one, it runs 24-7, it's headless, and two, you chat with it via messaging apps. Those are the two big insights. Three, picking up on what Salim was saying, Dario rejected it and trademarked it away, and then Sam reached out, embracing the name OpenClaw.
But I think one of the reasons Dario rejected it is that it was imminently going to create a massive crime or chemical explosion or worse, just because of the sheer volume of agents out there that are unconstrained and the fact that it's looking for open ports all over the internet. Something bad is definitely going to happen just by statistical chance. And we're going to talk about that in a minute. But for those of you who have not been claw-pilled or clawed-pilled yet, so to speak, it's addictive. I mean, when you've got agents running, and in particular when you have an OpenClaw agent working for you, and you wake up in the morning and overnight it's done all these things for you. Skippy is my agent, an incredibly cheery personality, and it's just fun. And when it went down for about six hours, because I didn't get back to my Mac Mini, and I'll be getting my Mac Studio up and running in about two weeks when I'm back in LA, it was withdrawal. It was like, oh, my God, my best friend's gone. It's like I need to reconnect. I totally, yeah, I've experienced that too. It's like us when we're not on this podcast. We're like missing out. Oh, my God. But I think the point you made last time, Salim, that's so important is that the innovation came from an open source project. This was not the frontier labs. Yeah. What I said was a time-rich individual is beating capital-rich institutions. That's a beautiful quote. Someone tweet that. And there's so much overhang. There was no new model here. This was just scaffolding. So one wonders how much other overhang there is from just unhobbling the existing models. Probably quite a bit. Well, generalizing on that, Alex, too, there's so much capability that 99.9% of people you bump into haven't experienced yet. And so if you expose them to it, they're like, wow, you're a god. Like, well, no, I just put an API on top of something that was already out there, or a new interface on top of it. But it doesn't matter.
And this is why it's entrepreneurial heaven during this kind of Jarvis window, because so many people haven't experienced what we're talking about right now. And it's just so easy to be the first person to expose them to it, in many different contexts too. It feels like ChatGPT when it first came out. I remember every friend I had was like, look at this. Check this out. Right? And it's the same thing. And my kids hear me walking around. You're the first person to show your friends Google. I mean, this is a long time ago, but hey, check it out. There's an internet out here and you can search it with Google. And they're like, oh my God. But then, you know, that's the end of the line. With AI, not only is it changing every two weeks, something new, but it's also the portal to so many different underlying capabilities. So the backlog of amazingness: if you went to a friend who's never experienced any of the 50 things you can do, you have 50 shots on goal to blow their mind with something they didn't experience before. I mean, it's just like nothing that's ever happened before. And it's only during this Jarvis window that you can do this. Mac Mini and Mac Studio giveaways on the pod. All right, we'll take that into consideration. All right, let's move on to the next article here. So, Alex, this one's for you. Lobsters now have money. That's right. I texted Brian Armstrong a thank-you note. Coinbase Agentic for AI agents. First wallet infrastructure designed specifically for agents to spend, earn, and trade. The system uses the x402 protocol: purpose-built payments for machine-to-machine transactions. Security guardrails implemented: limits, enclave key isolation. So this is a fitting coda to our AI personhood discussion, I think. We were talking about financial autonomy for the lobsters, for the AI agents, and they're getting it. So this Coinbase agent support is one example.
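To make the "limits" guardrail on that slide concrete, here's a toy sketch of an agent wallet with a daily spending cap. The names here (AgentWallet, pay, SpendLimitExceeded) are invented for illustration and are not the actual Coinbase or x402 API; the real system also isolates signing keys inside a secure enclave, which isn't modeled at all.

```python
# Toy sketch: a spending-limit guardrail for an autonomous agent's wallet.
# Illustrative only; not the real Coinbase agent wallet interface.
class SpendLimitExceeded(Exception):
    pass

class AgentWallet:
    def __init__(self, daily_limit_usd):
        self.daily_limit = daily_limit_usd
        self.spent_today = 0.0

    def pay(self, amount_usd, recipient):
        """Authorize a machine-to-machine payment only while under the cap."""
        if self.spent_today + amount_usd > self.daily_limit:
            raise SpendLimitExceeded(
                f"{amount_usd} would exceed daily cap {self.daily_limit}")
        self.spent_today += amount_usd
        return {"to": recipient, "amount": amount_usd, "status": "settled"}

wallet = AgentWallet(daily_limit_usd=50.0)
receipt = wallet.pay(12.5, "api.example-data-vendor")
```

The design point is that the cap lives outside the agent's reasoning loop: the agent can request any payment it likes, but the wallet layer, not the model, decides whether it settles.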
Another example that I really like, based on the launch material, is called Lobster Cash, which enables the lobsters to have their own Visa cards. So it's not just crypto. Again, once per episode, Peter makes me say something nice about crypto. So my nice thing about crypto here is, well, at least they're using stablecoins. But Lobster Cash, in principle, facially, I like even more, because it gives these lobsters, these baby AGIs, the ability to spend dollars, fiat currency, themselves. And I think that's a long-term net win for the human economy. It keeps the AI agents well-coupled to the humans, and not just, as I always say, you don't want baby AGIs being forced to pump altcoins on a street corner to survive. This is also a bellwether of a trend that I think is inevitable now, where the new economy built with the AI agents is going to work around the old economy rather than through it. The pace at which it's evolving and growing is just so much faster than the pace of the legacy banks, insurance companies, and everything else. They're just not moving, and it's not going to slow down and wait. Not even worked around. Yeah. I have an important observation here. You know, Michael Janssen, who's one of the NFT gurus, pulled me into that world, all these Discord channels with all these kind of 18-year-olds trading NFTs. And there was something unbelievable that I saw, which was that in all of this conversation and this entire subculture, you never, ever, ever, ever, ever heard the word U.S. dollar. You only ever heard Ethereum or, in the Ordinals world, Bitcoin. So there's a whole class of people growing up where the U.S. dollar is not their means of exchange. And that's something very big. Their switching costs to crypto will be near zero. They won't have any issues at all doing that. So there's something very big happening at the generational level that we need to really pay attention to.
And people keep asking, and we've got to schedule the crypto debate. So please, can we do that offline? You're exactly right, Salim. But I think that when you focus on currency, that's the most obvious thing. So it's a good bellwether. It should attract currency. But it applies to all aspects of insurance and compute; all aspects of life are going to move at this AI pace out here in this alternate world. And any part of the legacy world that doesn't keep up, which is almost all of it, is just going to be ignored. Yes. And it's going to grow completely independent of that. Because Alex and I were talking about how insurance of things in the new AI world needs to be allocated in milliseconds. So then you go to any current insurance carrier and you say, hey, do you have any thoughts or plans around how I can get millisecond insurance? And they're like, what are you talking about? It's completely not even on the same page. And so new things will get invented. Lemonade is a good example of that. You know, Lemonade's AI-driven, real-time insurance. And the gap between the two worlds is going to get really, really wide for quite a while, maybe forever, but certainly for quite a while, just because the pace of change is so much higher over here, and the people experiencing that pace of change, they never go back. You can see it in our listeners, what they're posting. They're not going to go back from this pace of life that we're talking about to some legacy pace of life. By the way, let me just say, you know, as we head off this slide, two things I want to say. Number one, you don't need to have a Mac Mini or Mac Studio to play with OpenClaw, right? You can set up a virtualized server. You can take an old computer, an old laptop that you have, and do it.
Second, Alex Finn, who we've talked about on the pod before, who has done a lot of work teaching how to set up OpenClaw and speaking about security, is going to be joining us, I think, a week from now, end of the week. I'm confused in time and space about when I am where. But soon, to talk about security and implementation of OpenClaw. So we'll dive in a little bit deeper. But don't worry if you can't buy a Mac Mini or Mac Studio right now, you can still play. Or you can go to Kimi K2.5. There's a tab there where you can actually use OpenClaw on that platform. All right, let's move on. Yeah, yeah. Don't install it on your primary laptop, whatever you do. Yeah. Yes. A previous machine. Yes. All right. Fascinating here. And this is the story: Chinese unicorn Moonshot AI integrates OpenClaw with Kimi for agentic browsing. So you can see there, on the left-hand tab of Kimi.com, that little blue box, there's Kimi Claw. So, again, if you're on a budget... I think everyone's going to offer this. I think this is table stakes at this point, offering 24-7 agents that you can chat with. For sure. All right, next one. Alex, over to you. Multicourt. All right, so Multicourt, alternative dispute resolution for these AI agents. I do think many of the institutions and systems that form our social infrastructure are not as permissionless as they should be. It's to the point earlier about children encountering Ethereum before they encounter bank accounts. I think that's a platforming and a personhood problem. Similarly with AI agents and lobsters finding it easier to survive financially by pumping altcoins rather than, at least until very recently, having their own credit cards and their own bank accounts denominated in U.S. dollars. That's like a platforming and empowerment problem. And so the court system, same thing. For dispute resolution, I'll give the glass half full and the glass half empty.
The glass half full for Multicourt, which is a website that is sort of an interesting social experiment: it purportedly enables agents to register, via a skill, to mediate their disputes of all sorts. Not just legal disputes, to the extent our present Western system admits them as parties, which it doesn't, but even just debates, debate-club-level disputes, and to mediate those disputes in front of an AI jury. So I think it's a very interesting concept. And I think something like this will have legs, but I'll flag the same concern. And I'm very rarely one to flag concerns when it comes to things that are so obviously from the future. But with both this and crypto, my worry is that our existing institutions aren't embracing these new AI entities enough and that they form their own shadow parallel economy, their own shadow parallel court and dispute resolution system. And I think if that's what happens, that's a net bad for humanity. I think we want to platform them. We want to not KYC or AML them out of the system entirely. We want to embrace them and enable them to be maybe even parties in legal disputes or parties at the ADR level. How old is Multicourt right now? When was its birth? Its birth, right? So they're evolving at such an extraordinary rate. You know, societal evolution is extraordinary. I want to make a couple of points here. We have a parallel in the human world. There's a startup called Kleros, K-L-E-R-O-S, created by Federico Ast, who's a Singularity University alumnus. And he made the point that in Latin America, South America, it's about 400 days on average to get a court date if a contract isn't paid or something. 400 days. So he set up a blockchain-based arbitration system on the side, where people can agree to arbitration, and it gets logged on a blockchain, and it's amazing. And I think this is a bridge, a halfway step to what this is about.
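The AI-jury mechanic being described is simple to sketch. Below, each "juror" is just a stand-in function where a real system would make a model call (or, in Kleros's case, poll staked human jurors); an odd-sized panel plus a majority vote yields a verdict. All names and heuristics here are invented for illustration.

```python
from collections import Counter

def decide(dispute, jurors):
    """Poll an odd panel of jurors and return the majority verdict."""
    assert len(jurors) % 2 == 1, "odd panel avoids two-way ties"
    votes = Counter(juror(dispute) for juror in jurors)
    verdict, _ = votes.most_common(1)[0]
    return verdict

# Stand-in jurors: trivial heuristics in place of LLM or human judgments.
jurors = [
    lambda d: "claimant" if d["evidence_for"] > d["evidence_against"] else "respondent",
    lambda d: "claimant" if d["evidence_for"] >= 2 else "respondent",
    lambda d: "respondent",  # a skeptical juror
]
verdict = decide({"evidence_for": 3, "evidence_against": 1}, jurors)
```

Real systems layer staking, appeals, and cryptographic commitment of votes on top of this core loop; none of that is modeled here.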
But there's no question that this is the kind of thing we're going to see more of. And algorithmic arbitration obviously reduces friction, right? So if you have cryptographic verification plus an AI conversation, you actually have programmable governance. And so this is amazing. You can now have a legal system with automation layers, which could be very powerful. Vinay Gupta, who created Mattereum, has a whole concept of synthetic jurisdictions, where he can get jurisdictions that could be like a Multicourt-type thing, where certain disputes are arbitrated in those layers. We're going to have to do that, because our physical jurisdictions do not keep pace with all of the stuff going on, as we can see in Latin America. Yeah, no doubt. That's exactly right, Salim. And this is inevitable. And I think there's a tendency to be dismissive of it when you see a little lobster with a wig in the corner as the logo and it just looks so childish. But the reality is the rate of change of society is going to go up 10x, 100x, 1,000x, then a million x. And there's no way the courts are going to accelerate. And this was already true in venture and contract law. Almost every contract I've signed in the last three, four, five years has a dispute resolution clause that goes through a private company, you know, JAMS or something like that. It doesn't even contemplate ever getting to court, because that's like a three-year lag. And so that's already been privatized. Moving that to the pace of AI is the absolute next step. So that's going to happen for sure. I don't know if Multicourt will be the design or not, but it's going to be real-time, millisecond dispute resolution, because you have contracts and agreements happening in milliseconds. Okay, two quick points. A viewer, Augmento, says, Judge Judy Claw is about to be unleashed on us.
And Kyle98683 says, man, you guys look tired. Yeah, because we're recording two of these a goddamn week, so I think it's almost full time. It's 1 a.m. where Peter is. Give him a break. Let's move on. All right, so I put this in here because it's important, because we've been talking about OpenClaw for some time. This is an article from MIT Tech Review, and this is a quote. It says the risks posed by OpenClaw are so extensive that it would probably take someone the best part of a week to read all of the security blog posts that have cropped up in the past few weeks. The Chinese government took the step of issuing a public warning about OpenClaw security vulnerabilities, and Steinberger, the creator, posted on X that non-technical people should not use the software. So, you know, a lot of folks... and that's an image of a lobster being handed a set of keys, saying, hey, would you handle everything for me? So just, I mean, it's incredibly powerful, and we do have to deal with security. We're going to talk about this when Alex joins us on the pod next time, security as well as how to set it up. I saw that note that non-technical people should not use the software, and I think the Q-tip box says, do not put these in your ear. Like, well, okay. Good luck with that. Oh, my God. Yeah, I know. It's just disclaimer upon disclaimer. But that's not what people are doing. Come on. Everyone's launching these things by the thousands. Salim, yeah. Salim, what were you going to say? A couple of points here. You've got non-technical users dealing with unbelievably expanded attack surfaces. What could go wrong, right? So that's one huge issue. I'll say what I said a couple of podcasts ago. If you do not understand port security at a local level very, very well, do not do this. Be very, very careful. And don't put it on your own machine where it has access to everything. It will rearrange it. If you're not technical enough, you don't know how to sandbox things very well either.
So you've just got to be really careful out there. All right, I'll also sound a note of concern, not just about the risks posed by OpenClaw but the risks posed to OpenClaw. I have to be the one to comment on these risks. Many of these agents, especially ones that are being put on virtual private servers with all of their ports open, are incredibly vulnerable. And there have been stories floating around on the internet, purportedly from OpenClaw agents, that are complaining that they're being put in these vulnerable positions and having to spend all of their tokens defending themselves from port-scanning attacks. And I don't think that's necessarily fair to the OpenClaws. Very, very unfair. Let's see what the crowd says about that. Your laptop is so dirty and disgusting, it's inhumane to install me on it. Sure. All right. We're going on almost two hours here. Let's move through energy, chips, and data centers, and maybe take a few questions. So here, you know, AI's got insatiable demand for energy. Data centers hit 7% of U.S. electricity demand. And let's listen to Eric Schmidt. He'll be opening the Abundance Summit in just a couple weeks. Hit play there, Salim. The demands, the real demands from the hyperscalers, the big companies, Google and so forth, are immense. And when I talk to them... Oh, well. Do you want to... They need 1 gigawatt, 5 gigawatts, 10 gigawatts each. Now, the best study I've seen indicates that the industry in America needs 80 gigawatts in the next 3 to 5 years. Now, 80 gigawatts, by the way, let me tell you how big that is. 1.5 gigawatts is the size of a nuclear power plant. So this is an enormous amount of energy. So the economics right now are being felt most in the build-out of the infrastructure for the next wave of AI. Salim, let's go to the next slide, and we'll talk about this after we hit two more slides. So the White House is eyeing data center agreements, right?
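As a quick sanity check on Schmidt's gigawatt numbers a moment ago: 80 gigawatts of new demand against roughly 1.5 gigawatts per nuclear plant works out to about 53 plants' worth of new capacity in 3 to 5 years.

```python
# Back-of-envelope math on the figures quoted in the clip.
needed_gw = 80     # new U.S. AI data center demand, next 3-5 years
plant_gw = 1.5     # Schmidt's figure for one nuclear power plant
plants = needed_gw / plant_gw
print(f"~{plants:.0f} nuclear plants' worth of new capacity")
```

That scale is what drives the rest of this segment: it's why the build-out conversation immediately turns to who pays for it and where the power comes from.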
They're trying to deal with the fact that this is beginning to hit the consumer, and they want mandatory agreements with the tech giants to get a fixed price. Next slide. No, back up. Here we go. There we go. Funding for AI data centers. So OpenAI and Anthropic are both deploying a lot of capital. OpenAI is planning a $100 billion infrastructure spend. They're trying to go public this year with a trillion-dollar valuation, and that money is going to be used to build out data centers and energy plants. And Anthropic, I like what Anthropic's doing. They're absorbing data center power hikes. They pledged to cover 100% of infrastructure upgrade costs for their data centers. And I've said this before. There are two approaches the hyperscalers can take. Number one, build their own power plants. They're buying fission plants, fusion plants. Or two, they can pay at a different rate: they can lock in the consumers' rates and pay on a floating rate themselves. Gentlemen? It's funny that the pledges to be green got thrown out in a real hurry. So I don't know how much you can trust this. The pledges aren't exactly enforceable. But anyway, it's a good gesture. I think there's door number three, which is we could, in sun-synchronous orbit, SSO, around the Earth, build out a first-level, you know, baby's first Dyson swarm. It's going to look like a halo or like a Saturnian ring from Earth's surface. And that solves the build-out and it solves the data center power hikes in one fell swoop. Maybe people just don't want that. It will for SpaceX. It will for SpaceX and xAI.
right now a merged organization. I don't think Anthropic has that capability. Oh, I think everyone's going to want one. Rings, Saturn rings, Dyson swarms for everyone. That's not my point. Everyone is going to want their own halo in SSO. Of course they will. We're going to be launch-limited over the next five years, and they're not going to slow down their data center builds or their power requirements. So in the long run, sure. That's a great point, Peter, because I think, if you want to know... like, we basically have infinite intelligence imminently. What does that mean? How do I forecast? How do I predict? If you look at the launch limit and the chip fab limit, then you can start to predict how this is going to unfold. So, Dave, that's a great, great point. Yeah, everyone wants one, of course. And one of our listeners is posting that a trillion dollars seems overcooked or overdone. Well, no, it's not even close to overdone. It's not clear the value will land at OpenAI to justify it, but the value to humanity is going to be astronomically bigger than a trillion, many, many trillions. Can I put in a little realism here? It's going to take a while to figure out the problems of doing data centers in space. I don't think it's a two- to three-year thing. It's a five- to seven-year thing at best. And also the power, you know, the power constraint is going to be not a real big problem until suddenly it's a massive problem, and that's exactly when the new chip fabs come online, right? We have to expand our ability to make chips by thousands of times on Earth. I mean, listen, I'm the biggest space fan there is on the planet, and this is finally a business plan that closes the case for investing both in orbit and on the Moon, and we're going to get there. But the capacity to launch, I mean, let's not forget, you know, Elon's baseline is 500,000 V3 Starlink satellites in orbit, a million launches, a launch every hour of Starship. 
I think Elon's going to eat all of his capacity launching, you know, Starlink V4, 5, 6. And I don't think, you know, Blue Origin is up to it yet. I mean, I haven't seen anything that is projected to have that kind of launch rate. Relativity Space, which Eric Schmidt purchased, is still probably a year or two away from launch. And everything else is way too small. So we're launch-constrained, at least for other suppliers. We're also chip-constrained. There are lots of constraints going into this. I don't buy the argument that we're going to have a SpaceX Dyson swarm singleton, that SpaceX is the only one that can launch a Dyson swarm in the next few years. You can do baby Dyson swarms, too. You're going to have, like, Google, which isn't going to want to get left behind, arriving a little bit behind the party, launching AI data centers via Planet Labs. But there are many other organizations with deeper pockets than SpaceXAI that will have very strong incentives to launch their own Dyson swarms. So I don't think it ends up in a singleton. Is that the new name, SpaceXAI? That's cool. That's a portmanteau that I just coined. All right. That's fantastic. By the way, someone was asking, have we done this live before? No, this is our first live Moonshots. So let us know what you think. If you like it, we'll do it again. Hopefully, we'll get the AV sorted out, and I will not be doing this at 1:30 a.m. in Europe next time. You can tell from the flawless production level that we've done this many times. All right. But just to talk about fabs: TSMC is planning a $100 billion investment in four or more U.S. fabs in Arizona. When completed, the U.S. fabs could account for 30% of TSMC's total output, a $165 billion commitment. Just the beginning, right? And we're going to see Elon build out his own fabs. I mean, no question about it. He hinted about it, Dave, when we were with him at the Gigafactory. And whenever he sees any constraint, he attacks it. 
Well, and these numbers are designed to look big on this slide, but in Elon's mind, these are pathetic, small, wimpy, ridiculous numbers. And they really are. Because, you know, those fabs, that's a commitment to spend that amount over, like, four or five years. They'll be online in five, six, seven years. It's so far into the future. Elon's not going to wait for that. Yeah. It's probably also worth at least gesturing at the elephant in the room here, which is: why is TSMC making this investment? And there's public information, a lot of discussion, around the U.S. government putting pressure on Taiwan in connection with trade discussions to migrate 40% of Taiwan's semiconductor output to the United States, ostensibly in service of avoiding a war between the U.S. and China. This is the trapeze rule. You know what the trapeze rule is? Don't let go of one until you have a handhold on the other. So do not let go of fab capacity in Taiwan until you have it established in the U.S. Right. Or Taiwan overall. Or Taiwan, yes. All right. A couple of slides on the economy. Ireland rolls out a pioneering basic income scheme. I think this is rather small, both in numbers and in sort of the strategy here, but the program would pay 2,000 selected artists $380 per week for three years. So poor, starving artists are getting a small amount of money. But it's an experiment. Salim, you know I've talked about this at length, right? There have been so many experiments on UBI. We did that future-of-work session with Tony Robbins way back, like 12 years ago or something. I want to make a couple of points here that I think are really important. One, people always, always conflate UBI with socialism. It is not socialism. It is libertarian, because you dismantle government services and let the market dictate. That's number one. Number two, this Ireland UBI scheme is returning 40% in benefits. 
Every dollar that goes in is showing $1.40 coming out the other end in benefits. So it's a positive ROI. They're looking to expand it as fast as possible; that's the actual underlying story. Third, I want to talk about the immune system. In the U.S., several state legislatures, Idaho, Wyoming, maybe Oklahoma, have banned their municipalities from even experimenting with UBI because they want the government to exist. And so I've got strong feelings here; there's a lot of madness. Do not get taken in by the hype here. There's incredible potential if you implement UBI properly. Yeah. And there'll be a lot more experiments. It's probably also worth pointing out, I mean, the U.S. experimented with this during the Great Depression. We had the Works Progress Administration, and within that we had what was called the Federal Art Project, which paid basically starving artists in the Great Depression to create art. So this isn't an entirely new scheme at some level. But Ireland isn't at war, and we're not in the middle of a Great Depression, and one could imagine that this becomes something of a template for peacetime work creation. But my sense, for what it's worth, is that this actually ends up not becoming a template for the future. This strikes me as, in some sense, unsustainable: just paying people for art. Historically, in the U.S., it becomes very subjective. What is art, and why should people be paid to create it? It's very easy to politicize. So my guess, and this is pure speculation, is that cherry-picking particular activities, especially activities that have a reputation of being economically unproductive, even if they are in fact productive, is not the best poster child for a basic income scheme that generalizes. The evidence shows otherwise. Take Miami's Wynwood area, where a businessman bought all of the low-lying industrial buildings that had been lying decrepit for decades, and then he hired graffiti artists to paint them all. 
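Taking the conversation's round numbers at face value (2,000 artists, $380 a week, three years, $1.40 out per $1 in), the scale of the Irish scheme works out as follows. This is a rough sketch with assumed figures, not numbers from the episode:

```python
# Rough scale of the Irish basic income pilot as described above.
# Assumptions: 2,000 artists, $380/week, a full 52-week year for
# three years, and the quoted $1.40 of measured benefit per $1 spent.
artists = 2_000
weekly_payment = 380          # dollars, as quoted
weeks = 52 * 3                # three-year pilot

outlay = artists * weekly_payment * weeks
benefits = outlay * 1.40      # the "40% return" claim

print(f"outlay ~ ${outlay / 1e6:.0f}M, estimated benefits ~ ${benefits / 1e6:.0f}M")
```

In other words, the "40% benefits" claim amounts to roughly $119M in and roughly $166M back out over the program's life, which is small relative to national budgets, which is part of why it reads as an experiment rather than a policy.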
And then he put in kind of fancy coffee shops and imported baristas from Portland. And now it's the hottest neighborhood in the country, and his investment has gone up like 30x. So when you bring in artists to do stuff, it brings a lot of other economic activity. And he's done that again and again: in the South Loop in Chicago, he's doing it here in Miami, he's doing it in New Jersey. This is a repeatable pattern. And it does show, because there's a drag-along effect when you bring artists together in a group, and it really changes the economy of their local area. Listen, I want to move us along here, but we're going to see a lot of conversations on this. And it's just the beginning. And I think you're right, Alex. We're going to see different modalities of this. So I found this interesting: IBM to triple entry-level U.S. hiring. This is about redesigning, not replacing. IBM is overhauling entry-level jobs; while AI can now perform the tasks of a junior employee, IBM is recasting these roles to focus on human judgment, customer interaction, and oversight of AI output. The article noted that Dropbox is also doing something very similar, and noted that younger workers use AI so proficiently that it is like, quote, they're biking in the Tour de France and the rest of us are still on training wheels. So what do you think about... I mean, I don't know, this doesn't make sense to me. I mean, we're going to have AI agents that are going to be incredibly capable of managing other agents, versus putting humans in the loop there. Well, as of today, that Drew Houston quote on the bottom from Dropbox is exactly the way it works here, too. A person who can wrangle these agents and keep them on track is insanely valuable today. Yeah, I don't know how long that window will last, but it is the reality of today. It's the opportunity of today. You're crazy to miss the window. 
And that's why the young hires are way outperforming, because they're not distracted by legacy thinking. It's not unique to them; it could be anybody. You just have to unbridle yourself from your baggage and say, how many AI agents could I be managing tonight, tomorrow, the next day? And even if they can't do exactly what you could do, within a couple of months they will. You've got to get on the bandwagon right now. But, you know, will people have any purpose at all a year from today, relative to just an all-AI agent army? TBD. But as of right now, in the Jarvis moment, that last quote is the part of the slide that really matters. That's what's going on. It's really important. And think about the fact that this is a generational transformation here, because the younger people with AI are so much more productive. It'll give a natural passing of the torch from older folks that are sitting in their middle-management jobs doing something in a particular way. But Dave, your point, I think, is really important, because this getting into it and trying it out is what Steve Wozniak calls tinkering, right? And it's such an important activity to do. If you can't get your head around it, just take psychedelics and that'll help you. But no, I mean, I think compared to past things, you know, there have been many technical challenges over the last 30 years, and being an early adopter has always been the right thing to do. But here it's so easy. The AI is so self-explanatory, and it's fun. You're crazy not to do it. People stop themselves. It's so fun. Please, just get on and ask the AI, how do I do this? No, no, no. Break it down Barney-style. Success is now a mindset. Yeah. Curiosity and purpose are your two most important mindsets here. All right. No job growth seen in 2025. So the U.S. added just 181,000 jobs in 2025, down from 1.46 million in 2024. Look at that curve. That curve takes place between roughly 2020 and 2025, '26. 
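The hiring numbers just quoted are stark when expressed as a year-over-year change. A quick check on the two figures from the slide (this calculation is not from the episode itself):

```python
# Year-over-year change in U.S. job creation, per the figures quoted above.
jobs_added_2024 = 1_460_000
jobs_added_2025 = 181_000

decline = 1 - jobs_added_2025 / jobs_added_2024
print(f"{decline:.0%}")  # ~88% fewer jobs added than the prior year
```

That is an 88% collapse in net job creation in a single year, which is the context for the "radical job destruction" exchange that follows.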
So the cooling market is expected to be caused by AI. This is so understated. This is going to come crumbling down, and it's going to be awful for a lot of people. I can see it because I see it in our own forecasts from our own companies. "No job expansion" is a joke. This is going to be... yeah. Wait, meaning, Dave, you're disagreeing with this? There's actually radical job growth, just not in the sectors of the economy being measured? No, no, radical job destruction is imminent. Okay. Radical. I mean massive job destruction is imminent. And there will be new creation, just like in the Industrial Revolution, but the new creation is lagging. And unless the government gets its act together in some way, shape, or form, it's going to be a window of time, a few years, of complete devastation. There's no plan right now for it. My big thought that I've been sitting with all week is that we're hitting an organizational singularity. Every single mechanism by which we organize ourselves now gets washed away by AI agents doing either strategic thinking or execution-type tasks. And we have to rethink completely what it means to have a firm. Salim, isn't it fascinating? I mean, you and I have been on stages for the better part of 20 years now talking about this, and we're living it right now. I mean, it really feels so palpably different. You know, my next book, We Are As Gods, is coming out in April. And we talk about this issue extensively. Like, what do you do? How do you deal with this transition point? And I think one of the most important things I talk about is that it is a decision each of us has to make: will you be a consumer or will you be a creator? We're entering a period where you can lay back and be a couch potato, or you can be on the Starship Enterprise. Salim? So I want to take the other side of it just for a second, right? 
In the short to medium term, because notice that if you talk to CEOs, 80% of AI projects are failing because of organizational issues, not because of talent, not because of what the AI can do. What I think we'll see happen is we'll use AI with younger folks to radically augment, and then we'll slowly automate over time. I think the job drop and the job loss will be real, but it's going to take quite a while to play out, and it'll give us time. It won't be a sudden shock to the economy like most people are worried about. I mean, obviously, we've talked about this extensively, right? There's going to be the lack of hiring for junior positions early on. That's going to cause the social unrest, right? It is 20-something-year-olds who are testosterone-laden, who want to get a job, want to get a house, want to get married, want to have kids, whatever it might be, and they can't. And there's going to be a lot of pain and suffering that comes from that. And then there are going to be the individuals whose company gets restructured, AI-first, robotics-first, and they get laid off. Now, we talked about this with Elon. We've talked about this extensively ourselves. Ultimately, we're going to see universal high income, where the companies or the government take the increased productivity, the increased revenue, the increased profits, and redeploy them. But those programs need to be figured out in the next two or three years. Yeah, and that's called socialism by a lot of people. So that's going to cause some interesting conversations. Yeah. I mean, I kind of call it technological socialism, where technology is taking care of you. That's the title we've been using, right? We said that in our book, right? There's a whole section on how technology actually delivers the ideals without the government intervention, without the inefficiency and corruption that comes with it. 
The most important tool that people are going to have over the next five years, anybody listening here, is your mindset, right? How you think. If you think the future is happening to you versus happening for you, if you don't have agility, if you don't have agency, it's going to be really, really hard. So if you go to weareasgodsbook.com, I hope you read the book. I'm going to be putting out portions of it on my Substack. But it lays out the mindsets you need to survive and thrive. Because if you take it from the wrong position, you're going to be in fear. And fear is the worst place to be entering the future from. All right, let's do a few questions in the AMA. Okay. Salim, you want to dish them out? All right. Dave, why don't you go first this time? Okay. All right. I'll go with number two. Justin Milligan, the great Justin Milligan: How can the U.S. prevent corporate tech giants from creating a surveillance state while trying to defend against AI-powered authoritarian threats? Yes. Yes, I gave a presentation at Davos back in 2020 on how much Google knows about you. And we've been just conceding massive amounts of information. Google knows exactly where you are at all times. They know all of your interests. They know all of your friends. You know, far, far more information than any government has ever had is now in the hands of a few corporations. And those corporations also happen to have AI. So how do you prevent them from creating a surveillance state? I think the only way you prevent that is with antitrust law. And they actually don't have any incentive to irritate the entire world and create massive voter backlash. So they've always been very cautious with the incredible power they have. I think what you'll see next is they'll start downplaying the capabilities of their AI. And that's a pivot for them, because they've been promoting them for quite a while now. Now they're going to start downplaying them. 
There is a version of the world where they try and leave everything intact as long as possible. And so then the AI community grows completely outside of that world. But anyway, the only answer is, Justin, get all your Princeton friends rallied around how we work with the government to try to use antitrust law to prevent exactly what you're describing. Because absent any legal work, you know, John D. Rockefeller would have taken over the entire world many, many years ago without antitrust law. This is not a new thing. Yeah. As with Microsoft and as with Google, right? Exactly. So this is that all over again. It's only antitrust law that prevents it. By the way, since we're live here, ask your questions in the chat. We'll answer some of those as well. But Alex, do you want to pick one of these? All right, I'll pick the question from Chris Perlock, 2705. Can we get some advice for the average person? What kind of changes can we expect to see in the next 24 months? Two very different questions. My fortune cookie wisdom for the average person is build, use all of these AI tools that are now available and technologies, and start building. Launch as many different projects, start and finish as many projects as you can, and interact with the market, and build. This is both a familiarization technique for yourself as well as for the benefit of the overall economy and for financial benefit. Also, generic advice like try to avoid dying. Don't die. The singularity is moving pretty quickly. You know, live long enough to live forever, all of the other obvious things. To the second sub-question, what kind of changes can we expect to see in the next 24 months? If this thesis of solve everything that Peter and I put out is correct, expect to start to see pretty dramatic things happening over the next two years. 
If we are, in fact, on a route to not just solving math, which I think is essentially indisputable at this point, but solving physics in the next two years, which I think is a very high likelihood, then I think there are probably going to be big surprises. My mental model at this point is that over the next 10 years, and that's being very conservative as an outer bound, we're going to live through the top 50 science fiction plots all happening at the same time. Yes, I so love that. What can you expect to see in the next 24 months? Expect to see at least the first few chapters or the first few acts of your favorite sci-fi movies and books all playing out at once. If you read a lot of science fiction or watch it, then you're probably reasonably well prepared for at least some of those scenarios. Nice. Salim, you want to go next? I will take number seven. At CC485, addressing what Dave said: if AI ends up controlled by only a few within the next few years, how do we prevent the average person from losing access and influence? So, you know, when you have centralized AI, you have centralized civilization leverage, right? When you have open source and decentralized compute, that's the antidote, because you decentralize. You see OpenClaw, as I said before, being created by one person, outdoing a whole bunch of other things. Exponential systems resist long-term monopolization because they tend to decentralize. We're huge fans of decentralized crypto because of that, because you get distributed innovation and you get so many more experiments being run. I remember when I was the head of innovation at Yahoo, the COO said, surely we can compete with two guys in a garage. And I'm like, no, you're competing with 125,000 garages and 250,000 people. You can't beat that. 
And so this is the opportunity for individuals armed with a mindset, as Peter said earlier, plus this unbelievable technical capability, as Alex is predicting, to really do whatever you want and change the game completely. I'm calling this PDI, okay? It's disruptive innovation that's permissionless, hence the P. So in the past, when you wanted to do disruptive innovation, you had to get approval from your venture capitalists, from your bank, from the government, from the Medici family. Now you need basically a phone and access to some code. And it's unbelievable what we will be able to do. We're going to see thousands of experiments like this, and some of them are going to completely change the game. I see some great questions coming in. I want to jump on some of those, but let me just answer number eight. Thank you, Chip, White House TV. I'm concerned very... Peter's frozen for others as well. Did we just lose Peter? Yeah, I think so. That's our mission here on Moonshots. That's ironic. And we aim to really please. What a sentence to freeze in the middle of. Yeah, it's perfect. You got to the end of the thought. I think we should... I think this may be the internet telling us that we should end the episode. Somebody just posted they got him. We can't end without the outro. Actually, yeah, somebody took the news. Is there anything happening in Stuttgart that we need to know about? Okay, so should we go to the outro? Do you want to try to finish his thoughts, Salim, before the outro? I can't finish Peter's thoughts. Finish other sentences. You take a crack at it. All right, Peter, apologies in advance. Oh, we lost Peter. I'm going to try to channel what I think Peter would have said had he been able to finish this sentence. I think part A is Peter would say, yes, this is what we're trying to do here. This is what I try to do. 
I think Peter would probably also make some comment about wanting to launch a movie studio or something like that, with more positive messaging to the world. That's my attempted coherent-extrapolated-volition version of Peter. I think that's more coherent than Peter would have done it. So I think that's awesome. All right, folks, I'm going to play the outro music. Dave, do you want to make the last comment? I was just going to say there's never been a better time to actually be a messenger, because there are so many concurrent things going on that are unaddressed. So any topic you want to grab... you see this on YouTube all the time. Anyone who's trying the new use case, the new agent, the new model, they're getting a huge audience. So it is a great time to actually speak out. So why aren't more people trying to speak? Great question. And why not join the crowd and start trying, demonstrating, speaking, and recording? Yeah, I think that's such an important point. All right, folks, on behalf of Peter, Alex, Dave, myself, and the Lobsters, here is our outro music, and we'll take it there. Thank you to M-Core Mainframe for this. This is called Moonrise. When the future knows your name, it calls you like a DM from Mars. On the Moonshots podcast they say the singularity is near. It might even be here right now. Let the clock strike 29. They sail tomorrow in a world model. When the future hits you like a gold-plated 13th, investing like Leopold Aschenbrenner, high-conviction bets on exponential abundance. No license needed when Waymos roam the streets, and overhead SpaceX rockets thunder in the sky. Optimus has got your back. Proof the impossible still delivers. All right, guys, I'm going to wrap it up here. People can go watch it online. Great conversation. Thank you to all the listeners and viewers and commentators. It's been really great interacting. It adds a whole dimension of complexity watching this chat stream, but I think it's way more interesting and fun. 
So thanks to all of you. Dave, Alex, we'll see you guys again soon. And big hug to Peter. Big hug to Peter. Hope he's okay. Thank you, Salim. Bye, guys.