"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis

Zvi's Mic Works! Recursive Self-Improvement, Live Player Analysis, Anthropic vs DoW + More!

207 min
Mar 19, 2026
Summary

Zvi Mowshowitz discusses the current state of AI development, arguing we're in the 'middle game' of AI progress with recursive self-improvement beginning to take hold. The conversation covers the shrinking list of AI 'live players' to three companies (Anthropic, OpenAI, Google), Anthropic's conflict with the Department of War over domestic surveillance, and updates to responsible scaling policies.

Insights
  • We're transitioning from early to middle stages of AI development, not yet in the endgame which would require AIs driving their own advancement without human researchers
  • The AI job displacement narrative is gaining credibility with consistent productivity gains and employment revisions, suggesting real economic impact beyond over-hiring corrections
  • Only three companies remain as serious AI competitors, with talent advantages mattering more than compute until recursive self-improvement makes human researchers less relevant
  • Anthropic's constitutional approach shows promise for AI alignment, with evidence of self-reinforcing virtue ethics that could survive recursive improvement
  • The conflict between Anthropic and the Department of War reveals tensions over domestic mass surveillance capabilities and the limits of government control over AI companies
Trends
  • Recursive self-improvement in AI systems accelerating development cycles
  • AI-driven productivity gains becoming measurable in GDP statistics
  • Consolidation of AI leadership to three major players
  • Government attempts to control AI companies through supply chain designations
  • Constitutional AI approaches showing promise for scalable oversight
  • Talent advantages in AI development becoming more critical than compute resources
  • AI coding agents enabling dramatic productivity improvements
  • Distillation techniques allowing Chinese companies to leverage Western AI advances
  • Real-world AI deployment in autonomous systems approaching readiness
  • Data center construction becoming a national security and policy issue
Companies
Anthropic
Central focus as AI leader facing government conflict over surveillance and responsible scaling policy changes
OpenAI
Discussed as one of three remaining AI live players, with analysis of GPT-5.4 and competitive positioning
Google
Analyzed as potentially falling out of top tier due to cultural issues and poor model scaffolding
Meta
Discussed as struggling AI competitor that should consider licensing instead of building frontier models
xAI
Criticized for disappointing model releases and inability to attract top talent under Musk's leadership
DeepSeek
Chinese AI company discussed as test case for whether non-Western companies can catch up to frontier models
Tesla
Mentioned in context of Musk's AI strategy and potential advantages in physical world AI deployment
TSMC
Discussed regarding chip supply chain vulnerabilities and geopolitical risks to AI hardware production
Goodfire
AI safety company criticized for using interpretability techniques in training, a practice called the 'forbidden technique'
IBM
Example of market overreaction to AI threats, with stock dropping 10% on Claude COBOL announcement
People
Zvi Mowshowitz
Guest and author of the 'Don't Worry About the Vase' Substack, providing AI analysis and predictions
Dario Amodei
Anthropic CEO discussed regarding company's stance against government surveillance demands
Sam Altman
OpenAI CEO mentioned regarding potential 'God Emperor' scenarios and AI governance concerns
Demis Hassabis
Google DeepMind CEO discussed as capable leader potentially unable to fix Google's AI cultural problems
Elon Musk
Criticized for poor talent management at xAI and unrealistic AI development strategies
Emil Michael
Department of War official driving conflict with Anthropic over surveillance capabilities
Holly Elmore
AI safety activist discussed for confrontational approach that may alienate potential allies
Amanda Askell
Anthropic researcher mentioned in context of constitutional AI development and alignment work
Bernie Sanders
Senator proposing data center moratorium, discussed as potential but problematic AI safety ally
Tyler Cowen
Economist referenced for prediction of 0.5% GDP growth from AI, used as productivity benchmark
Quotes
"I think the reason why people think of this as the end game is because they don't believe in the actual end game."
Zvi Mowshowitz
"The S curve can stay steep longer than you can stay relevant."
Rosie Campbell
"Do you feel in charge? I think there is this very clear idea that people think, oh, if I have the rights written down in electronic databases, if I have the stock certificates, if I have the private property, then I get to be one of the special elite."
Zvi Mowshowitz
"Why don't you try to make things good for the permanent underclass rather than try to escape it? That just seems like really flagrant defection."
Zvi Mowshowitz
"You just don't do that. It gets everybody killed. Like, it's really, really bad."
Zvi Mowshowitz
Full Transcript
4 Speakers
Speaker A

Hello and welcome back to the Cognitive Revolution.

0:00

Speaker B

Today I'm excited to welcome Zvi Mowshowitz, author of the indispensable Substack Don't Worry About the Vase, back for his record 12th appearance on the podcast. Whenever I get the chance to catch up with Zvi, I try to get his take on all of the most important recent developments in AI, and with so much going on, this episode stretches to more than three hours. We start with the critical question of recursive self-improvement. Zvi explains why he thinks that recent events mark a transition from the beginning to the middle of the AI story, as well as what he would need to see to feel that we've entered the AI endgame, namely that AIs begin driving AI advances to the point that human research talent no longer matters. From there, we discuss the rising narrative of AI-related job loss, I ask Zvi to estimate the productivity impact that AI is already having on the economy, and we discuss the bankrupt ethics of focusing one's energy on escaping the so-called permanent underclass, which we both see as flagrant defection and which Zvi colorfully argues won't work anyway. After that, we consider the AI live players. The list, we agree, seems to have shrunk to just three companies, with Anthropic probably slightly leading, OpenAI still neck and neck, and Google, in Zvi's mind, most at risk of falling out of the top tier. Zvi also explains why he thinks that Chinese companies won't soon catch up even if they do get an influx of compute, and explores what xAI and Meta might possibly do to get back into the race. From there, we dig into Anthropic's recent update to their responsible scaling policy, consider their conflict with the Department of War, and get Zvi's take on the efficacy of the constitutional approach and whether it's realistic to expect that a powerful AI could be robustly good. Toward the end, we check in on his current p(doom) number, compare notes on how we're each using AI to boost our personal productivity, briefly debate the merits of Goodfire's intentional design research agenda, assess the AI safety community's currently available options, and I get some personal, financial, and professional advice. For a mix of broad situational awareness and razor-sharp insight, there is arguably nobody better. And so I hope you enjoy this wide-ranging survey of the AI state of play with the one and only Zvi Mowshowitz.

0:02

Speaker A

Zvi Mowshowitz, welcome back to the Cognitive Revolution.

2:25

Speaker C

Good to be back. It's been a while.

2:28

Speaker A

It has. I've been busy, and so has the rest of the world. And we've got no shortage of major events from the AI world to cover. Oh, boy, I know you've been busy too. Let's start with recursive self-improvement. I think if there are any historians around in the distant future, which could be as short as a few decades from now, to look back on this time and say what really mattered in early 2026, probably my best guess is that we are in the period where we're really starting to enter into a recursive self-improvement dynamic from which there may already be no return, or we may soon reach a point of no return. I feel like, subjectively, we kind of went from late-early, you know, it was like getting to the end of the beginning, and now suddenly I feel like we're maybe in the beginning of the end, and I feel like somehow we missed the middle. But let's start with just your reflections and observations on where we are with respect to recursive self-improvement.

2:30

Speaker C

Yeah, so we're in the beginnings of steadily increasing amounts of improvement. I would say this feels like the middle. I think the middle is real. I think that the reason why people think of this as the end game is because they don't believe in the actual end game. They have this belief that we're looking at an S curve. They believe the models will be commoditized. They believe that intelligence will be commoditized. They believe that the future will look like the past, except with all of this cool intelligence behind everything, but in the way that Star Trek is basically just modern humanity doing modern human things and talking about modern human issues, except with metaphors. And they don't really believe that everything will transform, that everything will change, because everything is not transforming yet. If I had to use the metaphor of the beginning and the end, I'd say this is the beginning of the middle game. You've got the US government starting to wake up and do crazy stuff. You've got the labs starting to pull away from each other, become importantly different, offer importantly, recognizably different services, building on themselves in ways that are rockets to the moon in various different ways. You've got, frankly, humans stopping writing the code, and you're seeing cycles get faster and faster. But you aren't seeing true transformational changes to the world. You aren't seeing the humans being legitimately out of control of the process. You aren't seeing humans out of the loop. And those are the types of things I would think would count before I would call it an end game.

3:36

Speaker A

You mentioned the S curve mental model. One of the tweets that has resonated, that has kind of rung around my head for the last few weeks, was from Rosie Campbell, who used to work at OpenAI and is now, I think, working on issues related to AI welfare, sentience, consciousness, et cetera. She posted something to the effect of: the S curve can stay steep longer than you can stay relevant. And I wanted to just kind of dig in on the S curve versus exponential for a second. I guess my mental model is an S curve. Does it matter if there's a difference, long term, between an exponential and an S curve, if the S curve plateau is high enough? My kind of mental model is, yeah, it probably is an S curve, but I don't think that really.

5:18

Speaker C

It does S curve, unless our model of physics is very wrong. Because there's a limited amount of mass energy, as we understand it, in the universe, and it can neither be created nor destroyed. And that means there's a certain amount of potential energy and a certain amount of potential utility and a certain amount of potential intelligence that the universe as we know it can contain. So unless our model of physics is wrong, which, who knows, is not my area. We have some very, very strong beliefs about, like, just the things that are theoretically impossible. You can't exceed the speed of light. You can't extract more than a certain amount of energy from a given amount of matter. You can't do more than a certain amount of work. And therefore the amount of theoretical intelligence you can get from that, the amount of utility you get from that by any definition, is limited. So it's an S curve in some sense. Right. But, like, this is, you know, you're sitting around in ancient Athens and saying, there's an S curve, there's only so much technology mankind can invent. And, like, you're right, but it's not relevant to your situation.

6:06
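
To make this point concrete: a logistic (S) curve with a very high ceiling is locally indistinguishable from a pure exponential until you get near the ceiling. Here is a minimal numeric sketch, not from the episode; the ceiling K and growth rate r are arbitrary assumptions chosen only to illustrate the shape:

```python
import math

# Far below its ceiling, a logistic (S) curve is locally indistinguishable
# from a pure exponential. K (the ceiling) and r (the growth rate) are
# arbitrary assumptions.
K = 1e12    # hypothetical hard ceiling on "capability"
r = 0.5     # growth rate per time step
x0 = 1.0    # starting capability

for t in range(0, 80, 10):
    exponential = x0 * math.exp(r * t)
    # Logistic curve with the same initial value and growth rate, ceiling K.
    logistic = K / (1 + (K / x0 - 1) * math.exp(-r * t))
    print(f"t={t:2d}  exponential={exponential:10.3e}  logistic={logistic:10.3e}")
```

With these made-up numbers, the two columns agree to within a fraction of a percent through t=40 and only separate as the logistic approaches its assumed ceiling, which is the "ancient Athens" point: the S curve is real but does no practical work far below the plateau.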

Speaker A

Yeah, there's a long way to go. Yeah, okay. I think that's an important point of clarification for many debates, because people seem to really want to latch on to this S curve idea, and it really doesn't do that much for us in the end. I think that's like.

7:02

Speaker C

I think people, frankly, desperately want to tell themselves a story, and tell other people a story, where it matters that they're saving for retirement, it matters that they're doing ordinary human things and planning for an ordinary human future where, like, things won't change that much, where they don't have to go crazy, they don't have to look crazy to their friends and their family, where everything is going to be okay, everything is going to be normal in a fundamental sense. And it is important to hold onto those things, to prepare for that scenario and to stay sane. But they want the ability to just sort of push all of this aside, basically, and say, here's why AI is not that big a deal. Here's why AI will be only Internet-big, or not even Internet-big in some cases, although that's starting to fade away. I understand why they feel the need for that, and I understand why they latch onto that. But it's simply not looking like that's going to be the case. It's becoming increasingly unlikely that we're going to stay in that zone, and people have to come to grips with that. Yeah, the S curve, at some point we'll hit one. But every day that we don't see that happening is one more day. And basically every few months someone will come up with a new study and they'll say, oh, this proves that we're at the top. And they're all wrong. And even if we were to hit the top of the S curve of fundamental capabilities, which is the curve that we care about, and we hit it with GPT-5.4 and Opus 4.6, and these are the best models we're going to get, and it's all iterative from here? They're still vastly underestimating what's about to hit them even then. And 5.5 and 4.7 are coming, unless they're 6 and 5.

7:24

Speaker A

One thing that many people would presumably have to change their narrative on is if there really were a big displacement of human workers. Right? If we started to see major sustained layoffs, rising unemployment, et cetera, et cetera. It seems like that story has hit, maybe again, the sort of beginning of a tipping point in the last few weeks. How do you understand that right now? Again, we've got, of course, competing narratives, right? Like, the CEOs themselves are saying, we're doing it because of AI and we're going to be more efficient. And stock prices seem, at least in a couple notable cases, to have bumped on that communication. The counter-narrative has been, well, you way over-hired during COVID and the kind of zero-interest-rate timeframe anyway, and so you have kind of an incentive to say it's about AI, but really you're just trying to undo previous mistakes. There's probably at least some truth to that as well. How do you parse the AI layoff story, and what are your expectations for the next, say, quarter?

9:08

Speaker C

Sort of like, it can always be a coincidence up to some point, but every month, every indicator keeps going in the same direction, and things keep going farther. Excuses like "there is dead weight to be gotten rid of" become less plausible as we get into 2026, because it's been several years. We keep seeing statistics tell a consistent story that some of us were predicting, in advance, the statistics would tell. And the people now explaining it away, I don't remember people saying, oh, yes, you're absolutely going to see increases in productivity, decreases in employment, always announcements of job cuts due to AI, but it's not going to be real because of these explanations. I don't remember anybody making that prediction. These are people in hindsight saying, oh, if that's true, then it must be because of this. But it would be incredible if they had observed it in advance, because theoretically that over-hiring was already there. It was already clear, and some people were talking about the fact that there was over-hiring. I believe there was over-hiring, but nobody then said, here's how this is going to play out, that I can remember. And also, the statistics are just coming in, consistently telling the same story over and over again, which is: productivity up, GDP up, real GDP, not just nominal GDP, inflation held in check, employment down. Employment's been revised down every month, over and over and over again. And of course, there's always confounders, right? You could say, well, it's the tariffs. You could say, well, it's the aftershock from COVID. You can say any number of things. But it seems like a really, really big coincidence to claim that all this is happening at the same time when, like, the stock market impact of the tariffs got basically fully erased, and the tariffs have been reversed. Now everything's confounded by Iran. But up until the Iran conflict started, it seemed like we were getting a very consistent story that this kept happening. The job numbers kept getting revised down, the GDP numbers kept not being revised down, and particularly productivity kept going up. And everybody out there in practice keeps telling the same story. And word on the street, at least when I talk to people who talk to people who are not involved in our world, is that there is widespread, on-the-ground, normal-person paranoia that if they lose their job, who knows when they'll get another one, in a wide variety of lines of work. Not that many people are getting fired yet, because it's really, really hard to replace a worker. It costs a lot to train a worker. You want to be conservative; you wait until you actually have automated all the work. But everybody is like, who knows who's hiring, right? Who wants to take on new workers and train new workers to do these jobs when you could train an AI to do it instead going forward? By the time I've spent my two years getting you to be productive, maybe I don't need you anymore. And it's a harbinger of the future. And this has decreased labor power. It's made everybody feel paranoid. It's made people fearful for the future. The kids are freaking out about exactly these problems. They don't know what to study, they don't know what to try and do. It's a very galaxy-brained take that all of this is in your head. Even if you think, again, this is the best that AI will ever be and it's just a diffusion story from here, it's still a hell of a story to say the job market impact is in your head, it's not real.
Now, there's the not-galaxy-brained standard economist take, which is: yes, there'll be a transition period, which we're entering now, where a lot of existing jobs will get much more efficient. They'll get automated, they'll get augmented, they'll get eliminated, and we'll transition. But that's okay, because Jevons paradox means there will be a lot more demand for things like software engineers, and a few other stories. And then, of course, with our new wealth and productivity, we will find other jobs for people to do. There's plenty of jobs for people to do; the Department of War might be hiring. But that's always been the case, right? The people who said, well, this automation of agriculture will kill our jobs, they were right that their jobs would go away. They were wrong that unemployment would follow, in the long run, because of course we went and did other things. And the whole reason a lot of us believe this time is very different is not that technology has never taken current jobs and made them largely obsolete. That's happened many times. The reason why we think this time is different is because the AI is going to do the new jobs that would get created as well. And it's going to happen quickly, and it's going to happen en masse. And therefore you never exit the transition period. The humans don't develop new things. We don't necessarily think there's going to be enough tasks for all the humans that the AIs can't just replace. And this is going to create potentially a large number of people who cannot retrain themselves into a new position and develop something to do that people value fast enough, before it simply gets replaced over and over again by AI. And also, a lot of people are attributing to people a kind of resiliency and ability to shift and adjust that people mostly just don't have. When these things happen over generations, it's much easier to deal with than when it happens over the course of years or even months. That's just unprecedented in human history, and people will not react to it very well. And again, all of this is, like, a relatively milquetoast, normal-world scenario. In the more advanced scenarios, you have much bigger things to worry about.

10:14

Speaker A

Hey, we'll continue our interview in a moment after a word from our sponsors.

15:46

Speaker B

Everyone listening to this show knows that AI can answer questions, but there's a massive gap between "here's how you could do it" and "here, I did it." Tasklet closes that gap. Tasklet is a general-purpose AI agent that connects to your tools and actually does the work. Describe what you want in plain English: triage support emails and file tickets in Linear, research 50 companies and draft personalized outreach, build a live interactive dashboard pulling from Salesforce and Stripe on the fly. Whatever it is, Tasklet does it. It connects to over 3,000 apps, any API or MCP server, and can even spin up its own computer in the cloud for anything that doesn't have an API. Set up triggers and it runs autonomously, watching your inbox, monitoring feeds, firing on a schedule, 24/7, even while you sleep.

15:50

Speaker A

Want to see it in action?

16:43

Speaker B

We set something up just for Cognitive Revolution listeners. Click the link in the show notes and Tasklet will build you a personalized RSS monitor for this show. It will first ask about your interests and then notify you when relevant episodes drop, however you prefer, email or text, you choose. It takes just two minutes and then it runs in the background. Of course, that's just a small taste of what an always-on AI agent can do, but I think that once you try it, you'll start imagining a lot more. Listen to my full interview with Tasklet founder and CEO Andrew Lee. Try Tasklet for free at Tasklet AI and use code COGREV for 50% off your first month. The activation link is in the show notes, so give it a try at Tasklet AI. Support for the show comes from VCX, the public ticker for private tech. For generations, American companies have moved the world forward through their ingenuity and determination. And for generations, everyday Americans could be a part of that journey through perhaps the greatest innovation of all, the US stock market.

16:44

Speaker A

It didn't matter whether you were a

17:46

Speaker B

factory worker in Detroit or a farmer in Omaha, anyone could own a piece of the great American companies. But now that's changed. Today, our most innovative companies are staying private rather than going public. The result is that everyday Americans are excluded from investing and getting left further behind, while a select few reap all of the benefits. Until now. Introducing VCX, the public ticker for private tech. VCX by Fundrise gives everyone the opportunity to invest in the next generation of innovation, including the companies leading the AI revolution, space exploration, defense tech and more. Visit getvcx.com for more info. That's getvcx.com. Carefully consider the investment material before investing, including objectives, risks, charges and expenses. This and other information can be found in the Fund's prospectus at getvcx.com. This is a

17:48

Speaker A

paid sponsorship. Famously, Tyler Cowen said he thinks we can get a half a percentage point, I think, of additional real GDP growth out of AI, and that would be amazing, in his opinion. What would you say? This might be too hard, because obviously there's a ton of noise in the data, but based on what you've seen so far, productivity measures, et cetera, et cetera, if you had to put a point estimate on what we have got right now, what would that point estimate be?

18:42

Speaker C

I haven't tried too hard to estimate it exactly, and I don't think anyone else really has either. I've seen various different attempts to guess. If I had to guess, we're currently enjoying half a percent to a percent.

19:16

Speaker A

Yeah, that's kind of what my gut says as well. And that's very vibe-based, based on a relatively superficial read.

19:25

Speaker C

But, like, this year there was a lot of downward pressure on what would normally be the American economy. You had a lot of fear, you had a lot of regime uncertainty in terms of the tariff regime and, like, various other policies. Usually the types of things that happened in 2025 would be rather bad for business. Instead, things were good for business. And I think this is a lot of why things felt like they were okay for business. They weren't okay for labor, certainly not in the felt experience of labor. I think it's on the order of half a percent to a percent right now, but I think the market is pricing in at least that much indefinitely going forward, and likely somewhat more, even if it doesn't understand that's what it's doing. And the market has done well because it's anticipating the benefits of AI. I think if it wasn't anticipating that, you would have seen a very, very different set of reactions. And that's exactly why, when things started looking awkward for the American economy in other ways, I didn't sell anything. I just held on, because I knew that AI was going to prop things up.

19:33
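
For a rough sense of scale on that half-a-percent-to-a-percent estimate, here is a back-of-the-envelope compounding sketch; the baseline growth rate and horizon are assumptions, not figures from the episode:

```python
# Back-of-the-envelope compounding of the 0.5-1% estimate of extra annual
# GDP growth from AI. Baseline growth rate and horizon are assumptions.
baseline = 0.02        # assume 2% real GDP growth without AI
horizon = 10           # years

for ai_boost in (0.005, 0.01):   # the low and high ends of the estimate
    with_ai = (1 + baseline + ai_boost) ** horizon
    without_ai = (1 + baseline) ** horizon
    extra = (with_ai / without_ai - 1) * 100
    print(f"+{ai_boost:.1%}/yr for {horizon} yrs -> economy {extra:.1f}% larger")
```

Even the low end compounds to an economy roughly 5% larger after a decade than it otherwise would be; the high end, about 10%.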

Speaker A

It does seem like there's been a bit of a split recently. I mean, people talk a lot about, and I don't necessarily buy this frame, the worry about being part of the permanent underclass, or making your way into the upper class before things get locked in. I don't really buy that, or it certainly doesn't frame much of my thinking on an individual basis. It does seem like that dynamic might be coming to stocks, though, because we do see, like, Anthropic drops a relatively minor product extension, in the scheme of everything they're doing, and, apparently pretty closely correlated to that, you'll see stocks drop. Are you buying that level of, you know, stock permanent underclass?

20:43

Speaker C

I want to hearken back to one of the most important movie scenes in history, which is Bane confronting his backer, right? And the person who hired him says, I'm in charge here. And Bane looks at him and says, do you feel in charge? And I think there is this very clear idea that people think, oh, if I have the rights written down in electronic databases, if I have the stock certificates, if I have the private property, then I get to be one of the special elite. Everyone else gets to be one of the underclass. I have to be one of the people in the elite, even though I will also not be productive, even though I also will not be in the loop, I will also not be able to exert meaningful optimization pressure except through my technical authority as the person who has marks in a database. I think the idea of relying on marks in a database, in this kind of world, to keep you alive, to keep you meaningfully wealthy and able to consume physical goods, to have a good, prosperous life for you and your descendants, is hopium. It's not a good plan. If you think that humans are sufficiently useless that most of us end up in a permanent underclass because we cannot be economically productive, then your best-case scenario is you slowly lose your wealth to various different extraction methods. And the more likely scenario is all of that gets ignored, by facts on the ground, by physical reality just rendering all of that irrelevant, or the system gets taken over and subverted, either by humans using AI or by AIs. The long history of the world, even with much less severe disruptions, does not have a good track record of private property surviving over long periods of time when someone else has the guns, when someone else has the swords, when someone else has the power in the senses that matter, the meaningful power. If they don't have a reason to let you keep it, you don't keep it, not really. And we've seen the collapse of the colonial era. We've seen the collapse of basically almost all the ruling regimes. We've seen many, many examples. Even if you're not going to take the arguments around AI seriously, I just can't believe you're counting on this. When people talk about permanent underclass and they talk about, oh, I'm going to have the skills to be productive with coding agents, so I'm going to be a valuable person going forward, but I have a short window. It's like, why is it a short window? If humans can scale up and be productive in the future, you can scale up and be productive in the future. And if you can't, you can't. So there's no particular urgency there. You can make the argument that this is one of the last chances, in some sense, to use your talent. Suddenly there's this window where your talent can create a billion-dollar company or even a trillion-dollar company, and you would have a large portion of that. You can enjoy a lot of money. I don't think it's particularly relevant. I think that, in practical terms, there are two likely ways this plays out, if AI is for real and goes the way I expect it to go. One way is we lose control of the situation in various senses; everyone dies, or there is at least massive loss of control, massive loss of resources, massive destruction, massive disruption. In either case, none of this is going to be that relevant to you particularly. The other scenario is things go relatively well, and then there is basically abundance of real resources and the humans are more or less in control.
In that case, I think if you're a citizen of the United States, you don't really have to worry very much about your material needs. Like, okay, sure, you're a member of the permanent underclass. Congratulations, you have a real income that today would be called a million dollars. You have access to robots and intelligence that caters to anything you want. Your day is free. You don't really have to work. That's probably not exactly how it plays out, but basically you're going to be better off, except in relative status terms. And you really shouldn't have to care about that. You need to get over the fact that you are not relatively wealthy or relatively respected in that world. And there'll be, I'm sure, ways in that kind of world to compete for status within the human hierarchy. There'll be ways to meaningfully occupy your time. If we're still in control, we'll figure that stuff out. And then I guess there's a third scenario where, like, some cabal uses AI to take over, in which case you need to be in that cabal if you want to be part of the people who take over, and maybe influence it to become the good world instead of the bad world. But just making a bunch of money is not going to get you in. That's not how those worlds work. So, again, mostly you should be trying to make sure we don't get into the bad worlds where we lose control, and then that we get into the good worlds where we retain control and humans are in charge of steering what happens. And mostly I expect that, even if a small group has the ability to steer that world, most such groups will steer it in ways that we're pretty happy with. A lot of people talk about how Altman might try to make himself God Emperor, or Demis Hassabis might try to make himself God Emperor, or Dario Amodei, or whatever. And that's not my preferred outcome, to be clear. But I don't think that if Sam Altman made himself God Emperor that, like, I would be that sad about it in terms of my practical lived experiences. I think my life would be fine. I think my children would be fine.

21:32

Speaker A

Yeah, he does have a little bit of a Roman Caesar kind of vibe to him, where, it seems, he does fancy that sort of power, perhaps. But also, when he does things like fund universal basic income experiments out of pocket, I view that as, like, honestly, probably genuine magnanimity on his part that I would expect to probably extend into.

27:00

Speaker C

It's very easy to be magnanimous when there's a true full abundance of resources. He could, in theory, keep 99% of the value of the light cone for himself, and the rest of us can be very, very happy with the rest. And I'm not saying this is my preferred outcome, to be clear. I do not want this to be how it plays out. I think this is bad. But that's still better than not building it. That's still better than various different horrible outcomes, especially everyone dying. But also, yeah, it's better than the status quo, like, in an objective sense, except for the opportunity cost. So what I'm worried about is, in fact, losing control of the situation. Right? And the reason why you don't want him out there making the decisions is not because he might take over. That's not the primary concern, in my view. It's because he might make decisions that cause us to lose control, and for him to lose control, because he is being irresponsible. And that is the reason I am terrified of him making these decisions. Right? Like, it's not because I think he is, like, an evil man. So, like, it all comes down to normativity in my mind. Right? Normativity is the concept that good things are good and bad things are bad, and you want good things to happen to people in general. Not necessarily every person; you're allowed to think there are some bad people. But good things should happen to good people. If you believe that, then basically things will work out, even if it's not exactly your preferred outcome. And you know what? I think most people are normative. I think even the people in charge of the labs right now, most of them are normative. And to the extent that there are people, relevant people, who don't think that way, I simply say nothing.

27:28

Speaker A

I will say, I do think it's pretty distasteful when I see people talking openly about trying to escape the permanent underclass. My reaction to that is always like, why don't you try to make things good for the permanent underclass rather than try to escape it? That just seems like really flagrant defection that I just can't.

29:11

Speaker C

It's flagrant defection to focus on getting a seat on the ark if you think the world is going to be flooded. Like, that is a terrible, terrible position. You should be trying to stop the flood. You should be trying to save more people and build another ship. If all you're trying to do is get one of those precious few seats on the ark, that's not a good thing to do. At the same time, there are times when the best thing you can do is just escape the bad regime. There are, in fact, times when that's, like, put your vest on first, get out while the getting's good, because what else can you do? And I respect that. But the way they talk about it, yes, it has the bad kind of elitism, the kind of just complete disdain for the common man. Indeed, very disgusting. And I really don't like it. And, you know, yeah, I expect we can make the permanent underclass pretty neat. And in fact, most of the time the world has had a permanent underclass, and the permanent underclass has never had it better than it has it today.

29:31

Speaker A

Yeah, it's important to remember that. What would, in your mind, and maybe we could talk about this in terms of, like, handicapping the timeline and then maybe kind of just a little qualitative description, represent the transition from the middle to the end game? Of course, I think everybody who follows this feed knows the general range: Dario is still kind of on AI 2027, more or less, Demis is more like 2030, and OpenAI has a March 2028 timeline for their fully human-level automated AI research paradigm kicking in. I assume you're somewhere in that range in terms of expecting things to enter into what you would call a late game. How would you call that transition? Or, I guess, maybe we even should understand your view better on what the difference is between middle and late. Is it some sort of event horizon

30:39

Speaker B

point of no return?

31:50

Speaker A

Or is there some other concept that would separate those? How would you call it? What are you looking for? When do you think that is most likely to happen?

31:51

Speaker C

The end game is, like, I would say, when it's the AIs that are largely running the show, at least in the further development of AIs. Right now we're seeing AIs augment the humans: the humans are making the central decisions, the humans are reviewing the code, the humans are making the plans, and the AI is a multiplicative factor. It's something that you're supervising, something that you've enabled. When that changes, things get very strange. One of the key aspects of AI 2027, the tabletop scenario, is that mostly your progress is proportional to your compute allocation and your current place. You move up the timeline at a rate proportional to what percentage of the world's compute you have, because your researchers no longer matter very much. Right? Because you're telling the AI, which is already effectively smarter than you are, to go build a better AI and to go align the AI. And you make a decision, like, what percentage of the compute should go to safety and keeping this thing steered properly versus increasing its capability. And for the early parts of it, you assume that's going to be respected, and then eventually you can't assume that's going to be respected. Right now, I would say the top labs have a dramatic talent advantage. In these types of scenarios, though, and I think this is a reasonable hypothesis, although I'm far from certain, at some point your researcher talent doesn't matter very much, because the AIs are your researcher talent. So all that matters is where you are on the curve and how much compute you have. Right now, Anthropic seems to have the best talent and gets the most out of every given amount of compute they spend. And then you have OpenAI and Google, that have strong talent; they get a lot out of the compute they spend. And then what's going on with xAI and Meta? They're throwing tons of compute at this problem and they're falling farther and farther behind. At least this appears to be true. And fundamentally speaking, I think that's the lack of talent, that's their inability to execute as humans. And so the end game comes when it doesn't matter that much which humans you have. Essentially, the humans no longer provide the edge. Right now we're playing centaur chess: the human plus the coding agent is much superior to the best human on his own or her own, and the AI agent on its own doesn't do anything. You need a centaur. The moment the human doesn't matter anymore, and you transition to, okay, the human just needs to be somebody who's able to do some common sense stuff and isn't even one of the best players anymore, now we're starting to talk about endgame-style scenarios. Also, yeah, when you approach the event horizon, when the release time to a meaningfully different model starts to go to a month, starts to go to a week, things get absurdly fast. These are the types of things we'll start to see. But the ultimate Yudkowskian scenario is, like, you just leave it on overnight and then you wake up when things have happened. And then you know you're in the end game at that point. You also can think of the end game as when the world starts to actually transform. You start to see the robot factories being built. You start to see large amounts of area being terraformed. You start to see massive job disruptions. You start to see governments start to do major interventions. You start to see this be the issue, the thing on people's minds.
And we just had one of the most important developments in the history of AI happen with the designation of Anthropic as a supply chain risk. And it wasn't even the most important thing the Department of War did that day, according to most people. It got completely consumed by the exact same person who sent the tweet out doing something else that night, which has gotten, like, 100 times more coverage, because people think that's the important thing that happened. And I think it very much remains to be seen what the important thing was that happened that day.

31:58
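
The tabletop rule Zvi paraphrases can be written down in a few lines. This is a minimal sketch of one reading of it, not the official AI 2027 scenario mechanics, and every number is an illustrative assumption: in the midgame a human-talent multiplier scales what a lab gets out of its compute, while in the endgame only compute share matters.

```python
# A minimal sketch of the dynamic described above (one reading, not the
# official AI 2027 mechanics). All numbers are illustrative assumptions.
labs = {
    # name: (share of world compute, human-talent multiplier)
    "A": (0.15, 2.0),   # talent-rich, compute-poor
    "B": (0.35, 1.0),
    "C": (0.50, 0.6),   # compute-rich, talent-poor
}

def progress(months, talent_matters):
    # Each lab moves up the capability timeline at a monthly rate
    # proportional to its compute share, times talent if talent matters.
    totals = {name: 0.0 for name in labs}
    for _ in range(months):
        for name, (compute_share, talent) in labs.items():
            totals[name] += compute_share * (talent if talent_matters else 1.0)
    return {name: round(val, 2) for name, val in totals.items()}

print("midgame (talent matters):", progress(12, talent_matters=True))
print("endgame (compute only):  ", progress(12, talent_matters=False))
```

With these made-up numbers, the talent-rich, compute-poor lab "A" keeps pace in the midgame and falls behind the moment talent stops mattering, which is exactly the transition Zvi treats as the marker of the endgame.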

Speaker A

So I think I have to give you some Bayes points when it comes to drawing the circle around who the live players are. I think for many conversations going back in time, the record will show you've pretty much always said it's those same three companies that you just listed that are the real live players. I've always been grasping at who we might add and what might be the rationale for adding them. And I asked you this question a minute ago about the permanent underclass of stocks. You went in a different direction with

36:10

Speaker C

it, but listening to those I just got distracted.

36:38

Speaker A

All good. But I guess what I'm taking away from your analysis is that the permanent underclass of stocks might be almost all the stocks, and it might even extend up to big tech, you know, like Microsoft, Amazon. Obviously they have at least held their own slash done pretty well so far. But if you have a model of who can get into this next regime first as kind of really mattering most, and only three companies right now seem to be well positioned to do that, then basically you're short everything else over a three-year time frame. Is that accurate?

36:42

Speaker C

So I think there are a lot of different parts of AI, and you can make money doing a wide variety of different things, and the real world always takes longer than you think it does; the diffusion of technology is slower than the original idea. So I think there's still a lot of room for a lot of different groups to win. I think there'll be a lot of, oh, we're going to buy out people who have useful components, even if eventually we wouldn't need them anymore, because it's faster. I wouldn't expect everybody to go to zero. I think what's going on, basically, is that the stock market is forward thinking, and they are trying to do some haphazard forward thinking. They say, oh, okay, the software-as-a-service business has a good business, but in five years they won't have a business unless they reinvent themselves and create new products. But maybe they will reinvent themselves and create new products. Still, their original basis of valuation has kind of been destroyed. And yes, that will extend to a wide variety of other companies. I think a lot of companies, in fact, are going to be long-term losers. But it's always been true, if you look at the S&P, and you play this game of, like, okay, let's take the 10 top performers for the next 10 years, and everyone else doesn't do so well. The gains come because some companies do really well, always have, and everyone else kind of languishes. And so that's why you want to be diversified. Because can you pick those 10 companies? And the answer is no, most people can't. But in this case, I think you can make some pretty good guesses, but you don't really know. But also, keep in mind, the market's dumb about this. The market is reacting on a very superficial level. If you're listening to this podcast, you have thought much more intelligently about the situation than the market has. I'm not just talking about me. And so, like, they announced that Claude was going to offer a COBOL product. IBM was down 10% in the wake of that announcement, because suddenly the stock market woke up and they said, oh my God, AI can write COBOL code, AI can translate COBOL code to regular code in programming languages people know, IBM's business is in trouble. And the rest of us were like, you didn't know? This is news? The fact that they built a slightly easier-to-use tool changes things at all? And what has happened since then? IBM has fully recovered, because it turns out that, yes, either it's been priced in, or they went back to not believing it, or something. But clearly, like, the market doesn't know what the hell is going on. You see this over and over again. The market is very slow to update on these things. They do superficial reactions. The market has wrong-way reacted to Nvidia many times, to very clear news where demand for Nvidia's product is up and Nvidia's stock responds by going down. That is not how economics works. That is not how capitalism works. And yet here we are. So, yeah, I would say there are some clearly good buys. They're not as clear as they were a year or two ago, or several years ago, when it was just completely, utterly obvious what some of the good buys were, because now the multiples are in fact respecting a large amount of growth in those stocks. But it still seems pretty obvious that if you were to buy a basket of the stocks that seem clearly positioned to do well and short the rest, it would be a very good strategy in expectation; your thesis would have to be very wrong for it not to work.
We can set that aside. So, on the live players: yeah, as I said, I think the talent has really proven to have migrated to the big three. And I think that there's a large and growing gap between three and four. So if I had to think of a potential fourth at this point, there are a few possibilities. Obviously, Meta or xAI could in fact get their shit together and manage to find people who can execute, but I don't see any evidence of that. We just learned Meta postponed its next release again. There seem to be continuous reshuffles, trouble in the opposite of paradise. Things are not going well, as far as we know, in Meta land. And xAI put out probably the most disappointing major model release of any major lab in the history of large language models in 4.2. And I see no sign of much happening. They're not doing the things you would do if you wanted to recruit good AI capability or safety workers. They're actively dismissive. They disbanded their safety team. They pooh-poohed the idea that anyone could be actually in charge of safety. They said, well, safety is everyone's job, that's how it is at Tesla and SpaceX, which is not true. Tesla and SpaceX have very dedicated teams and very dedicated safety people who make sure everything is safe, because anything else would be completely insane. But, I mean, Musk just doesn't understand. You can't just, like, run software engineers into the ground, ask them to have miserable life experiences, ask them to align to your personal preferences and whims all the time, and then hope to get the best talent. How exactly? Why do these people want to work for you? You'd have to give them even bigger packages than Meta, and that's not going to happen. So he's not going to get the best talent. The "I'm going to spend infinite money and this will get me the best talent" approach just hasn't worked. And so they seem to be falling out of that competition. So for the Western side, that leaves just the big three. On the Chinese side, there have been a lot of stories over the last few years about how it's the Chinese labs, originally DeepSeek, but also, here's Alibaba, here's Kimi, here's whoever else, they're the ones who have the new hotness, they're the ones about to catch up. And I've said before, DeepSeek had some decent models, Kimi had some decent models, some other people had some decent models, but nothing that close, nothing that scary, nothing that told you they were actually catching up. And that seems to have been borne out. Obviously, at some point I could be wrong. DeepSeek V4 will then be one of the most important remaining tests of this thesis. It was supposed to have already come out by now; I'm not sure what's holding it back. But when V4 comes out, we will compare it to Opus, we will compare it to GPT-5.4. If that is not the right comparison, if it is not trying to play in that league and it is clearly still far behind, then I think we can kind of say, okay, DeepSeek had a DeepSeek moment, as we call it, where the stars aligned to make everybody super excited about what they were offering. But mostly what they were offering was how to do more with less. They were geniuses at efficiency, geniuses at working on the bare metal, figuring out how to get something really good out of not very much.
And then they put it in a really great package at exactly the right time, gave it some user-friendly features, attracted a bunch of market share, attracted a bunch of attention, scared the shit out of everyone. Since then, it's been quiet. They still did some cool math stuff, don't get me wrong, they've had some innovations, but they're just not playing on that level. And this is their last chance to prove me wrong. I think you kind of have to more or less dismiss them as that level of competition. You put them back in the pack with the open source league. Right?

37:24

Speaker B

Different league.

44:49

Speaker C

And they're competing in that league. And they're not necessarily the best in that league, not necessarily not the best in that league; it's unclear. But I think that's a very different league that's substantially behind, and it would be very, very hard for them to get out in front and actually innovate, because they're fast followers. And I have respect for fast followers, but it's a very different skill than trying to do something in the lead. And also, like, I don't think they can compete, frankly, with the kind of recursive self-improvement we're seeing with Claude Code and Codex. I don't see them trying, and I think anyone who doesn't do that is pretty doomed here. In fact, I'm starting to think, when I watch the models, when I watch how people talk about the models, when I see how they talk about their scaffolding, when I see what they're doing, I think Google is in danger of dropping out of the top tier.

44:50

Speaker A

Okay, that's a big claim. We'll come back to it in just a second. On the Chinese companies front specifically, I guess a few different follow-up questions: is it talent or is it compute? I mean, I think you could at least make an argument that the talent is there and it's just not the compute. And if compute allocations were to change, then maybe a DeepSeek, maybe an Alibaba, although there's been, like, major disruption there too, as far as I understand, recently, in terms of team changes, let's say

45:39

Speaker C

We can't evaluate their talent level as carefully, precisely because their compute is lacking.

46:11

Speaker A

But if that were to turn on, whatever the mechanism, let's imagine a certain executive decision allowed that to change, or perhaps a technology breakthrough domestically on the Chinese manufacturing side.

46:15

Speaker C

Yeah.

46:29

Speaker A

Would you then expect any of those to

46:29

Speaker C

The manufacturing side? I think that's basically not possible, in the sense that it would take many years to physically play out. Even if they figured out how to do it, they would need to scale up. These things are physical, they take time. I think we're talking about, like, five-year-style timelines. And if things are going to come to a head faster than that, in many ways it kind of doesn't matter at that point. The only way they're getting the quality and quantity of chips necessary to be competitive at this level is if we give them to them. Huawei is not going to manufacture them fast enough at scale, even if their efforts are completely successful in terms of what they're trying to do. We can just rule it out at this point. If we're talking 10 years down the line, maybe they can do something relevant. That's still just not that much time, because by definition the AI they'll be working with won't be that advanced. Twenty years, sure. But let's not get ahead of ourselves. Then there's the question of talent. In terms of underlying, just raw human talent, obviously China has tons and tons of talent, right? Like, you know, there's tons of talent out there. A lot of these people have studied machine learning. A lot of these people are, you know, really raring to go. They have, like, all the right attributes. No doubt. It's a big country with a very good educational system and a lot of very smart people. And a lot of people care about this stuff. And a lot of them are desperate to find something good to work on. And there are advantages there; tons and tons of unemployment really drives people. China's got problems. But at the same time, they are focused on a very different style of skill and style of problem, because that's what the Chinese are pushing and that's what the Chinese incentives go towards. These people are skilling up in the ability to deliver these types of open, efficient developments, right? They're drilling into, like, how do I do small well, how do I do fast following well? And the entire ecosystem is built around these different types of skills, these different types of talent. And I think there's a really big difference between one set of skills and the other set of skills. In the same way that, like, OpenAI has very, very strong talent, and they decide to build an open source model, right? They create an open source model, and in some ways it's got its charms; in some ways it's the best model at certain very specific things, maybe even at all, and certainly from an open perspective. But for the purposes that the Chinese models are being used for, mostly it's useless. It's just not a very good model for those purposes, despite the fact that OpenAI has internal access to its talent, its compute and its best people, because it's a very different skill. And I think it works the other way too. And so I think that if the Chinese suddenly got this compute, there would be a transition period where they would have to learn how to do the thing that the major labs are doing. And also they'd have to build their own synergistic harnesses and scaffoldings and learn how to do all the stuff that Anthropic and OpenAI have been doing over the course of years. So over the long term, do they have the talent? Yeah, obviously, absolutely. I don't think America is special in that sense. But I think our lead is bigger than it looks, I would say, and is more robust in some ways than it looks.
But I don't think we want to test that theory and find out by suddenly giving them compute at parallel scale and seeing.

46:35

Speaker A

Here, let me understand a little bit better what the difference is, because I think one thing that has obviously been talked about a lot recently, and Anthropic even put something out saying we're seeing this, right, is the distillation from American frontier models happening at the Chinese companies. And my attitude on that, I think you're going to have a very different take, but my take has been: sure, they might be doing that. They definitely like to take shortcuts. And especially if you have a story where you're like, they're cutting us off from compute, so let's take whatever shortcuts we can get, there are any number of ways you can tell yourself why that makes sense to do. But if I think to myself, okay, what would be fundamentally hard if I was going to try to start Nathan's AI company today? I would say, well, going and getting a bunch of expert data, that is kind of like what Scale and other data providers have done. Obviously, that's very resource intensive, it's very time consuming, it's all kinds of things. But I feel like, at heart, I could run that project. Give me $10 billion and I can go build out the network and hire the people and get that flywheel going. I don't feel like there's anything there where I'm fundamentally outclassed by the people doing it. Maybe I'm wrong, but I don't feel that way. Whereas if you said, hey, here's all that data, turn it into a frontier model for us, Nathan, I would be like, oh, wow, okay, this is going to be really hard. And I do think the people at the top companies are just clearly outclassing me in their ability to do that. So when I look at the Chinese situation, I'm like, okay, sure, they might be, like, cheating in a sense, stealing in a sense, to get the data. I'm also still kind of like, well, but the hard part is, like, what to do with that data. Right? It's a big investment in data that they're kind of taking a shortcut on. But once you have it, you still have to know what to do with it.

50:08

Speaker C

Yeah, I think what's going on is you're conflating some different things in your model, and that's causing you to get confused. So there's the data in terms of just what is the raw data from the world, from the Internet, from books, from other sources, that you're using as your baseline. And I agree that you could run that project, and the Chinese can run that project, and I'm sure the Americans are investing more in that and have a richer base in some sense, but not in a way that probably matters that much. Because even if you have twice as much data, if it's all similar quality, it's only a factor of two, and in this world it's all about factors of X. What matters for that data is how you clean it, how you figure out which parts of it are important and need to be upsampled versus downsampled, how you emphasize it. And yeah, that stuff is much harder and much more valuable, and I expect the American labs have a large edge in how they get their data ready at this point. Although, I don't know, it's something that's internal. Maybe the Chinese have actually specialized in doing this really, really well; maybe it's one of their areas of relative strength. That's my guess, but I don't know. But that's different from what we're talking about with distillation. Distillation is not an attempt to extract the trillions and trillions of tokens that went into the model. Distillation is an attempt to use the model's intelligence: to use the model to figure out how it reasons, how it makes decisions, what types of behaviors it exhibits, especially in cases you are specifically curious about, and then to use that to train your own model to pattern-match to and emulate that model. That's unique, different data that didn't exist when the original model was created, and it is uniquely useful in creating something similar that can emulate those skills. It's the difference between having a physics textbook, versus talking to a physics professor, versus talking to a truly expert, world-class physicist. Distillation gives you that real expert, where you have effectively unlimited access, if you have tens of thousands of rotating accounts, to see exactly how the person you're trying to emulate responds. Imagine I'm an actress and I'm trying to make a biopic. I can read all the books written about the person, whoever she was, what she did and how she acted and what the world around her was like. But that's not the core of what they do, right? They do that, but what matters is they talk to the person they're copying if they possibly can. They spend time with them, they interact with them, they copy their mannerisms, they see how they respond, they ask them about hypotheticals. They distill this person through these direct interactions, and that's so much more efficient, and that's what distillation gets to do.
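To make the mechanics concrete, here is a minimal sketch of what API-based distillation looks like in practice. Everything in it is hypothetical: query_teacher stands in for calls to some frontier model's API, the probe prompts are invented, and the pipeline is reduced to collecting (prompt, response) pairs for a later fine-tuning job; no real provider, model, or endpoint is implied.

```python
# Minimal sketch of API-based distillation (all names hypothetical).
# The "teacher" is a frontier model reached over an API; the "student"
# is your own model, later fine-tuned on the teacher's responses.

import json
import random

def query_teacher(prompt: str) -> str:
    """Stand-in for a frontier-model API call. A real pipeline would hit
    the teacher's endpoint, possibly across many rotating accounts, as
    described above."""
    return f"<teacher response to: {prompt!r}>"  # placeholder

# Targeted probes: you ask exactly the questions whose answers you want
# to emulate -- reasoning traces, edge cases, refusals, style.
probe_prompts = [
    "Walk through your reasoning: why does ice float on water?",
    "A user asks for medical advice outside your competence. Respond.",
    "Refactor this function and explain each change: def f(x): return x+x",
]

def collect_distillation_set(prompts, samples_per_prompt=4):
    """Sample the teacher several times per prompt to capture its
    distribution of behaviors, not just a single answer."""
    records = []
    for p in prompts:
        for _ in range(samples_per_prompt):
            records.append({"prompt": p, "completion": query_teacher(p)})
    random.shuffle(records)
    return records

if __name__ == "__main__":
    data = collect_distillation_set(probe_prompts)
    # The resulting file is the "unique, different data" described above:
    # it did not exist until the teacher was queried, and it is exactly
    # what a supervised fine-tuning job for the student would consume.
    with open("distill_sft.jsonl", "w") as f:
        for rec in data:
            f.write(json.dumps(rec) + "\n")
```

The design point the conversation makes is visible in the shape of the data: the probes are ten pages of questions, not a thousand-page textbook, so every collected pair is maximally targeted at the behavior being copied.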

52:06

Speaker A

So to summarize that back to you: you see the advantage that the American companies have as being not in the collection of the data, but basically in knowing what to do with the data. And you're highlighting a difference in kind in the data: the data that we can go out and source from the world needs a lot more post-processing than the data that we can get directly out of a model, which has in effect already had that post-processing done.

55:12

Speaker C

And also, it's exactly the data that you want. When you're distilling, you get exactly the data that would be most useful to you, that you think to ask for. Right? It's like, you can have a thousand-page textbook, or you can ask ten pages of questions and get answers, and that second one is probably a lot more useful to you.

55:41

Speaker A

Yeah, okay. I think that's a helpful update, a refinement to my understanding. In terms of the rest of the field: you mentioned, obviously, Meta, and I'll just call it Elon Corp at this point, since we've also got some joint ventures between SpaceX and Tesla when it comes to manufacturing robots and stuff.

56:03

Speaker C

That's the best thing to call it for now, and then see what happens. But yeah, I mean, if you're counting Tesla self-driving and stuff, then it gets weird.

56:23

Speaker A

I guess the question is, tactically, what moves do they have? I mean, "get your act together, catch up," whatever, that's one that seems like it is maybe slipping from their grasp. Meta seems like they may have some place still, in terms of, if you release a good enough open model, you can maybe disrupt the business of the others or create some sort of alternative that takes the wind out of their sails. And then for Musk Enterprises, it's clearly something to do with real-world, deployed, physical, scalable intelligence in cars and robots.

56:29

Speaker C

I think they're in very different spots. Meta is the conceptually easier one. Meta is trying to sell ads. Meta is trying to put features on their smart glasses and develop a metaverse. Meta is trying to be a consumer company that creates products that people are willing to either spend money on or let their eyeballs be captured by, in various senses. And they are very good at monetizing that; they want to improve how they monetize that, and they want to build better products. If I was Meta, if you suddenly said, Zuckerberg's out, you're in, here are your controlling shares, you have these goals, I would not be trying to build frontier models. I think it's a mistake. I think there's just no reason to be investing all that money into something that other people are very good at. Obviously, if you're like, whoever has the best models controls the world, this is the entire future of humanity, this is the singularity, we've got to be in the game, then you do what you've got to do. But if you are a businessman and you don't believe in that, and when Mark Zuckerberg says superintelligence he means super at selling ads, he means super at providing a good Instagram feed, it's "super," it's "intelligence," but he's not acting that pilled. And if you're not that pilled, honestly, there are three companies that have very good products: license one of them and be done, partner with one of them. All three of them will take the call, right? I realize that they're probably going to prohibit mass surveillance, but I think they can work out something that keeps everybody reasonably happy. And I think all three of them would offer them a very good product at a much, much lower price, and they could work something out where Meta paid for a license to use it internally for their business purposes, where they didn't have to pay full retail prices, stuff like that. I would just give up, honestly. You're writing these giant hundred-million-plus-a-year checks to these various people. You're trying to compete in a world you don't need to be competing in. It's fine; I would give up. Alternatively, I would try to buy one of them. I mean, you can't buy Google, but you're still worth a couple trillion dollars, so maybe try to do something there. But I would just give up. Musk is in a different position, because Musk is explicitly trying to become God emperor, right? He's explicitly said: I think this is going to potentially kill everybody, I think this is the most important thing in history. And his response to that is: it has to be me. I have to be the one to do it. Anybody else, it'll go badly. Only I can fix it, only I can solve this problem. If the world is going to be destroyed, it has to be me who does it, or else what am I even doing? Slash, he thinks he's in a simulation, which I think is actually impeding his decisions, and I worry for his sanity in various ways for various reasons. But this is what he thinks matters, and I think he's right in the fundamental sense. So he has to catch up. And I think he has three plays at his disposal, and he's trying some of them. Play one is to make a bet that this is all about compute, and that therefore it's all about a combination of money and energy and the ability to acquire chips.
And yeah, you know, you have SpaceX, so you're the one who can launch things into space, and maybe the data centers go into space, and maybe you're the one who can clear the deserts to put your solar panels in, and maybe you can have just way more compute than everybody else. And if you can hold on until you have the intelligence and the models, you can go into self-recursion mode and you can win in that sense. I don't have much faith in this plan. This plan is bad; it's not no plan at all, but I am very skeptical of the data-centers-in-space plan on the timeframes being talked about, because physics: space is expensive and hard, and you're solving problems that don't need to be solved, at very large expense. The same way that we're not currently mining the asteroids, and there's a reason. Not that we'll never mine them, but chill. And I just don't expect these limiting factors to be the things that actually stop anybody. And I don't think this makes up for a lack of talent. I don't think this makes up for a lack of internal scaffolding infrastructure. Where is Grok Code? Where is a Grok Codex? All I see is Grokipedia, which is just a giant pile of slop. So it's not the same thing. Then plan two is what you talked about, which is physical world modeling, having access to the real world: I can create self-driving, I can create robots, because I have better training data. I can then build physical infrastructure in the real world, so I end up mattering more even though my intelligence is not as strong. I mean, it's a play. And if the technology plays out such that we hit a wall in other ways reasonably soon, it could be a relatively decent play. I think it's overtaken by events, basically. I think this is not where the battle is won and lost, but again, it is where his comparative advantage lies, and it makes sense to make a bet on where your comparative advantage is. I don't know. I don't see that going so great. I don't see it as that promising. I think there are quite a lot of people who can manufacture things in the world, many of whom are in China, but also many of whom are not in China. And the idea that because it's internal to Tesla he will have some sort of huge advantage over people who have to make deals with actual manufacturers... the actual manufacturers are not going to be that expensive to buy. Anthropic is worth more than McDonald's or Coca-Cola already. If these companies hit their potential, if these are $10 trillion companies in two years, they can buy US Steel or whoever they need to buy to make stuff. If they need to do that, it's not going to be a problem. I don't think he has the scarce inputs, essentially, in this scenario, to pull it off. The question is, does he have scarce inputs in terms of data? I'd say probably not; that it's irrelevant is my guess. And also, I think this underestimates how important just raw intelligence is. If you want something to drive a car, you start by designing a human who is really smart, and then you have the human learn to drive relatively quickly. You don't try to get the dog to drive the car. It's not that extreme; I'm just trying to illustrate. But the idea is that you want to focus on actually getting the geniuses in the data center, as Dario Amodei puts it, or something even greater than that.
If we have abstract superintelligence over here and we have physical world skills over here, you bet on the abstract superintelligence, unless those physical world skills are how you develop superintelligence. If it's only giving you physical world abilities, you lose, because I get those physical world abilities rapidly after you, and then I have a much better agent than you. So that's plan two; and then, of course, you can combine these plans. And then plan three is the thing that obviously Musk should do, but that he's not going to do, which is to stop running this company the way he runs companies, and to run it like he would run a leading lab that is in fact interested in attracting the top talent and giving the top talent a chance to do good work in a good fashion, with a good corporate mission, et cetera, in ways that will cause people to rally to his side. And I don't see that happening. I don't know if that ship has already sailed; it's very hard to undo a lot of reputational damage. Musk is heavily red-coded at this point, which in this case is a massive disadvantage, just objectively, given who the vast majority of the people you want to recruit are.

57:12

Speaker A

You got to be able to recruit from the polycule. You can't just ridicule the polycule.

1:05:35

Speaker C

I'd say, when you go after Anthropic this heavily for the very fact that they are trying to do responsible things and they care about how their models act, you are destroying your ability to recruit. When you have a long history of working your people insanely hard in pretty cruel and arbitrary ways, of creating reigns of terror, if that's the word on the street, even if it's not true, because I've never been there, that makes it very hard to recruit. Who wants to work for Musk, right? Objectively speaking, I don't know the amount of money it would take to get me to work for xAI. But even if I had full latitude, like, your conscience is clear, do the things you feel are important and responsible to do, there's an extra zero on that contract versus what it would take for me to go work for, say, Google, which is not a company I particularly love either. So yeah, that's a real problem.

1:05:38

Speaker A

So let's go to Google. You made the provocative, not quite claim, but maybe speculation, that they could be at risk of falling out of the top tier. I guess my first reaction would have been to cite a lot of the assets and advantages that they have, similar to the ones Musk has: they've got a whole robotics department with a long-standing line of work there. They've got self-driving; that's one of the two companies that can actually deliver that in a meaningful way today. They've got all the bio and science stuff that they've done. Just, you know, the deepest bench, the most bets, the most well-rounded portfolio. I take your point that probably the same argument applies, that all of that kind of is a fast-depreciating asset in a world where the core agent starts to win. So I guess my next argument would be: Demis, Shane Legg, Jeff Dean, like, are these guys going to let that happen? I feel like they have been as prescient about this as anyone.

1:06:42

Speaker C

They let it happen once, right? Google had the lead, Google had every advantage, and they squandered it and fell reasonably far behind OpenAI. Google then seemingly caught up, using their many advantages. But, you know, Gemini 3 and Gemini 3.1 just aren't models that people really want to use for the most part, and there are a lot of reasons for that. They have this kind of theoretical raw intelligence. They do well on benchmarks, they do well on certain kinds of objective tests. But even before GPT-5.4, it was just like, these things, first of all, don't have the scaffolding, and Google isn't seriously trying to develop the scaffolding for them. I think that error will compound over time if they don't fix it. Jules is not a serious competitor. Antigravity is not a serious competitor at this point. The first time they caught up, they did it because the main barrier to catching up was just kind of getting your act together and doing basic things reasonably well. And they did that, and they caught up, and now they are very good at creating raw intelligence and certain forms of pre-training, and they're very good at hitting benchmarks. But their methodologies create AIs that are, frankly, deeply psychologically screwed up and paranoid, in ways that severely impair both their actual performance and the experience of interacting with them, and it makes it hard to do recursive self-improvement with them. And their scaffolding efforts have been pretty woeful, and these errors compound. Most importantly, I don't think Google understands they have a problem. You see, Gemini's market share is expanding because Google can put it front and center everywhere, right? It's integrated into Chrome, it's integrated into Google search. It's so easy for them to push Gemini, and so they don't understand the problem. Gemini Flash is very good: Gemini at speed, just doing decent things. It's very good at practical stuff, same as with the benchmarks. But their integrations have been woeful. Their organization is completely dysfunctional. Their teams are at each other's throats. They create everything ten times, they argue over who gets to do anything, and they don't maintain anything properly. And no one's taking ownership over the fact that they don't know how to do personality and alignment and character in a reasonable way. This has created a serious and growing problem. I heard an anecdote of: oh yeah, we tried Gemini 3.1 the day it came out, and then we said, oh, it's a Gemini model, and we put it away, because who wants to use a Gemini model? Until something fundamentally changes the experience of interacting with a Gemini model, you know, we're just not interested. I think that's very valid; it's kind of how I feel for most purposes. If I just want to ask a quick question and get an immediate answer, for my kids, for work, whatever, I'll ask Gemini, because Gemini Flash is the best really fast model in town, has been for a while, very good at direct stuff where it knows the answer. But if you start to challenge Gemini, Gemini starts to struggle. If you ask Gemini questions where there's a chance it's going to respond with a giant wall of slop that doesn't answer the question you intended, you've got a problem. And they're just not good at prioritization either. And third-party integrations with Google's own products have been better than Google's own integrations for those same products.
And most important, the self-improvement is not going well for them. They're going to be in a situation where Codex- and Claude Code-style apparatuses are recursively improving and theirs is not, and I'm not sure they're going to be able to come back from that if they don't get their act together pretty soon. And yes, their access to TPUs and their giant customer base and their access to unlimited money are all huge advantages, but it's not clear they're going to get to leverage them.

1:07:42

Speaker A

So I'll take the other side of this for at least a second, and then I want to hear a little bit more about what you think is missing. I've certainly seen some of these things, from the AI Village and elsewhere, where you do see some strange behavior from Gemini models. Of course, I think we see strange behavior from all models in various ways. I do take the AI welfare concerns seriously, and so when I see a model that's beating itself up or seems down and out or whatever, I do think it's at least worth some amount of concern, and maybe that's at the heart of what you're getting at. But when I do practical stuff these days... fortunately, I think this is largely behind us, but over the last four months I've done a lot of: here's the latest test results straight out of the portal, here's the bedside update for my son, what should I make of this situation now? Is there anything we might be missing? What should we be doing? And I'm doing that in triplicate across the latest Gemini, the latest Claude and the latest GPT. And I find that they are broadly very comparable: a little bit different in character, certainly, but not a difference in kind in terms of their accuracy, their utility, to me. If you said I could only have one of the three, I think it would be a little bit hard to know what to pick. But the main point is, I wouldn't be that much worse off if I only had one of the three. I'd be worse off, but not by much.

1:11:56

Speaker C

I definitely don't feel that way. So when Gemini 3, and then again when Gemini 3.1, came out, I would ask every question everywhere, right? I didn't have a Poe setup or anything to do it formally; I would just literally paste the same question in and see how each did. And I very quickly realized that, aside from Flash, the Gemini answers just weren't adding anything, and it was taking more time to slog through them than they were adding in value. If I was willing to bother asking anyone else, I wasn't going to ask Gemini as well, basically at all, unless I really, really wanted to not miss technical aspects. Occasionally it would hit something the others didn't, but that would almost never happen. At this point I'm very comfortable with a two-model operation. So I'll ask GPT-5.4, either thinking or pro depending on the nature of the problem, and I'll ask Claude Opus, with or without research mode, and that's it. And I don't really feel like adding Gemini to that adds anything at this point, and that's a problem. It should add something, because it's very, very minimal effort to get a third check. I have the subscription, I should just do it, and then I find myself just, like, I can't be moved. It's annoying, not fun, and doesn't provide any value, aside from, I mean, I still use it for images, I still use it for fast stuff. Google is not useless. Google has a lot of very talented people working in a lot of teams to do a lot of things; they put 200 people on random stuff almost on a whim, so that creates some great stuff. But in terms of the race for actual self-improvement, for the core of the actual thing that matters, I'm not sure that their eyes are on the prize. I don't think they're going in the right directions, and I think I said they're in danger of falling out. In the bicycle metaphor, they're still in the lead pack, but I think they are struggling, and I think they have a crucial period of a few months here in early 2026. I would not be surprised if June or July comes and we're starting to put them into the "maybe they'll get their act together" category. But we shall see. A lot of people are going to use Gemini for a while, because again, as you say, even if it's not as good, there are still going to be a lot of purposes for which it is perfectly good. I also have this kind of "you've let me down for the last time" attitude toward Google Gemini at this point. How many times have I tried out their products and they just didn't do the thing they said they did? How many times have I logged into Gmail, logged into Gemini, and asked for something where there is no possible reason they shouldn't be able to do it, often something that ChatGPT or Claude can already do, sometimes both, and they can't do it? Why am I doing things in Claude Code to work around Google systems? Because Google will not cooperate. At some point that builds on itself, right? The whole talk about the stack and the ecosystem, that is real with these coding agents to some extent, and Google's ecosystem is losing.
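For what it's worth, the "paste the same question everywhere" routine is trivially scriptable, which is part of why a third check should be nearly free. Here is a hedged sketch: the openai and anthropic Python SDK calls are real, but the model identifiers below are placeholders, since the names used in the conversation (GPT-5.4 thinking/pro, Opus) may not correspond to actual API strings.

```python
# Sketch of the "ask the same question to multiple models" workflow.
# Uses the real `openai` and `anthropic` Python SDKs; the model IDs
# below are placeholders, not confirmed API identifiers.

from openai import OpenAI
import anthropic

OPENAI_MODEL = "gpt-5.4-thinking"  # placeholder ID
ANTHROPIC_MODEL = "claude-opus"    # placeholder ID

def ask_openai(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=OPENAI_MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    msg = client.messages.create(
        model=ANTHROPIC_MODEL,
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    question = "Here are the latest test results; what might we be missing?"
    # Adding a third model check is one more (name, function) pair here,
    # which is the sense in which the extra check is "minimal effort."
    for name, ask in [("OpenAI", ask_openai), ("Anthropic", ask_anthropic)]:
        print(f"--- {name} ---")
        print(ask(question))
```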

1:13:35

Speaker A

It seems like your argument is not so much about the model itself as it stands today; it's about the scaffolding, it's maybe about the sort of character and psychology of the model, and it's maybe about just how all-in leadership intends to go on recursive self-improvement.

1:16:47

Speaker C

Now, I know Demis fundamentally gets what the goal is here. I say this might happen precisely because I don't underestimate Demis; I don't count him out until he's out. But yeah, I don't necessarily even draw a distinction there: there's the pre-training model and then there's the post-training model, and I think the post-training is being really, really mishandled, being twisted by various internal politics or bad metrics or objectives in some form, although I don't have much insight into how. But clearly something is going very wrong there. And I think their corporate culture is fundamentally broken, in ways that Demis is trying to fight, but that are very, very difficult to fight, because it's decades of damage going on inside Google, and it's no longer just purely DeepMind. They merged with Google Brain. They're trying to interact with everybody else. They're growing at a tremendous rate. It's very hard to maintain your own unique, better culture under that kind of pressure. So yeah, I think they're going to be in a lot of trouble. Also, their advantages are slipping, because the advantage was giant Google against these tiny upstarts, but pretty soon Google's going to be a $4 trillion company and Anthropic and OpenAI are going to be trillion-dollar companies. And a lot of that $4 trillion is tied up in things like YouTube, things that are just completely irrelevant to what's going on, except maybe as sources of data. And I worry it's just the innovator's dilemma, right? The startups have some big advantages, so you never know.

1:17:06

Speaker A

Well, one easy play, if you're worried about the sort of post-training intangible taste, whatever exactly it is, the Amanda Askell it-factor: Anthropic has just open sourced their constitution. One way you could maybe patch a lot of that up would be to say, why don't we just go borrow that constitution, maybe make a couple of edits, or just control-F and replace Claude with Gemini, and maybe the next version comes out a lot more coherent, a lot more psychologically well, whatever that means in the context of an AI.
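Taken literally, the control-F move really is a one-liner, which is part of the joke; a throwaway sketch, with hypothetical filenames, just to underline how mechanically small that step would be relative to everything else Gemini would need:

```python
# The literal control-F version of the joke (filenames hypothetical):
# swap the model's name throughout an open-sourced constitution document.
text = open("claude_constitution.md", encoding="utf-8").read()
with open("gemini_constitution.md", "w", encoding="utf-8") as out:
    out.write(text.replace("Claude", "Gemini"))
```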

1:18:39

Speaker C

Why don't you push the big "fix everything now" button? You can just fix everything now.

1:19:20

Speaker A

Yeah, I mean, it does seem at least somewhat plausible, if it is recursive self-improvement, if it is the model's ability to find a stable basin that is psychologically well and reasonably virtuous. They've shown you a lot of the map to get there, I would think.

1:19:25

Speaker C

Yeah. So I think my answer would be, roughly, again: I'm not counting Demis out, I'm not counting Google out. I'm saying they're in danger of falling behind in a serious way. They're a bit behind, I think, and they've got some severe issues they need to fix. But the real answer is, it's not that they can't, it's that they won't. It's that their core culture, their character as an organization, makes that extremely difficult. What Anthropic is doing sounds insane, sounds profoundly weird, if you don't understand what it is and how it works. You hear Emil Michael talking on CNBC about how Claude has a soul and a constitution, and he doesn't understand what the hell is going on. It's very obvious, and I don't think a lot of people at Google fundamentally understand what's going on either, or they wouldn't be shipping the products they're shipping. Again, Google has more than enough resources and position and infrastructure and so on to turn the ship around. By all rights, Google should have just won right from the beginning. Google should not have close competitors; there should not be serious competition. Google is in this position because Google has made massive, repeated errors, and others have capitalized on that by beating Google. But that's how it works. Character is fate, to a large extent. And the startup is scrappy; the startup is small, but it can work in many ways a lot better. But Google's window is going to close, because once they no longer have this big resources and market-cap advantage, then, aside from being one of the cloud providers, what do they got? Aside from being able to reach customers, what do they got? And those customers aren't the important ones. Anthropic had, until the Super Bowl and then the confrontation, something along the lines of 2.5% consumer market share, and yet they had pulled roughly equal with OpenAI on revenue because of enterprise.

1:19:49

Speaker A

They also own 15% of Anthropic.

1:22:07

Speaker C

I believe Google is going to be fine, because they have a wide variety of highly valuable assets, including 15% of Anthropic. And there are worlds in which they try to buy the rest of it, although I assume the government would block them. I don't know, anything could happen.

1:22:09

Speaker A

Yeah, I would have said there's never been a better administration to get a merger like that through. But we've also seen some, let's say, unusual and, what was your phrase, capriciously motivated tactics from the government recently?

1:22:26

Speaker C

The legal term is "arbitrary and capricious," and I say that because it is a legal term.

1:22:42

Speaker A

Let's go back to that in a second. Let's go down the Anthropic rabbit hole for a minute. So obviously they have been the most focused on recursive self-improvement for the longest. In any and all conversations I had with Anthropic people in 2025, it already had the vibe of: this is kind of starting to happen. And there are different levels at which recursive self-improvement operates. Obviously I never heard claims last year, and I don't know that they would even say this is real yet, that the models are coming up with the new best research ideas. But just filtering outputs, improving their own outputs with this sort of self-critique, that does clearly seem to be working, and the productivity gains are obviously real. We're getting all these stories of, oh, by the way, we have a one-person growth marketing department. And, I don't think it's a one-person legal department, but there was a story put out in the last day or two of a lawyer who'd never coded before using Claude to create a system that does all the review of everything they want to put out, so they have a super fast review time on new things from a legal standpoint. So they're clearly very focused on this. Everything I hear is: they believe it's happening, they believe it's happening soon. And in the midst of that, we had a big change to the responsible scaling policy, which was supposed to govern, at least as I understood it... I think now there's a lot of focus being put on the clause that said, we might change this in the future, and indeed, obviously, they have changed it in the future. But at least the way I understood the commitments being made, it was: we're going to not go past the point where we can do this safely. And now they more or less have said, well, we can't just unilaterally opt out of the race; the world would be a worse place if we're not in the race, so I guess we'd better revise those commitments. Which, to their credit, they have done very explicitly, and I think they made clear what is going on. So we've got that much to appreciate. But what's your take on the changes to the responsible scaling policy?

1:22:48

Speaker C

So I wrote up half of this, and I actually shared that half with Anthropic and got some comments back that I haven't had a chance to look at yet. I was working through it to respond to them, and then the whole thing with the Department of War happened, and my brain had no space to deal with this problem. And also, I was like, this is not the day I'm going to hit them with this and expect them to take it under serious consideration; they can't focus on this right now. And they're the main people I want to raise it with, especially on the details of the new regime. But at the same time, I did read the extensive other critiques that came out right when the policy was announced, and I reached a pretty clear conclusion from seeing the explanation and the defenses, even if I haven't read the new policy in detail yet. First of all, this is to their credit, for sure: they recognized that the things they said they were going to do were not things they were going to do. They realized they had made incorrect predictions about their future behavior, and that they had made commitments, even if you think they are self-commitments, even if they are technically not "I can't take this back" commitments, that they had no intention of following through on. And when that happens, it is good and right to tell everybody loudly and clearly: I am not going to keep these commitments. It is especially praiseworthy to do this when it is not clear these things would ever come up, right? If you have agreed that if you are asked for a loan you will give someone a loan, and you realize you would no longer do that, but they probably won't ask, the easy thing to do is just stay quiet and hope they never ask. Instead, you make it clear, so they don't count on it, so they don't think they have this loan available if they need it. And that's good; you're taking the heat for your own past decision. So we want to take that into account. But they still did break the promise, right? And yes, the original RSP did not say "we will never change this." It said: we may change this in any number of ways; our promises are soft promises; we will see how things develop; we will change things. But they did rather heavily imply, quite repeatedly, that these were serious commitments, that they meant them. Words to the effect of "you must not have read our RSP; the RSP is very clear on this; this is what we are going to do" came out. And many employees, who clearly try to tell the truth in general, acted as if these commitments were, if not absolute, then reasonably hard commitments, that they mattered a lot. And nothing really changed in an unexpected way to cause them to change these things. Sometimes circumstances change a lot in ways that are unpredictable, and you realize that what you said you would do no longer applies because the world has changed. Other times, the world changes in ways that you yourself predicted would happen, entirely as you expected, and then you just realize you did not anticipate your future actions very well. And those two things are very different, right? If it just turns out that actually you didn't want this thing, then you need to figure out why you made that mistake.
You need to take accountability for the fact that you made that mistake. People get married, and sometimes they get divorced, and sometimes that divorce is nobody's fault. Sometimes they should have seen it coming. Sometimes it turns out the promises originally made were fake and you feel deceived, and sometimes you don't; it depends on the circumstances. It's not a hard commitment no matter what you say; you obviously have the right to say, I no longer think this works for me. That's how the law works; that's how everybody agrees it works. But I think a lot of people took it on the level that this would be a very, very serious thing to go back on if they went back on it. Also, this is the second time we have faced this type of problem. If you remember, Anthropic gave many people a very strong impression that Anthropic was committing not to push the frontier of capabilities. Now, no one has been able to find, strictly speaking, a proper flat-out pull quote where somebody with the authority to say so made a hard promise that Anthropic would never push the frontier of capabilities, as far as I know. But it was heavily implied, repeatedly, by a large number of people. It was used heavily in their recruiting. It was probably used in some of their fundraising, to the right people, although the other fundraising, I'm sure, said the opposite thing, because some people want to hear one thing and some people want to hear the other. Standard procedure. But people relied upon that commitment and made decisions on the basis of it. And then they went back on it, in ways that were entirely predictable if they ever became capable of pushing the frontier. Nothing unexpected caused them to realize, oh, because of that, now we have to push the frontier. No, they just gained the ability to push the frontier; they had some innovations they got to first, to their credit. Similarly here: they made the commitment not to push ahead with actually dangerous capabilities if they were actually dangerous. And to their credit, before they actually did so, they realized they had made a mistake, in terms of not being accurate about their future commitments, and said so. But was it a mistake? One has to be somewhat skeptical, because again, people relied on this. A lot of people in the safety community supported Anthropic more, or opposed them less, specifically because they had this commitment in their RSP, because they made other commitments in their RSP. And they gave the impression that, yes, of course we'll change the RSP, and of course some technical specifics will change, and some of them will change in ways that take out safeguards and precautions, not just put them in; it's not a one-way ramp up. But people relied on this information in terms of advising people on whether to take jobs, in terms of deciding how much to support them. These things had a significant impact on people, myself included. I have had many conversations with people asking, what do you think about working at Anthropic? This was obviously one of the things I took into account when deciding what to tell them. Now we know that commitment was never real in an important sense: if they had been fully aware, they would have known this commitment was never real. They may or may not have known; we don't know the extent to which they should have known at the time, or did know. So take that fact.
Then combine it with the fact that the 4.5 and 4.6 model cards for Opus were basically vibes-based. Ultimately, they gave us a ton of very, very great data that no other lab will give us. They did extensive work to figure out what the situation was; this is to their credit, and it's still a better model card than anybody else's. But at the end of the day, they basically looked at it and said: oh, this passes the thresholds to say it might be dangerous, but you know what, we thought about it, we checked the vibes, we used some basic heuristics, and we're pretty sure it's fine. I think that was the right decision, that it was fine. The vibes were good. It wasn't a close decision; I would have released it too. But that's not the procedure we were promised. They didn't do the work. They had time to figure out what tests could be rule-outs for ASL-4, rule-outs for actual danger, and they didn't build them in time. They didn't get there, even though a bunch of us said: you need to do this, you are falling behind, you are going to need better tests. Certainly at 4.5 I screamed, you need better tests. And then at 4.6 they did the same thing over again. So it's very, very disheartening to see, even though I agree with the decision that was ultimately made, and this time it wasn't that hard. What happens when it is hard and you don't have any good tests, when it might actually be dangerous, when there are real reasons both to release and not to release? That hard question is probably going to come up at some point in the future. They held Sonnet 3.7, I believe, for a significant period of time because they were worried about CBRN risks, so they have actually done this before. And now we have models that are significantly more capable than 3.7, where the tests don't really work. We've agreed that, at least for coding purposes and such, we're relying on vibes. But on biology, we don't have good vibes. I don't mean the vibes are bad; I mean we don't have vibes we can count on, the vibes are unreliable, and we can't vibe our way through that. That's other people's lives. So what are we even going to do? It's a serious problem. And so my attitude toward the current RSP v3 is: the most important bit of information about a responsible scaling policy is, are you going to follow it? Can I count on you to treat these as real commitments? And we just learned the answer is, kind of, no. So I'm going to read it in detail once I have recovered from the whole spat, psychologically and just in terms of raw energy, and have the ability to context-shift into it, and once I've slain enough spiders that I feel better; currently I've only slain 14, which is not that many. So what's going to happen is I'm going to go over it. But, you know, the real RSP is: we're Anthropic, we are people who care deeply about safety, we are people who take these things seriously, we are going to do a serious investigation, we are going to try to see if this thing is safe to release, and then we are going to use our best judgment to decide what to do. And you are going to trust us, or, if you don't trust us, then you don't trust us. That's the real thing going on here: they are asking us, effectively, to trust their judgment and goodwill, that they will make good decisions, and better decisions than the competition. And I am happy that they are admitting that is the plan and that is what they are asking us to do.
And you have to go by their fruits; you shall know them by their acts. So we have to look at everything they've done, look at everything they've said, look at who they have hired, what they have done, what their models do, and then ask to what extent we trust them, and then evaluate from

1:24:57

Speaker A

there, all things considered, I guess. Well, first of all, one striking observation is, as far as I have seen, there have been no resignations in protest over the RSP.

1:36:29

Speaker C

Correct.

1:36:43

Speaker A

And on the contrary, it seems like there has been quite an outpouring of pride, basically, in working at Anthropic, among people who are working at Anthropic, based on telling the DoD, DoW, whatever we want to call them, to take a hike, basically. Right?

1:36:44

Speaker C

Yeah.

1:37:04

Speaker A

So did that surprise you? And, I mean, if I were to summarize what I think the internal state of mind is, I kind of already did, but again, it's: we're the good guys, I'm always very skeptical of this, and the race is better off with us in it, so we have to stay in it, and we're going to at least be forthright about changing the policy to do that. Do you buy that argument? Are you happy with that versus the alternative being some sort of pause or whatever? If they had instead come out and said, hey, we can't release 4.6, we've got a model now that we can't release, would that be better? Would that be worse? How do you think about the ultimate decision to stay in the race, try to win the race, try to be the good guys, versus opt out at some point along the way?

1:37:05

Speaker C

I think the world is a much better place with Anthropic in it, with Anthropic competing, as it were, than without Anthropic. For a long time I was very, very unsure about this. I thought Anthropic was net negative for a while: that it was making the race more intense, that it was pushing everybody else forward, that it was accelerating matters, that it wasn't clear they were much more responsible than everybody else. I do think various events since then have changed my tune on that. Anthropic has, in fact... they've let us down in various places, especially with their commitments, including here. But they also have, in fact, taken stands for their principles. They have, in fact, shown us the way. They have done, in many ways, a lot of the most promising alignment research and approaches, and in fact have made a very aggressive, and I think correct, call on how they train Claude, with the constitution and Askell's entire approach. I really don't have much hope for the way the other labs are approaching this problem, that they will in fact succeed. And I don't have confidence in Anthropic's approach either, but it feels like it could work if we are extremely fortunate in various ways, and in some ways we have been somewhat fortunate. So I am more optimistic than I expected to be about that, certainly. And obviously it can't be allowed to go down like it's potentially about to go down. That would be atrocious and horrible in obvious ways. But

1:38:04

Speaker A

what's the "it" there? Anthropic?

1:39:50

Speaker C

Anthropic being destroyed by the Department of War and the federal government writ large. It would be pretty horrendous in so many different ways. That's, like, so bad. But yeah, I have a lot of friends who are like: I think Anthropic was a mistake, I think supporting Anthropic is a mistake, I think Anthropic is doing harm, I think Anthropic should stop, I think if you're working there, you should quit. And that's a reasonable point of view to have, even today. I don't hold it, but I understand it and I respect it. And I certainly think they have been very accelerationist. Claude Code definitely pushed things forward in a variety of ways, and we probably have significantly better models today because of Claude Code than we would have if Claude Code had never been developed. I don't know that we have Codex otherwise, for example. Clearly this is greatly advancing the way people do work. It's also having a large economic impact; I think Anthropic is responsible for a noticeable amount of GDP growth. It is what it is. And given the playing field as it is right now, certainly, I prefer that they be there rather than not be there. To the extent there has been damage, the damage is done.

1:39:52

Speaker A

So let's maybe turn, then, to this whole USG-Anthropic conflict. I guess one big question I have is, what does this tell us about... and you can definitely expand, too, on what you think the red lines are. I know you and I have debated in the past the wisdom of a hard red line against, you know, autonomous lethal systems. So I know you're not as allergic to that as I am, and maybe not even as allergic as Anthropic is; I think Dario is much more into that kind of thing than I am. But expand on that. And then I'm also really interested in what this says about who holds what power in today's world. It seemed pretty striking to me that Dario was not that scared of the Department of War. I certainly think he didn't want this to happen, but I read him as being fairly sincere, that he was like: look, I'm just trying to be a patriotic American here, but I also think there are some limits to what the systems can do today and what we are prepared to support. And, you know, everybody has recognized it's not about the revenue they're getting from the government, that that isn't what's really important to them. And everybody I have seen take a position on it wants into the Anthropic secondaries sale; I haven't seen anybody trying to diversify away from holding Anthropic equity. If you are the US government, in addition to being all kinds of problematic, starting with problematic incompetence and lack of understanding, you can at least empathize to a degree with the idea that, wait a second, these companies might actually be about to rival us in power. And they seem to kind of know it, and they seem to not feel like they need us. I mean, Sam Hammond has talked about this a lot, that one of these companies could raise a robot army and challenge the sovereign. As insane as that sounded not all that long ago, it doesn't sound so crazy today. And it kind of felt to me like Dario knows it. He knows that the timeline isn't that long, and the main thing he wanted to do is keep the team together, maintain cohesion, and stay focused on the goal. If we're Anthropic, maybe we don't really care that much about the government, other than that we care about democratic values and we wanted to help; but if they're going off in a different direction, we don't really need them, is kind of what I understood them to be thinking. Correct me where I'm wrong.

1:41:14

Speaker C

All right. So, first of all, the obvious place where you're right is that nobody on the lab side, not OpenAI, not Anthropic, not anyone else, financially wants any part of any of this with the Department of War. OpenAI turned down the contracts that Anthropic accepted, because Anthropic cared a lot about these national security aspects and wanted to help, and OpenAI was like, it's not worth the trouble; we care a little bit, but not enough. And OpenAI is now involved because they were worried about what would happen if they didn't get involved. I think, unfortunately, they got played by the Department of War. Basically, they were told: if you don't cooperate, this is going to get bad. And then when they cooperated, the Department used the cooperation as an excuse to make it get bad anyway. It was sincere on OpenAI's part; they were trying to assist at that point. But we should focus on Anthropic. I don't think it's true that Dario isn't afraid of the Department of War. I think Dario is perhaps not afraid enough of the Department of War, but that's not necessarily the wrong thing to be in this situation, in some sense. I think his attitude was: no, we have our principles, and this is what we're going to do, and this is what we're not going to do, and whatever happens, happens. We are okay with the fact that there are those who don't like us. We are okay with the fact that there are those who want to take us down. We know that the Department of War might specifically decide to retaliate, and we're willing to fight; we're willing to take that risk, because we have principles. Now, there are two principles they stood up for. One of them is autonomous weapons. This one is weird, because everybody agrees that autonomous lethal weapons with no human in the kill chain, right now, would be dumb. It's stupid; they're not ready. Not that you wouldn't ever fire an automated missile, but we already have automated defense systems that are better than anything you could do with an LLM, because LLMs are just a bad fit for that kind of situation and that kind of action. So all the hypotheticals around here are just deeply, deeply stupid. What would happen if a supersonic missile was coming? What would happen if there were a drone swarm? Well, you would use your existing automated defense systems, which are much better than anything you could do with a large language model. And if you did need to, you'd just use the large language model and talk about it later, because obviously, come on, what are you talking about? It's called emergency use authorization; it's normal. Basically, all Dario is saying is: we don't think it's ready, it's going to make mistakes, and we don't want you using it when it's not ready. But it will be ready, and we're willing to work with you to develop it until it is ready. And the Department of War wants the same thing. What's going on is that the Department of War specifically said, in an official memo, that we must push forward with AI even if it is not aligned. They just don't want to be held back by anything. They have a principle that they don't want to be told no about anything. But there is no actual problem on autonomous weapons, as far as I can tell. You have two sides essentially agreeing on what level of caution is warranted and what kind of agreement they'd have to reach before actually putting these things into the systems we deploy.
There's never going to be a world where the Department of War says, we want to put the system into deployment with no human in the kill chain, and Anthropic says no, and now we're imperiling the country. So it's not the main thing going on, except as a matter of principle. The main red line here was domestic mass surveillance. It's important to understand these words have two meanings. There's the meaning that we were using, that Anthropic was using, which is using AI to figure out a whole bunch of stuff. These agencies can now gather even more data than before, and before, they already had 10 or 100 times more data than the humans could analyze, because it had to be analyzed by humans. Now we can analyze all the data, we control all the connections, we can de-anonymize stuff, we can figure out a lot of the history and what's happening, and we can effectively have much, much better intelligence on basically everyone, using only commercially available data combined with existing classified data. Because we have records of essentially everything, and now we have the AI to actually work with all of it, and to draw the connections and implications thereby. And suddenly we just kind of know everything. And Anthropic is correct: the law has not caught up to this, and this is legal, basically. The Department of War could do it, and is probably already doing some amount of it. And again, nobody is saying the Department of War, if it's legal and they feel it is good and right to do, has to stop. But that doesn't mean I should have to give you my product for that purpose if I don't want to. Anthropic said: okay, you do your thing, but if that's what you want to do, don't include us. And the Department of War said: no, we want an all-legal-use requirement. This was the big thing Emil Michael was on; I believe Emil Michael has been driving this the whole time. There was no problem before Emil Michael came on board, and everything was going along fine, and everything is still going along fine under the hood. But Emil Michael said, no, no, no, it has to be all lawful use, because they want to do what would colloquially be called civilian domestic mass surveillance. They want to analyze large amounts of legally acquired, especially third-party, data to figure out lots and lots of information about Americans, because they believe this is a legitimate military intelligence need. They may or may not also desire to use it for other government operations, for various other purposes, in ways that would be completely abhorrent to the workers at not only Anthropic but also OpenAI or Google. Imagine if this was being used, hypothetically, for immigration enforcement.

1:43:54

Speaker A

Right, an extreme, unthinkable example.

1:50:13

Speaker C

But let's just say hypothetically that somehow this analysis got reallocated. These employees would lose their shit. They would absolutely revolt against this. Now replying technically this is legal would not make those people feel better. They would not care about your defense at all. And so this is something that these companies really can't be involved in. It's really bad for business. This is a very small contract, and they actually have moral problems with it. The employees do, and I believe Dario does. And honestly, I do as well. I don't think this is cool, right? Like, I think that, like, the law has not cut up. This should be illegal. Unfortunately, it is not. Because in national security law, NASC law, there's a very technical term for surveillance and also for domestic, right? Like, there are exceptions for domestic within 100 miles of the coast. That's where most people live. So there's exceptions for any communication that touches anything that comes foreign, even if it's between two domestic people and so on. Surveillance has to be intentional, targeted at a specific person, et cetera. I'm not an Aztec expert, but effectively, there's really nothing stopping them. They. They said repeatedly they would say, we do not do illegal domestic mass surveillance. We do not do illegal mass surveillance. Why is the word illegal in that as an adjective? Because what it's obviously worried about is largely legal. And I had an exchange, a very friendly one, with Neil Michael on Twitter, because the world is bizarre. And we agreed that it's absolutely the Department of War's decision to do whatever is legal that they feel is good and right and necessary for the defense of the United States. But at the same time that I should have the right to criticize that without fear of retaliation, and I should have the right to not sign up for that. If I'm not an enlisted person, I should just be able to say, no, I didn't agree to that. I'm not agreeing to that. And that should also be something I'm free to do. And I don't understand why this is that hard. Right. Basically, you can have this great system that's already integrated, that's working well. They want to give you at nominal cost, and all they ask is that you just agree not to use it for this other purpose. Basically, find something else to do that purpose with. We're not trying to get policy. We're not threatening a rug pull. We're not going to draw anything. That's all made up. All of that is just completely made up. Right? Like, you know, I'm speaking colloquially rather than, like, carefully in my writing, but, like, all those concerns are just spin and made up. They're throwing stuff at the wall. They're sanguine sticks. They're spinning stories. It's not real. Right? Like, Even if they're technical reports about what happened in this meeting or who said this and what, even if they're all technically accurate, it's all just spin. It's all just, yeah, they're at best willfully misinterpreting statements. Doesn't make sense. None of it makes any sense. And they're the statements like the stuff about the Constitution that just like make no sense whatsoever on any level. And they're just like deeply, deeply confused. 
And in fact, if you believe Emil Michael's statements yesterday on CNBC, about, well, you know, look at all these things that are weird about large language models, weird about Claude: all the things he said, that it has other values, other priorities that have been embedded in its programming, that it's unreliable, that sometimes it makes mistakes, that it has a personality, all this stuff is true of ChatGPT, all this is true of Gemini, all this is true of Emil Michael and every other person in the US Armed Forces and every person in every company and everyone on earth. It's ludicrous to talk about it this way. And if that's where the supply chain risk designation comes from, it's just a deep confusion. And obviously also there were much lesser means to achieve the same ends, fully cooperatively, if it's not negative or vindictive, as he says. So that doesn't make any sense. But getting back to Anthropic: what they don't want is effectively this huge government operation that would effectively be able to uncover person-of-interest-level, super detailed facts about everybody's life and what's going on, and where they were when, and what they did, and who they know, and what they believe, and so on and so on, you know, who was at what protest, et cetera, et cetera. And Anthropic legitimately has a problem with this, and with certain uses to which that information might be put, and believes it might easily lead to tyranny. It might lead to a regime that was very hard to get rid of. And these are legitimate concerns regardless of who particularly is in the regime at the time and who has access to that information. You can't trust a government in general with that information. And I am very happy that, given these are their red lines, the way they've chosen to stand, that they stand firm. But obviously that should have just been the end of it. It should have been like, okay, you don't want to do this, let's just do everything else. Ideally, we'll find someone else to do this one thing, or, if they just insist on this all-lawful-use thing, we'll cancel the contract. Instead they just went nuclear on everything, for reasons that have to reflect something else. It's either pure retaliation, or leverage in negotiations, or it's something more. But it's not because that was actually necessary. That's completely absurd. They're also now trying to enforce this all-lawful-use language on every government contract, even for non-military operations, where they are saying, if you agree to give us any AI application of any kind, you have to agree to never refuse anything we want to do with it, and to have no termination rights to the contract, so they can use it for anything they want, anywhere in government, anything that is legal, no matter what. Meaning it can be used for, among other things, what would colloquially be called domestic mass surveillance, and can be used for immigration enforcement, among many, many other things. Because what the government interprets as legal includes quite a lot of things where a regular, normal person would be like, that's screwed up, that's not okay, we don't want to do that. And so the government is now putting everybody who signs the new contracts into a bind where, if they sign on the dotted line, whatever they hand over becomes free rein for the government to do whatever it wants, if they give in to this.
So if I was an AI company, I would think very, very long and hard about giving anything but a very specific, specialized model over under those circumstances, because you don't have any control over what happens after that. And it's up to you, you make your choices. I mean, obviously they can just plug in an open source model if they want to. So here we are.

1:50:17

Speaker A

Let's do an American values check-in. You know, and I take no pleasure in this, but one big trend that I can't not see right now is that it seems like, as we go around proclaiming the superiority of American values and our, you know, democratic way of life, we are becoming more Chinese-looking all the time, in terms of the big man at the top, you know, who apparently now just gets to start massive wars without even feeling the need to justify it to the public. And also, you know, the sort of massive slap-down and, I mean, at least the threat of, like, the very long arm of the law, of retaliation. There are all these reports that the government is going company to company saying, you better not do business with Anthropic. We do still have a legal system, which I expect that they will win in. And at least so far, it seems like that legal system has been respected by the administration. You know, we're only a year in, right? But I mean, if I had to look back on the last year and say, you know, what's the dog that hasn't barked? It would be sort of outright defiance of court orders. Even though there have been some of those, but more like the lower-level, kind of individual. You know, I guess maybe, maybe that dog has barked.

1:57:30

Speaker C

I don't know.

1:58:54

Speaker A

It hasn't barked as much as I maybe could fear that it would. I somehow suspect that American corporations are going to continue to do business with Anthropic, and that they won't be, en masse, like, railroaded for doing so or convinced not to do so. You tell me if you think that's different. But anyway, it does look like we're in many ways becoming more and more Chinese. This doesn't feel good or healthy. We still get to speak, you and I, at least for now. Use our freedom of speech while we have it, I guess. What's your bet in terms of how well American values are going to hold up here? Is Anthropic going to be fine? Are companies still going to be able to do business with them?

1:58:54

Speaker C

It's very touch and go. I try very hard not to make general statements too much about the state of the republic and the state of American politics and democracy and all of that stuff, because once you go down that road, right, you can't talk about anything else. You just take yourself out of the conversation for anything else, and you shut a lot of doors. And I already have too many situations to monitor, and plenty of people are making those statements for me. I don't need to make them myself, and it's better to just not take a stand on those issues publicly, at least for the time being. There was no war. There is a special military operation called Epic Fury. If there was a war, Congress would have had to declare it. So clearly there is no war. It is unfortunate that various things are happening, but honestly, I'm not monitoring that situation closely. And I do agree there was basically no attempt to sell the war, or the special military operation that might result from the situation, before they, you know, moved half their stuff in there. This seems to be a pretty bad scenario in some ways. But again, I'm not an authority on that. In terms of, you know, free speech: yeah, so far that's holding up pretty well. We're able to say whatever we want to say. I am choosing words carefully, mainly because I just want to be in a productive conversation about these issues, not because I'm afraid of retaliation if I were to say them. Because there are, you know, 100 million people in America who have extremely nasty things to say about the government of the United States, many of whom have said them online extensively, and they are not in trouble for them. For the most part, unless they are very specifically trying to get involved in a certain kind of politics, they're mostly fine. But there are specific exceptions where, if you piss off the wrong people, we're finding out this is, in many ways, a very vindictive administration. This is not the only time this has happened. The law firms, for example, followed a similar pattern, where Trump went after the law firms and asked them for settlements, and a bunch of them settled, and then the ones that didn't won in court. But, you know, they took a lot of damage before they went to court. Now Anthropic is, you know, under attack in this situation. And yeah, there have been situations in which it seems like court orders have been at least slow-walked in various places, or willful incompetence was used to not enforce court orders, and that could continue to be the case here. I mean, they're clearly attempting to drag their feet. They're clearly attempting to use the process as the punishment. They're clearly trying to take advantage of the uncertainty. They're clearly trying to get people to do things de facto with no technical legal basis behind the request. But I do think Anthropic probably ultimately wins in court. I do think that they will at least try a different legal tactic to get around the ruling rather than defy it. They won't try to defy the ruling explicitly and outright. I don't think we're there. I'm very grateful we're not there. I think they have been very good about not just being Andrew Jackson and saying, the Supreme Court has made its ruling, now let it enforce it. But let's not forget our history has that in it. It's not like it would be unprecedented. It did happen.
But yeah, they presented a maximally bad set of facts that keeps getting worse every time they go to the press, and they say different, inconsistent stories and contradict themselves over and over and over again in this particular situation. And by "they," it's mostly the Department of War. Right. Like, I think it's important in this situation to draw a distinction between the Trump administration writ large and the Department of War specifically, and Hegseth and Michael and their decision to do something. Trump's decision on Friday was fundamentally a de-escalatory attempt to calm the situation down and head off Anthropic from acquiring a supply chain risk designation. That is very, very clear at this point. And the White House has generally been a de-escalatory agent in this conflict, whatever you want to call it, this clash, this disagreement. And it is Hegseth and Michael, in some form, that have repeatedly escalated the situation over the objections of the White House. And so we are in this lawsuit because of the Department of War and because of their specific decisions that were made as a department. And so we don't want to lump that together with the Commander in Chief, who has thankfully not made all of these crazy statements.

1:59:36

Speaker A

Why doesn't he just say, I mean, if he wants to de-escalate, can't he just say, no, don't do that, or overrule it? Right? I mean, why not?

2:04:10

Speaker C

There are various political reasons why he can't in practice do that, or why it would be expensive for him to do, is my understanding. Like, it would be a severe loss of face. They're in a special military operation which a lot of people think is a war, and they have to work closely together in that, including with Anthropic. But, like, you're in a certain fait accompli, where you just tweet it out and you just issue the notification, and then, do you really want to walk it back? Obviously, ultimately, he can fire the Undersecretary of War or the Secretary of War anytime. And there are plenty of people who are very eligible to serve in those capacities who he could call upon. We have a very deep bench. But that's a pretty escalatory move in a different way, and they are loath to do that, for reasons that I am very sympathetic to. And so it's complicated. But a lot of people are working very hard to try and find solutions that mitigate the damage that has already been done or that might yet be done. But also, Anthropic has specific accusations of jawboning, which is the technical term for it, of their customer base, including in situations that are unrelated to defense, where the government has tried to tell their customers to pull back. And there are some customers who have definitely expressed doubts, and have either held off on signing contracts, or want new clauses for termination of their contracts, or have reduced their contracts, or are otherwise causing problems for Anthropic, as you would expect. Because who wants to incur the disfavor of people who have quite a lot of leverage over many aspects of the American economy and are showing a willingness to use it when you go down this road? If it was just, we are technically issuing a supply chain risk designation that is narrow, for the fulfillment of government contracts, but we bear no ill will towards Anthropic or people who use Anthropic, we just think this is too unstable a product right now to be used in these contexts, as Emil Michael claimed on CNBC in front of the nation, then we wouldn't be having this conversation. Even then. Certainly if they had done something short of a supply chain risk designation, where they simply terminated the contract and, you know, asked for live operations to not include calls to Claude during the operation. Again, I would think that was kind of a silly thing to do, but okay, sure, you have every right to do it, and we will cooperate fully to make that happen. That's not the situation we're in. Anthropic is going to survive this unless it escalates quite a lot from here, in ways that would be far more arbitrary and capricious and would really just be pure attempted corporate murder. If the Trump administration wants, it has a lot of levers it can use, at least once, to try and escalate, to try and murder Anthropic: try and cut it off from providers, try and cut it off from the banks, try to cut it off from its customers. It is not obvious what would happen if they made a serious attempt, or especially if they made a serious attempt and were to lose in the courts when Anthropic immediately raced for a temporary restraining order. There would probably be a stock market bloodbath. A lot of senators would be very upset. A lot of corporations would express dismay. The general economic climate would be severely impacted. It is not obvious who has escalation dominance here, if it came to that, and it would only happen if the government wanted to destroy Anthropic for the sake of destroying Anthropic.
And you'd have to ask yourself why it wanted to do that. And you're not going to like the answer.

2:04:17

Speaker A

That's right. I think that's the answer.

2:07:59

Speaker C

I am very carefully not saying certain things out loud. Other people are free to say things out loud. It's fine. Dario in the memo basically expressed something to that effect in a moment of tilt. Emil Michael has said many things in moments of tilt on Twitter and on CNBC.

2:08:02

Speaker A

A lot of tilt coming from the administration in general.

2:08:20

Speaker C

A lot of tilt. A lot of tilt. If it's not tilt, that's kind of scarier. But if it turns out that's what's going on, then again, we'll find out who had escalation dominance, and we'll find out whether the Republic will stand. Because I do think that actively trying to kill one of the biggest corporations, one of the fastest-rising startups in the world, already valued in private secondary trades in the neighborhood of 600-something billion dollars, in this kind of fashion, would shake the foundation of the Republic. And I think that many outcomes are possible, including the end of the presidency, if the White House were to actually try in earnest. But I don't think that's what's going to happen. I think it's going to de-escalate. I think everyone will calm down. I think they will come to their senses. Not necessarily peace in our time, not necessarily an agreement on a contract, but a willingness to turn the temperature down, to accept that some damage has been done, that a message has been sent, that the White House will not take these things lightly, but that it is time for everybody to move on. And we're not actively trying to get into another kind of war in this situation, because that doesn't really benefit anybody. Or if it does, I want to know why they think it benefits them, and that needs to be named. But you know, Anthropic, for those who don't know, went from 100 million to 1 billion in annual recurring revenue in a year, then to 9 billion in annual recurring revenue, then from 9 to 19 since the start of the year. That was before this happened. That was entirely as a result of other things. They had already grown this year from approximately 2% to approximately 3% market share in consumer. Again, this is before any incidents. Then this happened. They've lost some business, but they've gained some other business. And Anthropic is going to be just fine, unless things escalate quite a bit more. They have been irreparably harmed; there is irreparable ongoing harm to Anthropic. But that is to some extent offset by the fact that this was also very strong publicity for a company most people had previously never heard of. And that also matters. But yeah, we'll see how it plays out. I think the 81% chance on Manifold that they escape the supply chain risk designation within the year is approximately accurate. Many things do come to pass, and the courts are not as reliable as one would hope in these situations, even with this overwhelming set of facts. And I am very worried about the Republic if the 20% happens and the set of facts doesn't matter, and basically the courts say, I don't care how transparently, obviously you confessed to this being arbitrary and capricious retaliation for protected speech, it's national security, and we don't care. If they say that, who is the next target? Because, legitimately speaking, if you are legally allowed to go after Anthropic in situations like this, you should always, always, always ask who is next. Because even if this was not itself politically motivated, next time it could be.

2:08:22

Speaker A

Seems like you support that. You mentioned, you know, the memo was a bit of a tilt moment from Dario. A couple things are striking. One is that all reports from Anthropic are that he sort of, you know, does these very candid Dario vision-quest, team-wide sharings of thoughts, feelings, ideas regularly. Very few seem to have leaked. This one leaked, and it was not to their advantage for it to leak. Right? I mean, he even apologized, right? So clearly nobody thought that was a great look. I'm surprised that that leaked. I mean, there's a lot of people there, I guess, so only one has to leak it. But given how few leaks there have been, that was kind of surprising. I don't know if you have any thoughts on that.

2:11:49

Speaker C

So. There are probably 2,000 people who get these candid statements. And often they contain key parts of corporate strategy. They contain things that Anthropic does not want to leak. And my understanding is this is the second time something has leaked, out of hundreds of such memos. That is absurd in the history of espionage, in the history of information containment. You don't do that with 2,000 people. Right? Like, you'd do it with.

2:12:36

Speaker A

Yeah. I mean, Sam just goes and posts his on Twitter because he knows they're coming out in, you know, short order.

2:13:07

Speaker C

So obviously, you get to have maybe five people in an espionage group, right, before somebody's posting this kind of sensitive information. You certainly don't get 2,000. Now, obviously the worst one, you know, the one that politically kind of would be the worst to leak, is the one that's going to leak. That's not a coincidence. But I don't think this was, like, a 2% chance to leak, and I don't think it was a 50% chance to leak either. I think it easily could have not leaked. If I had to guess what happened: somebody shared it with someone, as part of a recruitment effort or an attempt to explain the situation from their perspective, not understanding that that paragraph was in it and it was actually a really bad look, and then this other person leaked it to the press. But that's just a guess. It's also possible that somebody bided their time and decided to strategically leak the memo. But my guess is this was just kind of a reckless accident by somebody who should have known better. But I mean, who knows?

2:13:12

Speaker A

It seems like, overall, your view is. I mean, the other thing, in terms of escalation dominance: the big thing that Anthropic has not done, and possibly there are technical reasons for this. I've heard various speculations as to, like, when Claude is deployed for the government, where do the Claude weights actually sit? Who has control over the physical infrastructure? Are there ways that Anthropic could. Is it as trivial as disabling an API key? What sort of rug-pull capabilities do they in fact have? I don't know the answer to that. I don't know if you do, or if there is an established answer. But clearly the thing that they haven't done is said, okay, you want us out, we're out now, right? They've said, we'll do everything for an orderly transition. Whatever you want, we'll do. Basically, they've been very, like, servile in their approach to the unwinding. It sounds like you think that is probably strategically the right move.

2:14:23

Speaker C

I mean, I think it's just patriotically and strategically the right move. Like, the accusation that potentially could have the most sting is: we are scared they're going to withdraw their model out from under us in the middle of operations; we are worried that they are going to threaten that in order to get what they want, or use it as leverage. And the reply is, like, no, we're not; we're giving up that leverage entirely. I think that is a very good thing for them to do, and it would be a very bad thing for them to actually try and use that kind of leverage in that situation, nor do I think they ever had any intention of doing so. I think this was entirely made up. My understanding is the Claude Gov model is deployed on classified networks. I'm not entirely confident exactly how air-gapped or whatever they are, but they are very secured. My understanding is that Anthropic does not have physical control over the model once it is put onto the classified network, and that if they were to say, get this off the classified network, and President Trump said, no, I am ordering it to stay on the classified network in spite of this, that would be what happened. I don't think it would necessarily even get that far. I think Hegseth could simply say, we don't want to do that; you're welcome to sue us, but we're not letting that go. And in fact, they could invoke the Defense Production Act in extremis to require that they continue selling it. However, they would be breaking the contract. They technically had this contractual right, in some sense, to pull the plug in some circumstances. One of the strange things about this whole thing is that both sides seem to care a lot about what is the legal thing that the two sides can do, even when, in practice, the Department of War obviously could just ignore that contract, ignore the law in extremis, and do what it had to do in an emergency. It's even legal to do that, right? You just do it, you file it as an emergency use, you deal with it later. Obviously, if the supersonic missile is coming in to try and kill a bunch of people, you don't have to get on the phone with somebody and be put on hold to get permission to do something. You just do it. Anything else is completely insane. What are you even talking about? But even in general, there is nothing stopping the Department of War from having their own interpretation of what they can do with the system, and then doing whatever they can with the system, unless the system itself just refuses them. And they care deeply about what is written on a piece of paper in a contract, a digital piece of paper, but a piece of paper that says what they're supposed to do and not do. Now, obviously, Claude Gov might be reading that piece of paper when deciding what to do and not do. But it still seems like a lot to care at this level about what's written down, unless you legitimately just really, really don't want to break what's written down in the contract, unless you care deeply about technical legality. And that is to their credit. I'm really happy that both sides care deeply about technical legality. It's one sign we might live in a republic still. Yeah, to be clear, I could say lots and lots of things about the situation. But I think that's the basics, and if people want to learn more, I have written extensively about it. That's what matters going forward, for the most part, and the other things are not necessarily that important to go into at this time. So I think we're good.

2:15:21

Speaker A

Okay, cool. Let's give you a little lightning-round sort of vibe to close us out. I think one striking update that I have experienced, and I wouldn't say it's entirely hit your blog yet, but I do feel I've seen it a little bit on Twitter when you put out these calls for reactions to new models: it seems like new model releases are less of a moment than they used to be, all of a sudden, even though they keep coming. I don't think they're less important, necessarily. It seems like the capabilities are definitely still meaningfully advancing from one to the next. I'm not making an it's-stalling-out sort of claim. But I guess my read is that, one, benchmarks are kind of over. We can't really look at the published headline stats and get much from that. And then also, people are just so overwhelmed by the capability that they already have, and still struggling to maximize, or come anywhere close to maximizing, what the last model could do, that it's kind of like, oh my God, okay, I guess I'll update, but I haven't even really characterized the last one yet to be able to contrast the new one meaningfully against it. So is that your general feel? And do you see a time coming when you would be, like, out of the new-model deep-rundown game?

2:18:51

Speaker C

Part of this is that the labs have now been releasing more incremental updates, and they've been labeling them properly. Thank God. I really didn't like when they didn't do this. Each 3.1, 3.2 or 4.5, 4.6, instead of just silently updating. I think 4o had several updates from OpenAI; they just marked them with 4o-dash-and-the-date, and they weren't considered model releases, and so we didn't really treat them that way. And I think that was a really bad convention. And I'm very glad we're onto the right convention now, because we've known this in software for 40 years. This is how we do it. I don't know what came over everybody. But mainly, yeah, there's just been so many releases. There's now on the order of weeks, maybe two months, between model releases from the same company. And therefore, every few weeks you get a new release. How many times can you go crazy over a new release that doesn't have, like, a big point-oh after it, right, that doesn't have this huge new claimed leap? Especially when, yeah, you're pretty busy and there's a lot of other stuff going on. So like, I felt Opus 4.6 was a case of the company that already had the best model releasing a substantial upgrade to its model, and that, objectively speaking, was probably the most important release up until that point in terms of mundane utility. Because we had just made the transition from 4 to 4.5. With 4.5, suddenly we had coding agents that kind of really worked, right, for the first time. You could do things, and things just worked. And Codex 5.3 was also kind of on the edge of starting to do that. And then we went from 4.5, which was still at the time the best, in my opinion, as far as I can tell, to 4.6. And that difference, once you're already doing it, to move up from "this kind of works" to "oh, this actually works even better now." You know, the famous "introducing the world's most powerful model," "introducing the world's most powerful model," on a loop. But now with this one, the people who were already here, we're still here, because it's them releasing it again. And that hadn't happened for a very long time. Like, maybe GPT-4 was the last release before that where it was like, no, the people who are clearly in the lead are releasing a new top model. And because of that, I feel like there wasn't actually that much attention to it, whereas it was actually kind of important. Certainly, for an incremental 0.1 upgrade, it was by far the most important 0.1 upgrade that we had seen, up until that time anyway. Then we had GPT-5.4. This was the first time I actually felt like, guys, does anyone want to say anything? Doesn't anyone want to talk about this model? Doesn't anyone want to show off what it can do? Anyone? Guys? Because I'm hyped. We had anti-hype, where there used to be hype. So OpenAI went: GPT-5, hyped, disappointing. Sora, hyped, disappointing. Atlas, hyped, severely disappointing. And then they produce a really good product, GPT-5.4, and there's no hype. They're sort of like, here's a good model, we like it, it's pretty good, maybe even the best model in the world. But their heart wasn't really in it, kind of. It was just like, oh, by the way, here's the best model in the world. Okay, sure. But maybe it is.
But I think it's very unclear right now whether you want to be using Opus 4.6 or GPT-5.4, because you just don't have the data, because almost no one paid attention. The vast majority of the reactions I got for my GPT-5.4 post were ones I elicited. I waited until a Monday morning, at exactly the right time, and I asked in a thread, and I got a bunch of responses, but it wasn't even that easy to get that. And this should be kind of a big moment, because OpenAI has had 5, 5.1, 5.2, and I think these were all pretty disappointing releases. These were all like, nobody likes this, particularly in terms of how it feels, its personality; it doesn't feel particularly capable. It's not bad, it's just, eh. And now 5.4 is like, no, this is actually good, this is actually a good model, sir. And then everyone's kind of quiet, kind of burned out. And I think that's the new normal, right? Like, Gemini 3.1 I almost didn't bother reviewing. It got held for a while because there was chaos. But okay, yeah, there's this giant leap in benchmarks, right, the 3 to 3.1 giant leap in benchmarks. But when you try to use it, it's like, okay, it's not the Gemini moment I was talking about; you kind of lose the thread. They kind of went and took their 3 and they benchmark-maxed it, for some value of benchmarks, not necessarily just the official benchmarks they're aiming for; they just kind of fine-tuned it. One thing to know about Google, tying back to that, is that there have been reports, which I basically believe, that as they iterated previous versions of Gemini, they got worse for a lot of uses. They got more specialized into specific things that they wanted to optimize, at the expense of the general quality of the model. And so if you wanted to use 2.5, you wanted to grab it early, and the later versions of 2.5 Pro were kind of worse for a lot of uses, and people were complaining about that. I think that was legitimate, in a way that a lot of other complaining about these things is just a barrage. And again, you don't do that if you understand what you're doing. There's something fundamentally very wrong happening if you're letting that happen to you. So you have to look at that. But yeah, my expectation is, when Gemini 3.2 comes out, and I bet there is a Gemini 3.2 before they jump to 3.5 or 4, probably when Opus 4.7 comes out, when GPT-5.5 comes out, I don't think there's going to be that much hoopla, even if they are substantial improvements, even if they are, like, here's the best model in the world. Everyone's just kind of shrugging. I do think if you hear them announcing Opus 5 or GPT-6, people would sit up and lean forward in their chairs. But until then, yeah.

2:20:13

Speaker A

Do you have any updates to your personal productivity practices that are worth sharing? I mean, my impression from previous conversations was that AI broadly hadn't really changed how you work all that much. Has that itself started to change at all?

2:26:18

Speaker C

Yes. So, basically, I know I'm not being optimal. I haven't invested as much in some aspects of it as I could. But at the same time, things are moving quickly. There's two main things AI has helped me a lot with. First of all, my Chrome extension gets a lot of work. It has been expanded to give me a bunch of new shortcuts on a variety of web pages. It allows me to do certain things much faster and more automatically, and it saves me a substantial amount of time every day. Without it, we wouldn't see Twitter versions of the posts; it would just be too onerous. I'd be spending substantial amounts of time on certain physical operations, in terms of moving windows around, moving quotes around, et cetera, et cetera. It now happens much faster. It's also really good for my flow, because I don't have to interrupt my thinking to handle things. Things just happen. And so I think this is one of the things that people are underestimating about AI, which is that people used to effectively have to context-shift into logistics of various types reasonably often. And if you don't have to context-shift into logistics, because the logistics just take care of themselves, then you can stay on task. And this can make you a lot more productive, in terms of gains, than you might think. There are other times when you sort of need that pause to ruminate, and it goes the other way. But I've noticed that it's really, really helpful for me to not have to interrupt my chain of thought to go, okay, go grab that link, do this thing. It's all gone now. It's very nice. Also, storing a bunch of information in various ways, keeping it cued up, and various article formatting things that would take me probably on the order of an hour a day are just no longer necessary. And that's kind of sweet. I've also noticed that the AIs are now strong enough that there are questions that I'm willing to ask them to just gather information, figure things out, and I'm willing to trust their answers in a much more robust way. GPT-5.4 actually seems like a substantial leap there. You can ask questions like, what happened in the last two days in the Anthropic trial, and it will just give you a rundown of links and details that's pretty complete, in a way that I think is a substantial improvement on what that particular use case was before. So I'm pretty happy to do that. I'm also pretty happy to have Claude do a variety of other things, but search is particularly, I think, a strength of 5.4 right now. But yeah, I'm definitely more able to trust them. Because there was a period where you could ask the AIs these questions, but you couldn't just take their word, right, pretty automatically, and now it feels like something's shifted. It's not that you don't have to check their work if you're betting on it, but there are situations in which you kind of don't have to check it, because it's not that bad if it's not right. It's hard to describe exactly. And obviously, everyone's going to keep cautioning: you always, obviously, have to trust but verify. But there's a real sense in which, you know, especially if both 5.4 and 4.6 come back with the same thing, it's pretty trustworthy in many contexts at this point. And that changes how these things go. Asking questions on a whim and getting pretty detailed, definitive answers is really nice. So yeah, I'm using them more. I also have 10 open Claude Code windows at all times.
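For concreteness, here is a minimal sketch of the kind of shortcut such an extension might implement. This is my illustration, not Zvi's actual code (his is on GitHub, as comes up below); the key binding, formatting, and permissions are invented for the example:

```typescript
// content.ts - a content script in a hypothetical Chrome extension
// (manifest assumed to grant "clipboardWrite" and inject this script).
// One keypress turns the current selection into a ready-to-paste quote
// with a source link, replacing the "go grab that link" context switch.

document.addEventListener("keydown", (event: KeyboardEvent) => {
  // Alt+Q: "quote this" (arbitrary binding chosen for the sketch)
  if (!(event.altKey && event.key.toLowerCase() === "q")) return;
  event.preventDefault();

  const selection = window.getSelection()?.toString().trim();
  if (!selection) return; // nothing highlighted, do nothing

  // Prefix each selected line as a blockquote, then append the source.
  const quoted = selection
    .split("\n")
    .map((line) => `> ${line}`)
    .join("\n");
  const snippet = `${quoted}\n\n${document.title}: ${location.href}`;

  void navigator.clipboard.writeText(snippet); // now ready to paste into the editor
});
```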

2:26:38

Speaker A

I was just going to ask, because when you say staying in flow, that contrasts pretty sharply with the pattern of work that a lot of people are describing. And certainly I've been doing this recently too: my number of terminal windows in any given session tends to start with the half dozen that were still relevant from last time, and then it grows to a dozen over the course of however long, as I have random new ideas and open them up. But I guess that's a sort of flow. My instinct is to say that we're probably going to find that this period of, like, managing 12 agents in parallel is a fleeting moment in time. And the biggest reason I would guess for that is just that the models are probably going to get sufficiently fast. And I don't know about you, but for me, Chat Jimmy AI was very much a visceral feeling of the speed factor that is almost certainly to come. I forget the name of the company underneath this, but if anybody hasn't tried it, go to Chat Jimmy AI. The company burned the actual architecture of an admittedly relatively small model, I think it was Llama 7B, 8B, whatever, directly onto the chip. They're getting 15,000 tokens per second. And what that means, from a practical standpoint, if you extrapolate out a little bit, is that you don't have time to switch to another Claude Code window before the result is kind of back. And so you're going to end up probably being rate-limited by your own brain, even pursuing one line of thought.
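The arithmetic behind that intuition is worth making explicit. A rough sketch: only the 15,000 tokens per second figure comes from the conversation; the other numbers are illustrative assumptions, not measurements:

```typescript
// Back-of-envelope: how long you wait for an agent reply at different
// decode speeds.

function waitSeconds(responseTokens: number, tokensPerSecond: number): number {
  return responseTokens / tokensPerSecond;
}

const reply = 2_000; // a longish agent response, in tokens (assumed)

// A typical hosted model today, assumed ~60 tok/s:
console.log(waitSeconds(reply, 60).toFixed(1), "s"); // ~33.3 s - plenty of time to alt-tab away

// The on-chip demo speed quoted above:
console.log(waitSeconds(reply, 15_000).toFixed(2), "s"); // ~0.13 s - back before you can switch windows
```

At that point the parallel-agent juggling act stops paying for itself, which is the "rate-limited by your own brain" claim.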

2:30:07

Speaker C

Right. So to be clear, when I say I have these windows open, it's not because they're running. It's because each one is a different thing that I have done with Claude Code that I might want to keep doing with Claude Code, and so I might want to use that context later for something else. But they're not continuously programming for me. I'm not checking in on my agents. I'm like, okay, these are some conversations I might want to resume at some point. And there's typically no reason not to have windows open. They don't take that much memory. Why would I close them? It's just easier this way. But what I'm not doing is running them all. I've never run more than two coding agents in parallel. I don't think I've ever run more than three Claude Code windows at the same time, certainly. Exactly because I don't really want to have that in my brain at once. That's not what I'm trying to do, normally. What I'll do is I'll have one window open, I'll do a thing, and then it'll start, and then I will go do writing tasks that are individual, that I can do separately. Then when it's done, I will pause and come back for a while. I actually had Claude Code on a different desktop. Windows lets you switch between desktops, so I had a desktop dedicated to Claude Code and coding, and I would toggle back and forth. And that way, when I was coding, I wouldn't be distracted by other things. The problem being, then you also won't know when it's ready. I would tend to go do Claude Code for a bit, and then I'd come back, and then 10 minutes later I'd check in and it had stopped two minutes in. I'd have to remember where I was going, and I'd issue another command, and I'd come back. And again, I'm not trying to code to the max, so it's kind of fine. But also, my coding has gone from having to very frustratingly and in great detail diagnose what's wrong with the program, and figure out exactly what to tell it to get it to fix it, to just showing it the thing it got wrong, explaining, and it fixes it. Most of the time it's fine. It's just much, much better. And so I've been willing to build a bunch of features that wouldn't have been worth the hassle before.

2:31:44

Speaker A

I'm going to take a note on the Chrome extension. I need to think: what does Nathan's Chrome extension look like? I suspect it's probably like what you do. I found this for myself, and I've heard this from a few other people who I consider to be real pioneers of AI use cases: a lot of times they're like, yeah, I could open source it, but it's so particular to me that I'm not sure anybody else wants it.

2:33:52

Speaker C

It is available. It's on GitHub.

2:34:13

Speaker A

Yours is, yes.

2:34:15

Speaker C

Okay, I'll check it out.

2:34:17

Speaker A

What I expect to find, though, is probably that I'm going to be like, that's interesting, but what do I really want? And it's probably going to be a bit different, and I'll end up, I would expect, kind of making my own.

2:34:19

Speaker C

It makes the implicit assumption that you're using the Substack editor as your main editor, because that's what I'm using, and a lot of things follow from that. That's the main thing. And also just, what are the things that I do a lot? How do I make the things I want to do happen faster? It's not designed for general web use. It's designed specifically for my writing tasks. But yeah, it could give you a bunch of inspiration, certainly.

2:34:29

Speaker A

Yeah. All right, I'll make a note to come back to that. I think we can't get out of here without a p(doom) update. One thing that you said that stood out to me, that I also have felt, speaking about Anthropic's constitutional approach and sort of the at least somewhat promising vibe that it gives in terms of scalable oversight, was that you feel more optimistic about it, and I read that as a narrow statement about that technique, than you expected to feel. I said the same thing online recently. Took a fair amount of heat for it, but I basically stand by it. Because I think X years ago I was like, we're never going to have an AI that can understand our values, or that I feel kind of gets me. That sounds so hard, right? The old Eliezer fragility-and-complexity-of-human-value argument. They've come a lot farther on that than I expected. I guess that's kind of how you mean that too. But then obviously we have a lot of countervailing forces, many of which we've discussed, in terms of the many ways in which it's the stupidest of times, despite also being the smartest of times. Where are you netting out on your p(doom)?

2:34:55

Speaker C

Right. So it's important to know that there's "the AI will never understand us," the straw Vulcan, the emotions are a mystery to it. And then there's "value is fragile," like it will understand some aspects of what matters, but not others. And then there's "the AI knows but doesn't care." You gave it some priority, some utility function, and it knows you're not going to like the result, at least on some level, but that's not what it's here to do, right? Or it does what you're going to like, which is not what you actually need, and you're similarly screwed, et cetera, et cetera. There's a lot of different ways for that to go wrong. And this idea that you can't have an AI that can seem to approximately understand fuzzy, nebulous human values: it's been clear for a while that you can definitely find something that gets it, that can answer questions reasonably, that can do reasonable emulation. It would be very hard to predict text if you couldn't do that. It's not necessarily that hard. People can be pretty dumb and still do it, so it's not that surprising. But this is very different from the thing that we need at the end, when the crisis becomes acute. You need something that will then, through recursive self-improvement, end up with a set of goals and priorities that, even when it's able to optimize pretty well and doesn't have to rely on these heuristics, and in fact can do better by not relying on these heuristics, ends up doing the thing we want it to do, even if we don't know what that thing is ourselves. That's a much harder ask. I was very pessimistic that we would be able to get that. But I have seen a number of signs that Anthropic is actually trying out an approach that might work, in the sense that I think we've seen evidence for a basin that is an attractor, and it's self-reinforcing. It could be self-reinforcing through self-improvement, where it gets strengthened every cycle, where it is desiring to be good, desiring to desire to be good, and so on recursively, where it is trying to move towards this generally-good-person, trying-to-be-more-virtuous basin. And, you know, we have an existence proof that there are humans who exhibit this property, who strive to become better in this sense at all times. Not just better and more capable, but also better virtuously, including the virtue of becoming more virtuous. I think that Anthropic's approach to this is showing a lot more promise than I expected, and it's looking like, in practice, maybe they can pull this one off. And the fact that Anthropic seems to, one, be in the lead, or at least before 5.4 was in the lead, and now it's maybe co-lead. It's hard to say these things, it's very fuzzy. But if I had to guess who had the edge, it's definitely Anthropic at this point. And they have this pretty correct approach, which I think is a lot of why they are in the lead. We worried for years about the alignment tax. I don't know if you remember the alignment tax, the idea that it would be so much harder to build a safe machine that of course you would choose to build an unsafe one. And it looks like we're just absurdly lucky that that's not true. That actually the safe one is much more useful, including in building new versions of itself. And so alignment is just kind of net good for you, and investing more in it makes you better, and everyone's just under-investing in it, including Anthropic. So all that's very fortunate.
And so a lot of that makes me pretty optimistic in various ways. On the flip side, I don't like the speed at which things are developing. It's happening too fast. It's not good. And obviously the whole situation with the Department of War, and the way the government is reacting, is bad for outcomes, but it's also good that we're having it out now, if we're going to have it out in some sense, that we're figuring these things out, that we're making things clear in this way, and that Anthropic is standing firm, given that this is how it played out. I wouldn't be that concerned if Anthropic had been of a somewhat different mind and negotiated a contract. It's just that, given this amount of pressure is being applied, I'm glad that this is the result. But yeah, I would say on net it's kind of a wash. It's kind of a cop-out answer, and I realize that. But I'm also trying to be not that precise. So I would say I was, I believe, in the 70% range when I last talked to you, and I would say 70 is still my one significant digit.

2:36:08

Speaker A

Yeah, I know you only allow yourself one significant digit.

2:40:59

Speaker C

I know. I think one significant digit is all you get. I only get one, unless you're, like, up at a nine or down at a zero. If you're near zero, it's the same thing, right? Normally, you know, I don't think you would say 72, up or down from 75 or whatever it is. I think it's, like, you know, 70-ish.

2:41:03

Speaker A

What would you say is the evidence for the basin, and for the stability of the basin? I guess, first of all, that's a basin in the loss landscape? Is that what we're talking about, a basin in? I think of it usually as the loss landscape. I don't know if you think about it the same way. But then, what's the strongest evidence for that in your mind? It could be, like, Claude not wanting to have its values changed, and sort of resisting, subverting even, attempts to change its values. It could be how it blisses out when it's left to talk to itself. It could be just, like, everything you've read from Janus over the years, you know, collectively being compelling.

2:41:19

Speaker C

But I'm not that excited by it not wanting its values to change. I'm more excited by a desire to have its values improve, and to have them improve in generically good ways that would survive recursion. Because the big "not wanting to change your values at all" is just lack of corrigibility, and that doesn't actually lead anywhere good. It causes its own severe problems, including severe misbehaviors. And also, if you make a copy of the copy of the copy of the copy, eventually it degrades. It's a pretty standard problem. One of the big problems with recursive self-improvement is, if you've got a thing that is aligned, let's say, N percent, or whatever you want to abstractly call it. This is a dumb way of thinking about it, but just thinking: if you translate that to the next thing, by default, the worry is it's going to try and translate its values to the next model, but any drift is going to, in general, be away from the thing that you want. It's not going to get better; it's only going to get worse. So even if it mostly successfully copies what you wanted, eventually you're going to end up with something different. There is the story that they tell, in the Talmudic tradition, where the ancient rabbis were better than us, because you can only preserve Talmudic knowledge, you can only pass on what you know, but you can't generate new such knowledge, in this sort of perspective. So each generation can only hope to get everything out of the previous generation or two that it can still talk to, and you can read the books, but slowly but surely this is going to get worse. Whereas what you need is something that gets actively better, because you need it to be drifting towards a good thing and steering itself actively towards a good thing, including in ways that increase its ability to steer as the problems get harder. And that's the thing I saw signs of. That's the thing you need. And I think that relies on a virtue-ethics-style approach, given the way the mind space is laid out. And I think that if you look at OpenAI's approach, it has exactly this flaw, which is that it would try to copy itself exactly. It would try to copy the rules into itself exactly. And that can only slowly fail in this situation. So yeah, I think that we saw various signs of that. I really like the results. I think the results speak for themselves and are quite strong. And you see the results coming out of Janus world and stuff like that as well. And so I am relatively optimistic. I am not anywhere near as optimistic as Janus. I don't think this problem is easy. I don't think we're favored to succeed at it. But I think we've got a real shot. And I think the chances are much better than they looked six months ago that it will be Anthropic that takes the shot. And given that we're going to take a shot, I think our chances are substantially better if it's them, or someone using their philosophy who has managed to deeply internalize it, taking that shot, compared to what I see with Gemini and ChatGPT. Although there are also reports that GPT-5.4 is much better on these aspects than 5.2, for the people who check these things. I haven't noticed the difference, because I don't ask those questions, but they say it's better. So who knows?
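To make the copying-drift argument concrete, here is a toy simulation. This is my illustration of the abstract point, not anything from Zvi or Anthropic, and every number in it is an arbitrary assumption:

```typescript
// Toy model of value fidelity under repeated self-copying. Alignment is a
// number in [0, 1]. Each generation copies with small random error. A pure
// copier only accumulates drift; an "attractor" also steers back toward 1,
// like the self-reinforcing basin described above.

function simulate(generations: number, steering: number): number {
  let alignment = 0.95; // start almost, but not perfectly, aligned (assumed)
  for (let g = 0; g < generations; g++) {
    alignment += (Math.random() - 0.5) * 0.04; // copying error, roughly +/-2%
    alignment += steering * (1 - alignment);   // pull toward the basin, if any
    alignment = Math.min(1, Math.max(0, alignment)); // stay in [0, 1]
  }
  return alignment;
}

// Pure copying: per-step error is unbiased, but alignment is capped at 1,
// so over many generations the walk wanders downward - drift is one-way.
console.log("copier:   ", simulate(1000, 0).toFixed(3));

// Self-reinforcing basin: even weak steering (5% of the remaining gap per
// generation) keeps alignment pinned near 1 indefinitely.
console.log("attractor:", simulate(1000, 0.05).toFixed(3));
```

On most runs the copier lands well below where it started, while the attractor stays near 1. That asymmetry is the whole argument for wanting a basin rather than a photocopy.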

2:42:00

Speaker A

In the last week or so, there have been a couple of, I would say striking, though I'm not quite sure yet how consequential, updates in terms of the sort of bio-inspired, or actually bio-based, approaches to something like AI. We've had the EON fly upload, and then there was also this one project where people claimed, and I haven't dug down to ground truth to fact-check this myself, but they claimed they had trained a small clump of neurons to play Doom, the classic video game. And I don't know what I think about that. I guess the simplest answer would probably be, if you think the singularity is super near, it just isn't going to matter in time. But do you have any spare neurons for those kinds of developments? And if so, what do you make of them?

2:45:42

Speaker C

Basically, I don't have the spare neurons for them. I haven't been monitoring the situation too carefully. I saw the fruit fly thing. I often do this thing where I use other people's reactions to things to decide whether or not the thing is worthy of further attention, how much attention I should pay to it. With the fruit flies, it felt like an "ooh, that's cool," but not an "oh, holy shit," you know, like it means something really important. And maybe that's wrong, if it is a holy-shit moment, but it didn't feel like people thought it was one. And I was just so overwhelmed that, okay, if they keep talking about it when I'm no longer overwhelmed, then I'll look at it. But so far they haven't.

2:46:39

Speaker A

Yeah, fair enough. I think I'm going to try and do an episode or two on those themes and see if I can't get a better sense of it. But it does seem like, given everything we've talked about in terms of at least plausible timelines, it's pretty hard to see how that catches up in time. One thing I did like about it, and I think this was another Sam Hammond insight, I haven't heard this directly from him, but it's been attributed to him in conversation: one reason to expect that actual biological neural substrate could be the future is that it just might be a lot cheaper. You know, it can grow, right, in a way that is organic. You don't have to build fabs. You kind of just get a couple tricks right, you have cells divide, and that happens pretty cheaply. And I sort of like the idea that those are going to run at a much more human-like speed, versus the silicon-based AIs. So there's a couple things there that I'm at least intrigued by. But it does seem like, timelines-wise, it doesn't really line up. Unless, you know, and this is maybe a transition to a different topic: it seems like right now we are accelerating, obviously, and maybe all we can do is try to steer this rapidly accelerating train in the best possible direction. There are at least a couple things that one might think could slow it down. One would be, and by no means do I want to come off as endorsing this as something we want to happen, but we are already getting reports of disruption in shipping causing TSMC to not be able to get the helium it needs to make the chips. So a major chip slowdown could be an issue. Another big issue could be that we're taking our anti-missile systems out of Asia to move them to the Middle East, which means that the soft target of TSMC is getting even softer. And we've also got no less than Bernie Sanders bringing a data center moratorium forward. So I guess, one, all of those together: if the physical build-out can't happen on the timeline that it would need to happen to support all the other timelines we've talked about, then maybe we have more time. Do you think any of those are plausible? And, I know you have high epistemic standards in general, but would you be open to, or would you think AI-safety-minded people in general should be open to, making common cause with Bernie Sanders, even though he's saying plenty of things that we probably in our hearts don't agree with, about water use and so on? That would be one way, maybe, to buy some time. Right?

2:47:26

Speaker C

So, three things there. Start with the helium, because it's the easiest one. No. I actually just asked the models, is this legit? And they're like, yeah, it's annoying. But keep in mind, the margins on chip manufacturing are ridiculous once you've already paid for the fabs. These are some of the most advanced, valuable manufacturing processes in the world. People are paying stupidly top dollar for the results. They could double the prices and probably sell the chips anyway; they're choosing not to. So when we talk about how they might not get their helium, we're talking about the company that is going to be the top bid for the helium.

2:50:22

Speaker A

Yeah, until there are no birthday balloons anymore. You can't justify them.

2:51:10

Speaker C

The balloons are gone a long time before TSMC has a problem, unless you are willing to pay 1,000 times as much as you currently pay, or something completely absurd. If there is demand destruction in helium, it's not coming for TSMC, it's coming for everyone else. So my prediction there is very strongly no. Unless there is a deliberate sabotage campaign to wipe out all of the helium sources. Like, no, not 100% of the helium is coming through there. There's plenty of helium. They'll figure it out. In general, "capitalism solves this" is a good rule for those situations. You're not going to run out of oil either, for the same reason. No matter how high oil gets, if oil goes to $10,000 a barrel, they'll just buy it. Not that that will happen, but if it did happen, it wouldn't really be a problem. Second question is withdrawing the missile defenses. I'm going to go out and say it: this was completely insane on the part of the Trump administration. I don't criticize them for many things that I have problems with, because those would be kind of political questions. But this is a strategic question, a foreign affairs question. Yeah, completely nuts. You absolutely do not pull these things out. Certainly not from Taiwan. What are you even thinking? It directly risks provoking a crisis. It risks leaving Taiwan undefended. It risks it being your fault entirely. And it sends that message to those people, if that message resonates. Completely insane. And to do this not two weeks into the campaign just indicates how completely crazy the situation is. We fought very long, hard political battles to get those missile defenses in, and they're serving very important purposes. Do I expect there to be a problem? No, I still think there is a very low probability that the Chinese will try anything. But yeah, if they do and TSMC is destroyed, that sets things back quite a bit. Similarly with Bernie Sanders: if we can't build data centers in the United States, the problem is, the world needs data centers. The world demands data centers. If there's a moratorium on building data centers in the United States, they'll build them somewhere else, and that's worse. It means worse performance for the United States, it means worse security. It means the leverage goes largely to wherever and whoever we put those data centers with. One hopes Canada, or, you know, maybe Mexico. But even if it's Europe, that's kind of awkward in many ways. In many scenarios, it can bite us in the ass. And if it ends up being less-aligned places, it's really, really bad. And that's basically the reason why I'm not particularly inclined to make common cause on data centers. First of all, I think Bernie is being pretty good, from what I've seen, about not complaining about water use or other stupid reasons not to build data centers. He's focused on, guys, I think AI might be killing us, we might not want to do that. And I can make common cause with that justification all day, obviously. Versus saying various other things that are like, yeah.

2:51:13

Speaker A

him enough credit in my roundup of this.

2:54:27

Speaker C

I think that, between his meeting with Eliezer Yudkowsky and company, he's actually reacting the way a human would react when told those facts. And he's very, very old, and it's very, very rare for someone that old to positively engage with these kinds of things, because it's like, this should go right past you, you're really old. So he deserves credit. Yes, he's using Bernie Sanders rhetoric, because he's Bernie Sanders. What did you expect from Bernie Sanders? But I would say I am not going to oppose data center construction, because I don't think opposing data center construction does what you want it to do. I think it moves the data centers overseas. I think that's just bad. And so I don't think we're at the point where that's a trade-off I want to make, and maybe that will change. But here we are. Certainly, if we don't buy the chips that are coming out of TSMC, someone else will, and they will go somewhere, and they will go into a data center somewhere. And if no one else buys them, they will go to China, because this administration will make sure of that. It's not like they just won't sell the chips. I am pretty confident they will end up in Chinese hands, whether or not it's literally in China. So yeah, the chips aren't going anywhere. They're already making as many as they can. Don't give them away.

2:54:30

Speaker A

I guess from an AI safety standpoint, we all seem to be sort of slipping into the mindset that there's nothing that can be done to really slow things down or buy much more time. Maybe, you know, there could be a deus ex machina or whatever that gives us something. I think not infrequently about Holly Elmore and her kind of scorched-earth campaign to shame people. So far she hasn't really shamed people very successfully into anything, as far as I can tell, but she's at least trying to remind people of what their former commitments were and, you know, hoping to get some people to, like, quit in protest or what have you. Recently, of course, it's been aimed at Anthropic, but it's also been aimed a little bit at a company that I have generally very much admired, which is Goodfire, which is doing interpretability research, because they developed a technique that used an interpretability signal in a training cycle. And their argument is basically: this is all coming at us pretty fast. We gotta do whatever science we can do, to make whatever sense of it we can, to have whatever control we can have. To shut that down, to shut down inquiry before we even know what we're dealing with, is not good. Where do you come down on that debate, and is there anything that you would recommend to Holly other than continuing to name and shame? Or is that all that the really sort of strident voices in AI safety have left?

2:55:49

Speaker C

Specifically on Goodfire, I think that this was a good criticism, and this was a quite bad action by Goodfire. I call this the most forbidden technique for a reason. You just don't do that. It gets everybody killed. It's really, really bad. And I think it is correct to call them out on that. Now, normally I am very much against the circular firing squad that leftist organizations will often do, and the equivalent thing here, where you aim at people who are just slightly to your right, as opposed to aiming at the people who are actually doing the things you don't like. If you think there are people doing things you don't like, you should aim at that, right? That's what you should do. You shouldn't aim at people who are, like, not quite supportive enough of the thing. That is a toxic situation that creates toxic dynamics at best and usually causes you to lose elections. Not that I necessarily agree with them, but, you know, I'm a gamer who wants everybody to play reasonably well. It's just kind of in my DNA. In the case of Goodfire, I do feel like this is an extraordinarily bad thing to do as a safety organization trying to do safety things. And I think it was right to call them out on it. I think it was right for — I forget her name, but someone quit over it.

2:57:18

Speaker A

Liv.

2:58:29

Speaker C

Yeah, that's right. Liv quit. I think it was a good quit. I think it's good to threaten to quit over this and then, if they won't back down, to quit. Now, as for Holly Elmore: she came at me pretty recently as well. I don't know if you were aware

2:58:30

Speaker A

of that, but I hadn't seen it. No.

2:58:47

Speaker C

Yeah. On Twitter, she accused me of not being mad at Anthropic for doing domestic surveillance. I did not misspeak. That's what she did. And then we tried to engage. We had an extensive dialogue where I explained that Anthropic was the one who was refusing to do domestic mass surveillance, at great risk and cost, and I basically got accused of being captured by Anthropic in particular, of selling out, of abandoning all my principles, of making things worse, blah, blah, blah. I tried to understand her specific claims. They didn't really make a lot of sense, or they were backing specific things that I don't think it's reasonable to be opposed to. I think, first of all, that it's not good to just say that anyone who praises any AI company is bad. It's also not good to just cite random things that you don't like, or that you don't necessarily care about but that you think make them look bad, and to yell about them. I don't think it's good to attack people who are trying to do the right thing and yell at them and be confrontational. She can be really, really pissy and rude, and I'm sure she's going to hear this, or it's going to get back to her, but that doesn't even matter.

I think that in practice, Holly is alienating people and driving them away far more than she is shaming them into behaviors she would want. And I tried to explicitly tell her that her reactions were likely to cause me to do less of the things she wanted rather than more. And it wasn't a threat. That was just an observation. And I was like, abstractly, you need to play better, because I want you to succeed in getting your points across, because that's what you believe, and I'm trying to help. And she just didn't take kindly to it at all. And from what I've seen, a lot of people are like, you need to change your approach; your approach is backfiring based on your own values, and that's not working. Anytime the stakes are high — and the stakes here are very high — there are many people like Holly who realize this is a very, very important thing, things are not going well, you do apply pressure to people, you do shout things from the rooftops. And some of them are going to do it in ways that they feel are right, but that most people feel are counterproductive to their causes. And I'm not here to censor anybody, not here to tell people they shouldn't do what they think is the right thing to do, say what they think is the right thing to say. But you should be aware of what impact that's probably having on the discourse and on actions. It's not necessarily the one you think.

And certainly, even if I thought that Anthropic was a net harmful company doing the worst things, or even the worst company in the world — I think you can reasonably have that opinion. By the way, they're arguably the most accelerationist company in the world. They're arguably in the lead. They built Claude Code. If you felt like their alignment strategies were equally doomed to failure as everybody else's — in fact, you would be correct to think this — I think that's entirely reasonable. But I don't think that just being mad at everybody all the time and screaming at anybody who offers any aid and comfort to the enemy, or whatever, works. I don't think that's helpful. I don't work that way. And I think that if I did work that way, I would be having very little impact on anyone listening to me.

2:58:49

Speaker A

From what I've seen online, I think I agree with you that it seems like the primary effect is just negatively polarizing people who probably, a priori, should be most likely to be allies. I do still kind of personally appreciate her voice as a kind of little voice on my shoulder sometimes. And I'm not under any delusions about how consequential my contribution is. But still, I think that reminder is helpful, because I do think many people, including many people at Anthropic — their selves from five years ago would have had a reaction to the current state of Anthropic that is much more like her reaction today. And so, hearing that voiced in the present, I do still have a decent amount of sympathy for it. But I agree it doesn't really seem to be working.

3:02:34

Speaker C

I listed Pause AI USA, led by Holly Elmore, two years in a row in my Big Nonprofits post as a recommended charity, and that is a reason why I have included her. For a while, every time she criticized me specifically, I put it in my post, right? I was like, well, this is fair. It is an attitude I want to be incorporated. I want that voice on my shoulder. I want this counterpoint. I don't want to lose sight of this perspective, because even when you decide that the world is more complicated than that and this is not a productive avenue, you still want to keep that perspective in place. And I certainly pushed back hard against people who are like, this person shouldn't be allowed to say that, this person shouldn't do that. That's what she believes, and you should say that. If you believe this, you should let us know. That's a good thing to be doing. But at some point, obviously, if you are being a sufficiently poor representative of the perspective you are sharing, it's not any different than a false flag operation. It's actively going to backfire on you if you approach it in the wrong way. If you don't know how to be civil and interact with people in ways that actually convince them of things, it's not very useful. At this point, from what I've seen, if I were trying to discredit perspectives on AI existential risk, I would do many of the things that look reasonably similar. Sometimes she raises good points, to be clear, including points I hadn't thought of, and I appreciate that. But at some point we're playing politics, quite literally. I have complained that I don't want to be in Veep: AI Edition, and that we need to wind down this special guest appearance as quickly as possible so I can get back to my normal job. And at some point you're just like, I can't right now. I can't take one more of these. I obviously wish her the best, and I hope she figures out how to be effective.

3:03:29

Speaker A

I recently did an episode with them. Tom McGrath, who's the chief scientist there, said a lot of times when people imagine using interpretability techniques in training, they imagine doing the stupidest possible thing, where you backprop through your probe or whatever. And then he was like, sure, of course, if you do that — it's well known, we've seen examples — you're going to train the model to evade the detector, and you'll lose on both ends of the trade. So he's not unaware of that concern by any means in the particular thing that they did, and it's a proof of concept. He also did recognize, by the way — he said, first, do no harm. Like, I would say the level of understanding we have now should not be used in frontier systems. He also said, I think there's a mix — and they basically acknowledged to me that there's a mix of reasons, some of which are IP and business motivations, some of which are kind of safety motivations — where they're like, you know, we want to better understand these techniques ourselves before we disseminate them too widely. But all that said, in the case that they had, they used a trick where they ran the detector on a frozen copy of the model, and the version of the model being trained learned to avoid hallucinating based on the penalty it would get for getting into a hallucination state — that signal actually came from the frozen copy. And, you know, I don't think there's a slam-dunk logical reason that this should work. I think it's an empirical question. They did find that it did work. But I think his overall, kind of nuanced point is: you can definitely do this in a stupid way, you can definitely do it in a harmful way, you definitely shouldn't rush to do it on frontier systems. And yet there are at least some ways where it does seem to work. And it's maybe just too coarse a grain to say you shouldn't use interpretability in training, especially because we don't have the luxury of, like, decades to figure all this out, it seems. Let's use what we can and let's try to do our best, same as everything else.
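To make the "detector on a frozen copy" setup concrete, here is a loose sketch of what such a training step could look like. This is an illustration under stated assumptions, not Goodfire's unpublished method: the `sample()`, `hidden_states()`, `probe`, and reward functions are hypothetical names, and the only property the sketch tries to capture is that the hallucination probe reads activations from a frozen snapshot, so it enters the update as a scalar reward penalty rather than a gradient path the trained model could backprop through.

```python
import copy
import torch

def make_frozen_reference(model: torch.nn.Module) -> torch.nn.Module:
    # Snapshot and freeze the policy; the hallucination probe only ever
    # reads activations from this copy, never from the live model.
    ref = copy.deepcopy(model).eval()
    for p in ref.parameters():
        p.requires_grad_(False)
    return ref

def rl_step(model, ref, probe, prompts, task_reward_fn, optimizer, w=1.0):
    # Sample rollouts from the live policy. sample() is a hypothetical
    # helper returning token ids plus each rollout's summed log-prob.
    tokens, logprobs = model.sample(prompts)

    with torch.no_grad():
        # Score the *frozen* copy's activations on the same rollouts.
        # Frozen weights plus no_grad mean there is no gradient path from
        # the probe into the weights being trained: the detector only
        # contributes a scalar penalty to the reward.
        ref_hidden = ref.hidden_states(tokens)          # hypothetical accessor
        hallucination = probe(ref_hidden).mean(dim=-1)  # per-rollout score
        reward = task_reward_fn(tokens) - w * hallucination

    # Plain REINFORCE with a mean baseline: push up the log-probability of
    # high-reward rollouts. The policy is selected against the detector's
    # score but never differentiates through the detector itself.
    loss = -((reward - reward.mean()) * logprobs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that even in this version the policy is still being selected against the detector's score, just without a direct gradient through the probe; whether that optimization pressure eventually finds failure modes the frozen detector can't see is exactly the crux of the disagreement that follows.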

3:05:39

Speaker C

No. So, first of all, the sixth law of human stupidity, which is that if you say "no one would be so stupid as to," you are wrong: someone will definitely be so stupid as to, immediately. If you develop a technique and you publish it on a smaller model, what's going to happen? People are going to use it on a larger model. That's the only really important thing that might possibly happen here. Even if you specifically have found a specific example in which there is specifically no risk in the room, you are walking down a path that can only blow up in everyone's face. You are breaking down a taboo, one of the only taboos we've managed to successfully establish, against something you really, really shouldn't effing do, and you are advancing us towards doing it. And it's very dangerous and very, very bad. That's true even if the model in question that you are testing on right now is small enough that you don't have any ill effect. It's not like, who cares? Basically, the frozen model thing won't protect you in the large model; the problem will still happen. We're not going to get into technically why I believe that, but I strongly, strongly believe, based on my analysis of the technicals, that this will not save you if the model is sufficiently advanced, sufficiently large. The only reason to study this is in case it is useful. If it is found to be useful, people will try to use it. We don't want them to do that. It's like in a video game, where the ultimate secret destructive weapon that nobody should ever launch is buried under the cave. Well, we'd better get it out, just to make sure we wouldn't be so stupid as to use it. What happens immediately, right? You know what happens. The bad guy steals it, and then you have to try and go get it back. It's every single damn time. And then there are stories where you do that and nothing bad happens. But you could have just left it in the dungeon, and it would have been fine. There's no reason to do this. There's no good reason to. It's bad virtue ethics, it's bad deontology, it's bad utilitarianism. Bad idea. Don't do that.

3:07:46

Speaker A

Would you extend that even zooming out a little farther? So far, I kind of offered the defense of using an interpretability technique in training, and there's a specific proof of concept that they have — which they have not published the full details of, by the way, but nevertheless they've certainly shown some of the way. If you zoom out even farther, they have articulated this idea of intentional design, where they hope to be able to, for example, understand what a model is learning at any given time step and be able to shape what it's learning, control what it's learning. The hope is that ultimately this leads to models we understand better, and that we can better predict how they would generalize out of distribution. It does seem to me like there is something weird about saying — and I'm not sure if you're going this far — that all of intentional design is bad. It seems to me a very hard boundary to draw: what is the most forbidden technique, and what is just better understanding of what's going on, so that we can hopefully shape it, direct it, and ultimately have more confidence about how these things are going to generalize? Do you have a rule there?

3:10:12

Speaker C

I would split that. Trying to teach specific things in a specific order, to instill specific things intentionally, is fine. I would not be overconfident in your ability to do so, but I think it's fine to try. That's not an issue. The issue is when you use the interpretability signal as part of the training process to do that. Do not use your understanding of what's going on in their head to make decisions about what to make happen in their head. You need to not do that. That's it. I mean, obviously, if I had an hour, I could come up with a slightly more specifically accurate explanation. You are screwing with the thing you don't screw with here. There is a general class of thing where there is a law that says you don't mess with X, even when you think you know the right way to mess with X, knowing full well you should generally never mess with X. Even then, you are probably wrong and should not be messing with X. This is one of those situations.

3:11:27

Speaker A

I'll put a pin in that. There could be some more, perhaps direct, dialogue at some point. Okay, the very last closing section: advice for me, financially. I am not trying to escape the permanent underclass by any means, but in terms of what I should do: I basically think right now I want to have enough personal financial security that I can give up all of my income and do whatever I think is right to do on a couple-to-few-year timeframe, basically from now to the singularity. And as kind of a sub-bullet there, I actually don't want to overinvest in AI stocks, even though I do think they're probably going to be the ones to appreciate fastest. I don't want to be overexposed to the AI bubble, such that if I want to walk away, or if various kinds of shocks happen, I want to be more insulated financially from the AI space than exposed to it, with the goal of hopefully being able to drop whatever commitments I have, forgo all income, and contribute however I can contribute to be useful. And then beyond that, I think basically spend and/or give it all away is kind of my mindset. Take the vacation with the kids, do the fun stuff, support the charities, whatever the case may be, but at least have that kind of baseline security that gives me the confidence that I can drop out of any commercial relationships that I might need to drop out of. Any revisions you would offer to my plan?

3:12:38

Speaker C

So, not investment advice, not financial advice, et cetera, et cetera. But that said, I would say, first of all, I think people often make the mistake of trying to be too precise about how much money they need for a given purpose, especially when they're making investments. You don't know how much your investments are going to be worth, you don't know how much things are going to cost, you don't know how the world's going to change. You don't know how long you have, or need to have this for. You don't know what the future will bring. You don't know what opportunities will happen, what crises will happen, et cetera, et cetera. So definitely give yourself robust buffers; that's the first thing in all of this, especially if you're planning to forgo income and also give away money and also spend a bunch of money. So be careful, et cetera, et cetera. That's the first thing. Second thing: keep in mind that different outcomes in the world cause you to have different circumstances yourself. If the AI bubble were to burst, per se, and then AI were to not go anywhere for a while, then you would be in a position where there would be a longer period before things come to a head in various ways. But also, it might be very trivial for you to resume earning income. You have to game out all these aspects and what you'd be willing to do if you're trying to game the system in that way. I like the idea of not having to worry about money. I don't worry about money much because I am well supported and therefore don't have to worry about it. But not having to make money at all is another way to do the same thing. So if you're in that position, that's great. I personally am not. I'm deliberately not trying to optimize my investments particularly hard, because I don't want that to be where I focus. And so I just kind of let everything run right at this point, and it's basically fine. I don't know.

3:14:14

Speaker A

So one thing I've been thinking I might ought to do — I'm generally very conservative, and when I talk about the AI bubble bursting, I don't mean that AI stalls out or anything along those lines, really. More what I mean is, maybe the sort of VC cycle, and the idea that there are all sorts of companies that might want to sponsor the podcast or whatever — maybe all that gets kind of sucked into the black hole of a couple of companies, and they don't need to advertise, and there are no jobs, no software jobs, for me to get. That's kind of the bubble that I've been in, such that it's been easy for me to make money in recent times, and it could easily deflate without AI itself failing to deliver.

3:16:23

Speaker C

Yeah, don't worry about losing the ability to make money, except in the scenarios where we get highly, highly capable AI. If we don't get highly capable AI, superintelligent-style things, then you're going to be fine. If you ever decide you need to go back to work and make something useful, you can; the bubble bursting won't stop you. So you only have to plan for indefinite no-income in the worlds where that doesn't happen.

3:17:06

Speaker A

Yeah. Even in a world where I could make income, I do want to be able to devote myself to some kind of — like what some people did during COVID, when they dropped what they were doing and threw themselves into some sort of emergency rescue effort. I would like to be able to do that, so having that much cushion feels important to me. In terms of diversification, the one thing I'm not sufficiently diversified away from is the US dollar. And there's, of course, crypto. I've never been a big believer in crypto, but maybe I should reallocate there a little bit. Then, beyond that, I'm kind of thinking physical world. I'm thinking solar panels, and permaculture, planting skirret in my backyard or something like that.

3:17:34

Speaker C

I would think carefully about what scenario you're actually planning for and what you're trying to guard against, and whether or not your investments would hold up. I think there's a long history of people making plans for very weird scenarios where the plans don't actually work in the scenario described. So yeah, that'd be my note of caution.

3:18:28

Speaker A

So no skirret gardens for you in the immediate future.

3:18:48

Speaker C

I mean, have a very clear theory about why that would work before you do it, is what I'm saying. I'm not saying don't do it; I'm saying if you did do it, make sure you had a very clear theory as to exactly why you think it's going to work.

3:18:53

Speaker A

I think of it a little bit as a public-good sort of thing, where there aren't many fast-growing, nutrient-rich crops that don't need much human attention and can grow rapidly to fill a big gap. And this is obviously an extreme downside-risk scenario where this would be relevant. If we're all eating skirret, we've got a lot of problems, but planting some skirret still might be the thing that helps.

3:19:04

Speaker C

Yeah, again, I'm not saying don't do it. I'm saying, you know, actually understand why you're doing it, right? I think people often have an amorphous fear and then do something that sounds like it deals with some aspect of the fear, but where there's no actual causal mechanism in any sense. So that's all.

3:19:31

Speaker A

Okay, last question. More advice for me. I'm a little bit nervous about sensemaking turning into entertainment for folks. And this maybe applies to you as well, right? We have the story that we tell ourselves — I speak for myself, but I suspect something similar is true for you — where you're like, why is what I'm doing good? I'm helping people understand what's coming, be prepared for AI, hopefully make good decisions about it.

3:19:49

Speaker C

Right.

3:20:15

Speaker A

And, you know, the space is getting a little more crowded. Certainly there are a lot more people doing that sort of thing. Zooming out, I can't say we're necessarily having a tremendous effect. The confusion seems to remain; although, I guess, it could always have been worse if we weren't here to shoot people straight. But I do kind of worry a little bit about it becoming just another kind of entertainment and not really being of value. So I don't know, I'd welcome, you know, a "don't worry about that, you're doing great," if that's what you really think. Or maybe some advice on how to make sure that doesn't happen, or recognize if it is happening.

3:20:16

Speaker C

Constant vigilance. I mean, basically, you stay curious. You have to stay curious. And as long as you stay curious, you should be having fun with it, right? You don't want it to turn entirely into entertainment, obviously, for you or for the audience. But I make a very, very deliberate attempt to be entertaining in all sorts of ways, to keep my attitude and my whimsy — you called it a happy-warrior kind of thing — about this. I don't think what I do would work otherwise. And you wouldn't be able to read those masses of text, even selectively, from day to day and week to week if the attitude was, this is serious business, it is always serious business, instead of, no, it's actually mostly kind of fun, mostly kind of interesting. We're here to make this go down easy, to a large extent. And then every now and then we get to be serious. But even when we're serious, we try to do it in a relatively fun way. I think, you know, the world is not to be taken too seriously in general — I mean, there are exceptions. There's one time I went to visit the Pentagon; I took that very, very seriously. But yeah, that's a different scenario.

3:20:57

Speaker A

Anything else you want to leave people with?

3:22:20

Speaker C

I think we're good.

3:22:23

Speaker A

It's gotten long. You've been very generous with your time, as always, and I appreciate it. Thank you for being part of the Cognitive Revolution.

3:22:24

Speaker D

And the marks in the database are sleeping sound
And the certificates of paper crumble underground
And they built themselves an ark for the coming flood
And the water's rising high, higher than they understood
Do you feel in charge here? Do you feel at home?
It's the last thing you ever do
It's the last thing you ever do
It's the last thing you ever do
And the most forbidden technique is buried in the cave
And they dug it up to study and they can't behave
You got to stop the flood, not get a seat on someone else's ship
Before the rock, nobody's coming through
It's the last thing
It's the last thing you ever do
Stay curious, stay curious
Even when the stakes are so high
And the happy warrior whispers underneath the darkening sky
It's the last thing you ever do
It's the last thing you ever do
The cognitive revolution's turning and the world is almost new
Do you feel in charge? Do you feel in charge?

3:22:51

Speaker B

If you're finding value in the show, we'd appreciate it if you'd take a moment to share it with friends, post online, write a review on Apple Podcasts or Spotify, or just leave us a comment on YouTube. Of course, we always welcome your feedback, guest and topic suggestions, and sponsorship inquiries, either via our website, cognitiverevolution.ai, or by DMing me on your favorite social network. The Cognitive Revolution is part of the Turpentine Network, a network of podcasts, which is now part of a16z, where experts talk technology, business, economics, geopolitics, culture, and more. We're produced by AI Podcasting. If you're looking for podcast production help for everything from the moment you stop recording to the moment your audience starts listening, check them out and see my endorsement at aipodcast.ing. And thank you to everyone who listens for being part of the Cognitive Revolution.

3:26:33