Ben Thompson: Anthropic, the Pentagon, and the Limits of Private Power
Ben Thompson discusses the tension between Anthropic's AI safety principles and government demands for military cooperation, arguing that private companies building powerful AI systems will inevitably face pressure from governments with 'guns.' The conversation explores the implications of treating AI like nuclear weapons and the challenges of maintaining democratic oversight over transformative technologies.
- Private companies developing powerful AI cannot avoid political pressure - governments will ultimately assert control over technologies they view as strategic
- The economic model of AI development requires massive scale and consumer markets, making government-only development impractical, unlike nuclear weapons
- AI safety alignment must consider geopolitical realities, not just technical safeguards, as unilateral restrictions invite adverse outcomes
- The current legal framework for surveillance and military applications assumes physical friction that AI eliminates, requiring new legislation
- Democratic processes may struggle to keep pace with AI development, creating pressure for unelected executives to make consequential decisions
"You may not be interested in politics, but politics has an interest in you. What is politics? War by other means."
"If AI is as powerful as its builders claim, the people with guns are going to want to have a say."
"If you're not going to be subservient to us, you're not going to be allowed to build a power base, period."
"I would rather have Amodei making these decisions than whoever comes out of our screwed up democratic process."
"All these rights, all these laws are subject to the agreement of those governed by them, and the final say is those who successfully inflict violence."
You may not be interested in politics, but politics has an interest in you. What is politics? War by other means. You might not be interested in that. It is going to have an interest in you. If we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through what would happen in a world where a private company developed nuclear weapons. It's a tension that's been brewing for years, which is: are you an American company, subject to American law, and even beyond law, just morally compelled to support the US military, or not?
0:00
A private company built something powerful enough that the government threatened to destroy it for not cooperating. That's not hypothetical. It happened last week, when the Department of War designated Anthropic a supply chain risk after the company refused to remove safeguards against mass domestic surveillance and autonomous weapons. Ben Thompson's response wasn't to defend either side. It was to point out what almost no one else would: if AI is as powerful as its builders claim, the people with guns are going to want to have a say. Whether that means the US government compelling access, or China deciding to act because America is getting too powerful, these are no longer theoretical questions. In this conversation, previously aired on TBPN, John Coogan and Jordi Hays speak with Ben Thompson, founder of Stratechery, about his essay Anthropic and Alignment.
0:41
We have Ben Thompson in the Restream waiting room from Stratechery. Welcome to the show, Ben.
1:37
How are you doing? I'm good. Hopefully I have the right microphone turned on this time.
1:42
You do, and it sounds fantastic. Thank you so much for joining on short notice, and thank you for writing Anthropic and Alignment. It is a fantastic piece that I think covers all of my questions. But I want to start with: how did you process the weekend? How did you get to this particular place? And what is your key thesis with Anthropic and Alignment?
1:46
I mean, this is one of those ones, I don't know if it's good or bad that it came out sort of at the end of the week, so I had a lot of time to think about it. Ultimately, I think it was good, because I'm not sure anyone made the point as explicitly as I did, and maybe it was bad because I feel there are a lot of caveats that, in retrospect, I should have put in the article that would have addressed a lot of the points people are upset about. Basically, zooming out, this was not a normative article where I'm saying what's happening is good or bad, and that's really the one caveat I wish I had put on there. I mean, I'm out there being accused by, like, a Nilay Patel of a full-throated endorsement of fascism or something like that. And it's like, relax, okay? Can I get some credit for the last X number of years? Basically, there is a deep-rooted concern that I've had for a long time, and I'm now hesitant to even use EA as a term because it's been politicized, thanks to the events of the last week, but it's a failure to grapple with a world of guns, is basically the long and short of it. And I actually think Eliezer has been the one guy who's been honest about this, where he wrote that Time article about potentially bombing data centers someday. And that's actually a point worth bringing up, which is: all this stuff is right now in the digital realm, but with robotics and other potential applications, and with it obviously being used for military operations, it's crossing over into the physical realm. And if AI is as powerful as people say it's going to be, then there are going to be real-world reactions to that. If we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through what would happen in a world where a private company developed nuclear weapons. What would the government's response be? And that's not to say that the government response in that case is good or bad, or that it follows constitutional principles or whatever it might be. Obviously I want it to. On the surveillance point, I've been concerned about the application of computers to our surveillance laws for years. So many things in our society assumed a certain level of friction in doing things, friction that computers already obviated, and AI is going to do that on steroids. I do think we need new laws. I think all this stuff is correct, and I think the idea of AI being applied to these commercially purchased data sets, for example, is a huge problem that I don't want to happen. The concern I have is that if this technology is as powerful as it is on pace to be, unilaterally imposing restrictions, even if those restrictions are good, isn't just an issue as far as who rules us, the democracy issue that Palmer Luckey, I think, very eloquently raised; it's inviting very bad outcomes for those asserting them in general. And I feel there's been a lack of awareness of this. That's why I brought up the Taiwan-China thing. This has been a frustration I've had with Anthropic generally. Amodei has been very outspoken in terms of opposing selling chips to China, in a narrow aspect for very, very good reasons. My pushback has always been: what happens if we get super powerful AI and China doesn't? What are they going to do?
Suddenly, the optimal thing for them might be to just bomb TSMC out of existence, even with all the costs that entails. And then what are we going to do? Look, I don't like getting into political posts. It's not fun at all. I'm not having fun with this. It's not enjoyable, I can promise you that. And some people are like, well, you should have just made the post private. I'm like, no, I actually really want Anthropic and people associated with it to read this, because people have theorized for a while about what's going to happen as AI becomes more powerful, and now it's starting to happen for real. And over the weekend, part of it was just that I felt compelled to say this, and girding myself to do so. I haven't waded into this in a while, and it's no fun, but it is what it is.
2:09
Can you unpack a little bit more of that tweet you posted, where you did a find on the Dario article for Taiwan and saw that it wasn't mentioned?
7:04
Oh, I mean, I've sort of griped about this in general.
7:14
So do you just think he should be talking about the Taiwan issue more deliberately? That he should be messaging that? Why is it significant that he doesn't mention Taiwan?
7:19
Well, I think the position about not selling chips to China is a totally legitimate one. I understand the argument; I could make that argument if I needed to. But I have advocated the opposite: number one, not only should we be selling chips to China a generation or two behind, which has always been our standard practice with chips, we should also be allowing Chinese companies to fab with TSMC. That is a restriction that has come down. Now, these Huawei chips are somehow manufactured by TSMC, let's not look too closely at it, but we should explicitly be allowing it.
7:31
Okay.
8:09
And the reason for that is I think it is a safer equilibrium to have China dependent on Taiwan than to try to cut them off from Taiwan while we are dependent on Taiwan. Taiwan is 70 miles off the coast of China. It's not an ideal position for us to have a dependency on it while China does not. And this is the problem: everything going forward has massive trade-offs. The implication of letting China fab with TSMC, or letting them buy Nvidia chips, is that they gain the incredibly powerful AI capabilities that are driving this entire debate. That is, in a vacuum, not a good thing. But nothing's in a vacuum; everything is a trade-off. And in that specific area, he's repeatedly, again and again, being absolutist about the chip issue, while I'm frustrated not to see any public comment about... well, that's not quite fair. He has made comments that, oh yeah, it would slow down the adoption of AI in the long run if Taiwan got bombed. In my mind, that's an insufficient consideration of the possibility of Taiwan getting bombed. Now, again, I'm biased in that regard; I lived there for nearly two decades. But the reason I brought it up in this context is that if AI is what it is, the people with guns are going to want to have a say, whether that be domestically or internationally. That might be in the context of the US government just taking it, or trying to kill your company because they feel you're not cooperating. Or it might be in the context of China deciding it has to act because the US is becoming too powerful. It's not a fun debate. I do think the nuclear angle is a good one; it has echoes of the proliferation question, of mutual assured destruction, all those sorts of things. And that's just going to be the reality of the debate going forward. Again, it's not very fun, but I think it's also irresponsible to run away from it.
8:09
How much attention, or what kind of factor, do you think the information asymmetry between the Department of War and Anthropic played last week? It feels like, in hindsight, the Department of War knew they were headed into a major, and what is now looking like a drawn-out, conflict, while Anthropic was sitting there thinking, hey, we have this arbitrary deadline, why do we need to renegotiate this now? And if we're going off of Emil Michael's timeline, it sounds like they were still in the final hour trying to make a deal happen. According to Emil, Dario was in a meeting and was busy and wasn't really respecting the deadline, which maybe he felt was kind of artificial. But in hindsight, it now looks like it was significant, because the Department of War was taking the country into a conflict and wanted to know: hey, can we lean on one of our AI partners?
10:25
I don't know. I mean, the deadline seems pretty arbitrary, but I'm hesitant to speculate. I don't know what was going on; I don't know the angles. That's why I didn't delve too deeply into it. And I also think some of the specifics, like this supply chain risk designation, are probably overbroad.
11:27
Yeah.
11:48
And almost certainly the way it was stated in the tweet is definitely overbroad, if you actually go and read the statute. And again, this is where I wish I had put in more caveats to say: look, I'm not actually talking about all that stuff. It's not that I don't care, but that's not the point of this article. The point of this article is that there's all this talk about alignment; that's why I put it in the headline. On one hand, alignment is aligning AI with humanity generally. But for the foreseeable future, and you could have a philosophical argument about the long-term viability of nation-states in the age of the Internet, much less the age of AI, and that certainly is a more pressing conversation than probably ever before, Anthropic exists in the context of the United States. That's why I put in that quote: you may not be interested in politics, but politics has an interest in you. What is politics? War by other means. You might not be interested in that; it is going to have an interest in you. And there is, like I said, a certain long-standing frustration with not fully grappling with that fact, with having dorm-room theoretical arguments about AGI. You go back to that post over Christmas about AGI in 100 years and no one having any jobs, or being worthless or pointless or whatever, which included some implicit assumptions about property rights existing in 150 years as they exist today. News flash: if that happens, property rights as they exist today are going away. This is a philosophical argument; that's why I started with the international law concept. All these rights, all these laws, are subject to the agreement of those governed by them to follow them, and the final say is those who successfully inflict violence. And again, this isn't fun to think about. It's not pleasant. You would like to assume we operate in a world of laws, that everyone follows them and goes by them. But to the extent AI is as impactful and powerful as it is, the more these fundamental questions, questions we thought had been settled for hundreds of years, if not thousands of years, are going to be raised. And this is just the first of several episodes where I think that's going to happen.
11:49
I grew up post-Cold War, no duck-and-cover; I didn't have a lot of fear of nuclear Armageddon. But Dario Amodei is a fan of the book The Making of the Atomic Bomb, and it seems like he sort of predicted that if AI becomes super powerful, the US might take an approach similar to its regulation of nuclear weapons. And as I was thinking about that, I feel sort of good about the way nuclear weapons are regulated. I feel like we got the good ending: we haven't had a nuclear weapon dropped in 70 years, and things seem to be going as well as they can, considering there's this amazing, tremendously dangerous technology that exists but hasn't actually been used to bomb anyone. But how do you think he's processing that book? How do you think we should be processing the idea of the government running the same playbook it did with nuclear weapons?
14:18
It's pretty interesting. On one hand, just from a physical perspective, dealing with weights and software is very different from dealing with fissionable material, or I guess the super bombs are actually fusion devices, right? That material is trackable, it is interceptible. You know when Iran, to take a pertinent example, is trying to build enrichment facilities. All of which makes the problem easier to solve.
15:25
Yeah.
15:55
So that's difference number one. Difference number two, and I really wish I had included this, but I cut it so the article would be tighter: there is a very interesting point in technological history, which was the early days of Intel. Bob Noyce made the decision that we will sell to the government, but we're not going to design chips for the government. The distinction there was that you had guaranteed orders, which was great, but the government would take your IP, and, more important in his mind, there was limited volume. He foresaw, correctly, that designing chips was going to be a very upfront, capital-intensive process: you have to design them, you have to have the equipment, all of which is in the billions of dollars today and back then was in the tens and hundreds of millions. So you need to find the largest possible market, which was the consumer and business market. You design for that, and that will accelerate your improvement and your capabilities so much that you will end up having better devices than the government could have ever requested or made for itself. That is at stake on steroids with AI. I was talking to someone, and they were like, why doesn't the government just get someone to make their own model? Because government contracts are in the single-digit billions, while we're talking hundreds of millions of dollars for the models and hundreds of billions of dollars, approaching a trillion dollars a year, in capex. That is only sustainable and viable if you're selling to everyone. But that introduces these entirely new dynamics. The government built nuclear weapons: it started there, and it started with a lot of assumptions because it was a government program. With AI, we are necessarily, for economic reasons, because of all the upfront costs entailed, starting with private companies, of which the government is one of many customers. And that introduces the assumption that, well, it's a private company with private property rights and all those sorts of things, all of which I want to be true. Again, I don't like how this is going down at all. The point here is to say there's a good reason why it's not going down that way, and there needs to be cognizance that even though this is a private company building a general-purpose model, one that for very good reasons wants to put restrictions on it, and again, I think the surveillance one is a very powerful argument that I agree with, you just need to be aware that yes, the government is a small customer, but the government is also the entity with guns. Like, why do I pay taxes? Because the law says pay taxes.
15:56
Yep.
18:50
No, at the end of the day, I pay taxes because, if you really want to distill it down, if I don't, someone with guns will come to my house and throw me in jail. Right? We don't think about that, but at the end of the day, where do these assumptions and laws and rights flow from? And as long as that is still the case, it needs to be a decision-making factor for these companies.
18:51
How do you think this plays out for Anthropic? It's such a small contract, but it's so important in the zeitgeist. There are a lot of people rallying around Anthropic because of this, and a lot of people pulling away from Anthropic because of this. It feels like there is a business to be built that doesn't work with the government but delivers coding models and knowledge retrieval systems and a whole bunch of really valuable products and technology, and it winds up being fine. But at the same time, you don't want this hairy, adversarial relationship with the government to go on for a long time.
19:14
I would like them to sell to the government and I would like Congress to pass a law addressing these digital surveillance issues.
19:50
Yeah.
19:57
And a lot of people are like, that's unrealistic, which I'm amenable to. But at the end of the day, if you don't have "it's legal or it's not legal" as your guiding standard, the only alternative is that someone has to decide. And the implication of that not being a sufficient justification is that a private executive is deciding. And if AI is what it is, I think that's going to be, I used the word intolerable, and I didn't mean intolerable to me. I meant intolerable to those with power: to have a private executive making those decisions or not. And if we're going to have this very brute analysis that laws flow from power, well, AI is a source of power. And I think this is where the supply chain designation, which, again, I'm not endorsing, is coming from. The goal isn't just, we won't use Anthropic. I do think the goal is to hurt Anthropic: if you're not going to be subservient to us, you're not going to be allowed to build a power base, period. And again, I'm not endorsing all this. It's just not a surprise that this is happening.
19:58
Yeah.
21:22
And this is a real risk factor that has to be considered in all these decisions.
21:23
Putting on my Dario hat, I'm thinking about a different way to achieve the goals with maybe less acrimony. And I threw out this idea that maybe the better solution is: work with the government, but then lobby for a surveillance act.
21:32
Actually, yeah, I mean, I wish the White House would come out and say, yeah, there's a digital surveillance problem, let's work on a bill. Probably another regret I have is putting this all on Anthropic; that was the angle I was concerned about, and it left me fairly open to the critique that this is just defending the White House's approach. Again, I was trying to be at a higher level, saying: look, this is what's going to happen.
21:53
I'm just thinking from the perspective of trying to
22:19
find a middle ground here.
22:21
I'm just thinking from the perspective of: if the White House is this immutable thing, but you are involved in Anthropic, the one piece of advice would be, hey, instead of going in and having this confrontation with the government directly, go and start a political action committee that lobbies for the change you want through the democratic process.
22:23
Yes, that is the ideal process. I understand why people are frustrated and skeptical about this.
22:47
Okay.
22:53
I used to have this debate a lot in the context of antitrust and aggregators. One of my theses about aggregators and antitrust is that the antitrust laws are fundamentally unsuited to dealing with aggregators, because antitrust law has historically been about control of supply, and the power of aggregators flows from control of demand. So you end up with all these solutions that I call pushing on a string: you're just trying to get people to change how they behave, and that doesn't work very well. Like, Google has always been right: competition has always been just a click away. The problem is people aren't clicking. Solutions focused on the supply angle don't work in a world where the supply is there and no one's choosing it. Therefore, my prescription is that you actually need to pass new laws, not try to retrofit these old laws to a new use case where they don't work. And the reaction is always: that's impossible, we can't pass new laws. Okay, but realize the implications of what you're saying. I mean, I saw a tweet, and I didn't like it, so I lost it forever, one of the most infuriating things in the world, but someone was like, I would definitely rather have Dario Amodei make these decisions than... and to this tweeter's credit, he didn't limit it to Trump, because to me this isn't a Trump issue. This is an any-politician issue.
22:54
Yeah.
24:14
He said, I would rather have Amodei making these decisions than whoever comes out of our screwed-up democratic process.
24:15
Yeah.
24:20
And points for the honesty, because that's the actual choice being put forward. You could say Congress isn't going to do anything, therefore Amodei should just decide. But appreciate that that is giving up on the democratic process and saying we should have unelected, unaccountable individuals making weighty decisions. And again, I understand the sentiment; it's hard to imagine Congress passing laws about anything. But just realize that implication is quite fraught.
24:20
Yeah, it's a huge change. I mean, I just spawned in believing in democracy, then came to understand it, studied economics, and had my belief in the American project reinforced throughout my entire career. And now people really are discussing an entirely different world of governance, which is not something people have talked about publicly for a very long time, but it is here for sure.
24:57
Right. And they always come in on these Trojan horses that are eminently defensible. Again, I'm with Anthropic on the digital surveillance point; I've been concerned about it for years, been writing about it for ages. And there's an analogy to the monopoly point: you have all these laws that assume someone has to actually physically go somewhere and tap into a phone line, but if you can do it with computers at scale, suddenly all these assumptions that limited what the government could do magically disappeared, not because the law changed, but because we got computers that can do the job of an individual at scale, infinitely. And AI, again, is going to take that even further. The NSA thing, by the way, and I had to admit this in the article: I was so confused about why the Pentagon was so obsessed with domestic surveillance.
25:21
I know.
26:12
I didn't realize the NSA was part...
26:12
Part of the DoD. John and I had the same moment.
26:14
Yeah, yeah, yeah.
26:17
You just sort of thought about it as an independent agency, like the CIA. But that made a lot of this story make more sense, right? No, exactly.
26:19
Yeah. I feel like a lot of tech people are reading the Fourth Amendment today and understanding some of these pretty basic processes.
26:26
Well, yeah, but the loopholes are massive; I'm not denying it. And it's similar to the chip thing with China. My prescription, for Anthropic to give in, means allowing these massive loopholes to be exploited, and allowing the NSA, allegedly in the service of investigating foreign adversaries, to in the process basically surveil the domestic population, which I think is bad. And the reality is that the nature of trade-offs is you're choosing between multiple bad options. At some point it's like, which team are you signing up for? They both suck.
26:33
What do you think of the messaging around the models themselves not being capable enough to be used in the context the Department of War asked for? Because it felt like Dario was sort of speaking for all frontier labs. He said these technologies broadly are not suitable for these missions just yet. I'm not sure he has all the information on the other side to know about the efficacy; he certainly understands his own models and what's capable at the frontier.
27:24
I mean, yeah, I would assume they're definitely not capable; I think that point is more of a precedent-setting one. But I think Anthropic's position is significantly weaker on that point. At the end of the day, we either trust the military or not to make these sorts of decisions; that's why we have a military. So I have a harder time with that one. And I think the digital surveillance point is so compelling for them, though maybe that's my personal biases.
27:54
Totally.
28:25
I think it's a huge problem. There are various anecdotes, and I hate the reporting on these because you can tell which side each of the leaks is coming from. But, you know, putting forward these hypothetical examples of, oh, you could call us and we'll figure it out... it's like, no, come on, be serious about this. So yeah, I think that's a weaker argument for them, and that's why I focused more on the digital surveillance one, just because I think it is a very compelling argument in favor of the Anthropic position.
28:26
Jordi, anything else?
28:58
Oh, there's a lot more. What are you going to be tracking going forward? Obviously the story...
29:00
Yeah, good luck. Stay strong.
29:07
No, I mean, the OpenAI angle is obviously interesting. I didn't really get into OpenAI; it's hard to parse exactly what's going on. It seems to me they have agreed with the Pentagon that the Pentagon will be limited to lawful capabilities and will make its own judgments about weapon usage, and, as I understand it, OpenAI will on its side be free to stop the model from doing digital surveillance. Which sounds like you're in sort of a jailbreak competition: we're going to agree to have a jailbreak competition with the U.S. government. Which, again, is an example of how fraught this is, because that's probably the good place to come down. Now, there are obviously these dynamics of competing for the same talent base, of being in San Francisco. I think Anthropic has a local advantage, in that most people in the industry are with them, and a national PR problem, in that a lot of folks outside of tech don't understand why tech companies always try to resist helping the US government. So it's kind of an interesting dynamic, where I think OpenAI is in step with the broader public and very much out of step with their talent base and with San Francisco. It's going to be very interesting to see how that plays out.
29:11
Yeah, it's remarkable that Google has stayed out of the fray, given all the Project Maven background and everything. They must be so happy that they're just...
30:45
Well, that's the other thing: this actually goes back to Google, I believe, where Google had the project. I think this is right.
30:54
Yeah, but.
31:03
But I think Google had Project Maven, which their employees objected to, and therefore that went to AWS, and then, some combination of... I think the Pentagon is using Anthropic because that's what's on AWS.
31:04
Because AWS has a higher FedRAMP designation.
31:19
That's right. That's why Anthropic was already allowed for classified content and OpenAI wasn't. Again, I don't know the...
31:22
I've studied it a bit, and it's a wild story. It was similar: AI for the military, the same killer-robot fears. Google was actually a subcontractor on that project, and what they were exposing to the government was TensorFlow APIs that would run on Google hardware. So they weren't actually writing any AI software, but the goal was effectively to classify images from drones in the Middle East: see, that's a car, that's a house. Previously they had Air Force airmen just sitting there clicking, and they were like, okay, we're going to automate that.
31:29
Right.
32:09
But it was still scary: "Don't be evil" working with the government, with the military. And then there was a backlash, they pulled out, and eventually they went back in under a new head of Google Cloud. Yeah.
32:09
I mean, it's hard to... and I speak for myself personally, I obviously have a biased angle because of Taiwan. But in general, there is this very naive view of the world that doesn't understand why militaries are important and necessary, and I think Silicon Valley got itself in a lot of trouble by giving in to this naive mindset that we have no duty to support the military. It's a tension that's been brewing for years: are you an American company, subject to American law, and even beyond law, just morally compelled to support the US military, or not? And there's an equally American idea of moral conscience: I'm able to say no. That's why we have the First Amendment, right? This goes into whether the government can compel a company to do something; it goes back to some of the questions that came up during the first Trump administration. And I've been on both sides of this.
32:21
And this is what he said in the CBS interview: we are a private company, we can choose to sell or not sell whatever we want, there are other providers. He's already sort of making this case.
33:26
Yeah.
33:36
Which again, is a case that I support.
33:38
Yeah.
33:41
But the point here is, there's always the question, with a bubble or whatever: is it different this time?
33:43
Sure.
33:49
And I guess that's sort of the question I'm raising.
33:50
Yep.
33:52
Is AI actually analogous to every other technology that's come along? Or, if it has the potential to be a source of power going forward, it's going to be dealt with as such.
33:53
Yeah, that would make sense. Last question. We'll let you go. How happy should Ted Sarandos be right now?
34:05
I mean, I think he had the killer quote of the last couple of days, where someone was asking, if this is such a jewel and it's so rare, isn't it a problem that you're missing out on it? And he's like, well, have you seen the history of Time Warner? Which I think sounds about right. I'm not sure how the debt-laden entity that Paramount and Warner Bros. would form is going to work out; who else are content companies going to sell to? I feel like they've been spooked by YouTube a little bit and they felt a need to push forward.
34:14
Yeah.
34:46
To bring the future forward. That was not allowed to happen, but that means their original plan is, I think, still in place. So probably pretty happy, all things considered. I'm going to say it's great.
34:47
Well, I'm excited to get back to Netflix coverage and more anodyne topics.
35:01
Yeah, remember, it was on Cheeky Pint that you were talking about getting sucked into the Idol, and here we are.
35:06
So I put that quote at the beginning of my article: you may not be interested in politics, but politics has an interest in you. That was about Anthropic, and it was also about me.
35:13
Yes.
35:20
What did you do?
35:21
Welcome to 2026. Well, we thank you for taking the time to come chat with us. Great to see you, and fantastic article. We appreciate you, Ben. Talk to you soon.
35:22
Thank you. Have a great day.
35:31
Thanks for listening to this episode of the A16Z podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. This information is for educational purposes only and is not a recommendation to buy, hold, or sell any investment or financial product. This podcast has been produced by a third party and may include paid promotional advertisements, other company references, and individuals unaffiliated with A16Z. Such advertisements, companies, and individuals are not endorsed by AH Capital Management, LLC, A16Z, or any of its affiliates. Information is from sources deemed reliable on the date of publication, but A16Z does not guarantee its accuracy.
35:34