Pentagon Insider: What's Next For Anthropic and The Department of War — With Michael Horowitz
The episode explores the breakdown between Anthropic and the Pentagon over contract language regarding AI use restrictions, featuring insights from former Pentagon AI policy expert Michael Horowitz. The dispute escalated from a trust breakdown to Anthropic being designated a supply chain risk, despite their technology still being used in military operations.
- The Anthropic-Pentagon dispute stems from personality conflicts and trust breakdown rather than substantive policy disagreements over current AI applications
- Pentagon views AI vendors like traditional weapons suppliers who shouldn't dictate usage terms, while AI companies see their technology as ongoing services requiring oversight
- Current military AI applications are primarily for intelligence analysis and decision support, not autonomous targeting or mass surveillance as commonly feared
- The supply chain risk designation appears legally questionable since it's typically reserved for foreign adversaries, not domestic companies with contract disputes
- Government agencies are still in the 'chatbot phase' of AI adoption, lagging behind commercial agentic AI developments by 1-2 years
"This is about personalities and politics masquerading as a policy dispute"
"The feeling inside the Department of War right now is they want to destroy Anthropic"
"Crushing one of the most innovative companies in the world and salting the earth is not good for American innovation"
"When you release stuff that doesn't work well in the military, people die"
"Nobody wants America's military systems to work effectively more than the war fighter, because systems that aren't reliable don't work"
Where do Anthropic and the Department of War go from here now that their relationship's exploded? Let's talk about it with an actual expert who's designed AI policy for the Pentagon, especially regarding weapons systems. That's coming up right after this.
0:14
to Big Technology Podcast, a show for cool-headed and nuanced conversation of the tech world and beyond. Well, many of you have asked for an expert who's worked intricately on matters that might involve the Anthropic-Pentagon dust-up, and we definitely have the right person for you today. Professor Michael Horowitz is here with us. He's a Professor of Political Science and Economics at the University of Pennsylvania. He's also a Senior Fellow for Technology and Innovation at the Council on Foreign Relations. And importantly, he was the Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities at the Department of Defense. And as I said in the intro, he worked on policy at the Pentagon, especially on weapon systems. So this is going to be a discussion that will take you deep inside what might actually be the mindset of the Pentagon and where we will end up with this dust-up with Anthropic. Professor, great to see you. Welcome to the show.
1:16
Thank you so much for having me. Looking forward to the conversation.
2:12
We have been surmising what might actually be the meat of the matter between Anthropic and the Pentagon. And I've gone back and forth. On Friday, I thought maybe it was a marketing move by Anthropic. Then it became clear that it's a little bit more serious than that, now that they've been deemed a supply chain risk. And our audience is basically centered around three different potential scenarios. I want to throw them at you and see which one you think is closest to the truth. And by the way, what happened? For those who are just tuning in, although I'm sure many of you are caught up: Anthropic and the Department of War had this contract where the Department of War would use their technology, and Anthropic was looking for a carve-out saying, we don't want our technology used for mass surveillance or autonomous weapons. And then that blew up. The Pentagon not only canceled the contract, but declared them a supply chain risk, which we'll get into. So here are my three options for what's going on in this conflict. One: maybe it's just a culture clash over really inconsequential details and an ego blow-up. Two: is it the Anthropic CEO, Dario Amodei, valiantly standing up against mass surveillance and the potential of mass surveillance through AI? Or three: is the Department of War valiantly pushing back against a private company dictating to it how to run wars? What do you think is closest to the truth?
2:16
I mean, there's probably a little column A, a little column B, a little column C going on, fundamentally. But to me, this is about personalities and politics masquerading as a policy dispute, although it raises really important policy issues. And let me tell you what I mean by that. If you look at the relationship between Anthropic and the Pentagon, Anthropic was the first frontier AI lab willing to do classified work to support American national security. So starting right there, Anthropic was ready to be behind the scenes with the Pentagon in a way that other frontier AI labs weren't ready to do yet. And there was no dispute between Anthropic and the Pentagon about any current projects that Anthropic was doing. It wasn't like the Pentagon asked Anthropic to do something and Anthropic said no or had hesitations. It also seems as though there were not any upcoming projects that the Pentagon was going to ask Anthropic to do that Anthropic had questions or concerns about. It seems like this kind of started after the Maduro operation, when the United States plucked the leader of Venezuela from that nation and brought him back to the United States, and somebody from Anthropic basically called somebody from Palantir and said, hey, was our tech involved there? And that's because the way that Anthropic technology is often integrated within the Pentagon is through a Palantir product called Maven Smart System. So Anthropic calls up Palantir and asks, hey, was our tech used? Not saying it was bad. And the Pentagon finds out and is offended that Anthropic even asked. And that was essentially the trigger behind this. So that, combined with the fact that there was no actual current thing under dispute, makes me think that this is at least as much about personalities and politics as it is about substantive disagreements.
3:43
So how do you get from there, then, to this dispute over the language around surveillance? I mean, it was really one word, right? The Department of War wanted Anthropic to agree to language in the contract that said they wouldn't use the technology for mass surveillance, "consistent with" some laws that are already on the books. And Anthropic wanted that to be "pursuant to" some laws on the books. And some people say that's a very, very big difference; others say it's not a big difference. But how do you get from point A to point B, where Anthropic says, how is our technology being used, to all of a sudden a litigation of a single word in a contract that's not even related to the Maduro thing?
5:45
Totally not related at all. I think it may reflect that the Pentagon updated its artificial intelligence policy about a month or so ago. And one of the things it did was say that all future contracts it signed with any AI vendor, so not even necessarily just a frontier AI lab, would have to follow a, quote, "all lawful uses" provision, meaning the vendor was comfortable with their technology being used for, wait for it, all lawful uses. Now, meanwhile, last summer, Anthropic and the Pentagon signed a deal that the Department of War was happy to sign, which contained these provisions that made Anthropic comfortable surrounding the use of its technology. And so then the Pentagon updates its policy and starts talking essentially about renegotiating this contract, more or less. And then this Maduro trigger essentially happens. And what you end up with, I think, is fundamentally a breakdown in trust between Anthropic and the Pentagon, where the Pentagon decided that it didn't trust Anthropic to be there for important national security use cases (side note, we can talk about Iran in a couple of minutes), and Anthropic didn't trust that the Pentagon would use its technology responsibly. And the mass surveillance debate in some ways is a good illustration of this. The Pentagon's been very clear that it follows the law and that mass surveillance, not surprisingly, violates the Fourth Amendment. That's not a thing the Pentagon thinks anybody should be worried about the Pentagon doing; how much you trust the Pentagon in general might shape your views about that. And so they think that Anthropic's provision on that point is unnecessary because it's already covered, essentially as a lesser included, in the obligations that the Pentagon already has.
Anthropic wants these assurances because they're worried about the way that advances in artificial intelligence could lead to things like de-anonymization of anonymized data and create real mass surveillance issues, including for American citizens. And so you have a conflict there. And the crux of that conflict in some ways is that the Pentagon is thinking about artificial intelligence vendors and services the same way it thinks about buying weapons. When, say, Lockheed sells an F-35 aircraft or a missile to the Pentagon, Lockheed doesn't get to tell the Pentagon, oh, you can only use it against this country, but not that country. And so from the Pentagon's perspective, what Anthropic is asking for is unprecedented. From Anthropic's perspective, AI is a service. It's a constantly updating technology that they need to be involved in. It's not just like selling a missile to the Pentagon. And so that's a bit of, I think, what's going on behind the scenes.
6:29
So I just want to clarify here, and this is important: when we're talking about this dispute, we are not talking about Anthropic being used, let's say, in strikes, like to pinpoint autonomous strikes on Iran. And we're not talking about the Department of War wanting to start creating a surveillance database from now on. Right? This is simply language that was surfaced after the Maduro thing. And it's almost a dispute that seems to have, I don't want to say, come from nowhere, but it's not like critical war-fighting capabilities are being discussed now, nor are these programs in the works.
9:34
I think there are a couple of different ways to think about this. I'm not sure that the dispute necessarily came from nowhere. Anthropic has been very public in its criticism of some other Trump administration activities unrelated to defense, such as easing up on AI export controls with regard to China. And so one wonders, although who knows, whether in some ways there were maybe some bad feelings between Anthropic and the White House that could have played a role here. But shifting back to the defense side of the house: I think there are reasons why people may want to worry, from my personal perspective, about artificial intelligence and the way advances in AI could enable mass surveillance. I'm not sure the Pentagon is the right locus for that concern, fundamentally. I might worry about other departments and agencies first in that context. And the interesting thing about Anthropic's other objection, surrounding autonomous weapon systems, is the statement that Anthropic's leadership made on Thursday evening suggesting they actually don't have a problem with autonomous weapon systems; they just think their tech isn't ready for it yet. And let me tell you, as the person that drafted the Pentagon's policy on autonomous weapon systems, Anthropic is not wrong there. If you were going to train an autonomous weapon system, the kind of thing you would want that weapon system to do is generally not the things that people fear the most, which is, can this algorithm tell whether an individual is a legal combatant on the battlefield? That would be super hard; we can talk about that more if you want. What you're generally going to be doing is training an algorithm to do something like, say, target Russian tanks or Chinese fighters.
Something very, very specific, with bespoke data. And often the kinds of algorithms that you're going to be most likely to use in that context are much more deterministic than, say, Claude, trained on the slop of the Internet. And so Anthropic is not wrong that their tech isn't ready for prime time for autonomous weapon systems. And they even offered to help the Pentagon get their tech ready for that kind of use case in the future. Which makes it all the more puzzling how this escalated.
10:16
Okay. And by the way, you're bringing up an interesting perspective here, and this is one of the reasons why I was so thrilled to have you on the show: you have actual knowledge of how this technology is being used. Which, by the way, up until this point, at least for me, has been sort of this big cloud, because we don't fully know exactly what's inside the Pentagon. And there's been talk about how, despite this dispute, the Pentagon still used Anthropic tools in the Iran strike. Well, what does that mean? Some people have implied that Claude is out there targeting combatants on the Iranian side. Or are they just querying some databases and then going to triple-check after Claude makes some assumption there? And maybe that could be significant. So I'd love to turn it to you and just get your perspective: how are Anthropic's tools being used inside the Department of War?
12:43
Great question. Anthropic's tools are being used in a bunch of different ways inside the Department of War. And what we're focused on most now, in some ways, are the uses in the context of the Iran operation, because that, or something like that, is probably most illustrative for thinking this through. And on the classified side, a tool like Anthropic's is going to be, as I mentioned before, plugged into another tool called Maven Smart System. Imagine essentially a dashboard designed to help a combatant commander, the person in charge of all US military forces in the Middle East or all US military forces in the Indo-Pacific, understand what's going on in the region and understand all the different kinds of things happening: processing unclassified data feeds, classified data feeds, putting all that information together, trying to help that commander make good decisions with regard to American forces. And Claude is one of many different inputs, essentially, into that system. And there's been reporting suggesting there are a couple of different ways that something like Claude could be used in this context. One is just querying public databases, querying public information: what are the most important news services in Iran? What is the chatter like in Iranian media right now? All of those kinds of things. Claude could also be doing things like helping with simulation, helping more rapidly generate simulations of what might happen in the context of an attack. A thing that Claude is definitively not doing, at least as far as I know, or I would be genuinely shocked, is autonomous targeting on the battlefield today.
I would be astounded if that was a Claude-specific task, again, for reasons that have to do with technological readiness as much as anything else. And here, I think, is important context. There's often a lot of concern that the Pentagon is going to take new tools like AI and use them inappropriately, be sort of overly aggressive with their implementation. And don't get me wrong, accidents will happen when you integrate new technologies. That happens all the time. That's happened for hundreds of years. But nobody wants America's military systems to work effectively more than the war fighter, because systems that aren't reliable don't work. And systems that don't work get you killed. So nobody wants our tools to be effective more than the war fighters. And so the US military has actually been very conservative in some ways when it comes to the integration of AI in general, let alone a tool like Claude. And so I have no doubt that any information that's coming out of Claude in this context is going through layers of review by humans prior to it influencing anything happening close to the battlefield.
13:39
How much of a leg up do you think using Claude would give a military? I mean, this is sort of going to the importance of it in the battle. Like, summarizing media clips from Iran seems like something technology's been able to do for a long time. Maybe, and I'm curious to hear your perspective, here's one example: it's been reported that the agencies had traffic cameras throughout Tehran hacked and were able to see movements. But is that something that you would use, like, a large language model for, or just, you know, a more traditional computer vision system?
17:00
Right. I guess you could, but you could do it with computer vision, as you said. And the military is often pretty ruthless about using the best tool for the job. And in this case you have tools, especially computer vision tools, less sophisticated AI tools in some ways, that have been proven out over years and are able to do a bunch of these tasks. So might you throw Claude at that? In some ways, maybe. But you wouldn't throw Claude at that instead of using computer vision; you might throw Claude at it to see how those things compare to each other, perhaps, in what the assessment looks like. But honestly, this is all speculation in some ways. And one thing I think it's important for people to keep in mind is that this is filtered through a platform like Maven Smart System, and all of these tools, whether Maven Smart System or anything else, are always more user-intensive on the back end than it looks in the movies and on television. For the military, they're always a little clunkier, always a little bit more user-intensive. So it's not like humans are being cut out of this process. And note that the use of Claude that we're talking about in this context is what we would call in military parlance more operational, more looking at what's happening on the battlefield. It's a decision aid, essentially, for a commander on the battlefield, which is neither the mass surveillance objection that Anthropic had nor anything involving an autonomous weapon system.
17:36
Right. Yeah. Just knowing what I know about these LLMs, to me the guess was always, I mean, maybe it was an educated guess, that this was tangential. Maybe useful, but largely tangential versus core to what the military is doing today. Seems like you mostly agree with that.
19:26
Yeah, 100%. I mean, it wouldn't even surprise me if Claude's being used in a way that's a little more experimental. One of the other things behind the scenes here is that, because this conflict is with Iran, it's US Central Command that's running the show for the United States military. And US Central Command, of the various US combatant commands around the world, has been arguably the most forward-leaning when it comes to experimenting and prototyping and innovation. They've been the most excited in some ways to say, let's see what we can do with emerging capabilities. I worked with them a lot with my old hat on in the Pentagon, and I have no doubt that they are taking lots of things out for a test drive, so to speak, including but not limited to Claude, even while they're keeping it on the straight and narrow and using the more proven capabilities to make the big decisions.
19:44
Right. And I think Dario, I mean, you referenced it, Dario said, we don't believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. That seems very reasonable to me. We were talking about it on the show, like whether you let the LLM take the shot. And for anyone who's used these tools, Claude Code is an amazing tool. You can build software with it without knowing how to code. But the amount of time you spend debugging is almost certainly longer than the amount of time you spend giving prompts. So it seems like a reasonable objection from Dario there.
20:46
All right. Public service announcement.
21:21
Okay.
21:23
The phrase "fully autonomous weapons." If there's anything I wish Anthropic would stop doing, it's using the phrase "fully autonomous weapons." Here's why: it's not a term of art from the perspective of the Pentagon. And so when Dario says, you know, we don't want to do fully autonomous weapons like this or like that, it frankly can be confusing in some ways for some of the defense community, because the terminology in US policy is "autonomous weapon systems." And there's a difference between those, and here's what it is. The US military has been using autonomous weapon systems for more than 40 years. I think people really underestimate in some ways the degree of autonomy built into modern weapon systems, even in a world before what we would call AI today, a good old-fashioned AI kind of world. Let me give you two examples. One is something like a homing munition or a radar-guided munition, where somebody may believe that there's a radar over the horizon and they fire a missile at that radar. There's no human supervision of that missile after it's launched. It just turns on a seeker and it goes and hits the radar. What if that radar is on top of a school? What if it's on top of a hospital? You don't know; it's gone. The second example is something called the Close-In Weapon System, which is a weapon system that protects ships and some military bases from essentially massed attacks. So if there are, like, 10 missiles coming in and you couldn't even point and click at all of them if you were an operator, you can flip on essentially an algorithm that automatically detects and shoots at those. The US military has been using that system since about 1980, as have dozens of militaries around the world. And so we need to be careful when we talk about autonomous weapon systems.
And to be clear about what is the thing that we are worried about, and what is the thing that we think the technology is ready for or not ready for. And as I said before, I think Anthropic is absolutely right that their tech isn't ready for prime time and incorporation at the edge in an autonomous weapon system. Also, if you think about the compute at the edge, how would you even fit that into a missile? I don't know. And if you want an autonomous weapon system, there are so many ways you would do that that don't involve LLMs, essentially. But the public service announcement: the phrase "autonomous weapon system" is the appropriate term of art. An autonomous weapon system is a weapon system that, after activation, selects and engages targets without further human intervention. Period. Different people have different definitions, but that is the way the Pentagon, at least, defines what an autonomous weapon system is.
21:24
Can I tell you where I think so much of the confusion is coming from, now that you explain this? I've worked a couple of years in the government, and you talked about the technology. We both know that government technology tends to lag behind commercial use cases by a couple of years.
24:20
Just a little bit, right?
24:41
Just a little bit. Okay. The AI industry has gone through two phases over the past year and a half. There was a chatbot phase of AI, right? And that also includes content synthesis, summarization, these types of things. And now they're moving into an agentic moment, right? I think there is a misconception that the government is already on agentic, where the technology makes its own decisions. But really what I think I'm hearing from you is that it's in the chatbot phase. It's still a year or two behind commercial. And this worry about the technology getting too agentic is sort of misplaced because of where the government is.
24:42
I think that that's probably broadly right. Although, frankly, part of what Anthropic was trying to do in doing classified work with the Pentagon in the first place was fix that: getting in behind the scenes and ensuring that America's war fighters had access to things closer to the cutting edge. But another thing to keep in mind here is the way that testing and evaluation standards, or what the military calls T&E standards, differ from what you would need to toss a piece of technology out in the commercial market. Imagine you're releasing either a last-gen chatbot kind of system or a current-gen agentic system into the marketplace. As a company, if there are errors and problems and whatever, those are embarrassing, but you fix them on the fly. And frankly, getting there first can get you market share. There are all sorts of economic reasons why a for-profit company might do that. When you release stuff that doesn't work well in the military, people die. And so the incentive structure is very different, and the testing and evaluation of these systems is thus very different in a military context. The level of reliability and cybersecurity, et cetera, that you need to hit for something to be fieldable is very different. So people should, at least in theory, if the system's working properly, be reassured on that front.
25:23
Exactly. Okay, I want to talk now about the government's perspective and what this supply chain risk designation might do to Anthropic. Let's do that right after this.

And we're back here on Big Technology Podcast with Professor Michael Horowitz of the University of Pennsylvania, also the former Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities. All right, let's talk a little bit about the government's perspective. Is there validity in the government's perspective of telling Anthropic: you might have these thoughts about how to use your technology, but you don't tell us what to do; we should be trusted to be the ones who determine that, not you?
26:46
I think the government has a point in some elements here. And let me tell you what I mean. You know, I hinted at this before. The government's used to buying technology as hardware: when the government buys a fighter jet or a submarine or a missile or something, the companies that build those technologies don't tell the government how to use them. The assumption is that the government will follow the law when it uses those technologies, since otherwise, kind of, what are we doing here? And so the government viewed these requests from Anthropic, and their refusal to yield on them, as essentially challenging the Pentagon's authority. And this is, I think, part of where the culture and personality clash that we were talking about before comes from. Because the Pentagon's saying, hey, we follow the rules. That is a thing we definitively do. You don't need to worry that we won't follow US law. You don't need to worry that we will go do crazy things that the technology isn't ready for. We have law and policy and process designed to ensure that that doesn't happen. We don't let other vendors tell us we can use their tech in scenario X but not scenario Y. So what you're asking for is unreasonable. And I understand, from the government's perspective, why they might say something like that. That's also why, as I suggested before, and just to start us off in this part of the conversation, I think what we're really seeing here in some ways is a breakdown in trust.
29:29
Exactly. And so the question is, what happens next? In some ways, I do believe that if you're a government and you think you can't trust your technology vendor, you should probably swap them out. But that's not where the government stopped here. What they did was deem Anthropic a supply chain risk, which means the company cannot work with US government agencies in defense or war. Secretary Hegseth went further. He said, effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. That includes Amazon, by the way, which is a US government contractor and also hosts Anthropic's models. I have this from a source with knowledge of the department's thinking: the feeling inside the Department of War right now is they want to destroy Anthropic. What do you think about this reaction?
31:16
I have a lot of thoughts about this. Let me start with the bottom line, which is that crushing one of the most innovative companies in the world and salting the earth is not good for American innovation or the American economy. Dear God, let's hope they work it out. But backing up a little bit: one can think the Pentagon's view of this is reasonable or unreasonable, but it is what it is. And in a normal market view of this, the Pentagon would do one of two things. Either it would say, we'll work with Anthropic on these use cases but not those, and, reminder, there was no dispute about a current or planned future use, and if we want to do those in the future, we'd find another AI vendor, whether that's xAI or OpenAI or somebody else. Or the government could have said, you know what, it's not worth it for us to do business with Anthropic. Cancel the contract, we'll off-ramp them, and we'll bring xAI or OpenAI or Meta or somebody else on to address this. That's obviously not what happened. It's not just that the government has labeled Anthropic a supply chain risk; in some ways it's even more baffling than that. The supply chain risk designation is for companies believed to present a clear danger to US national security. Examples of companies labeled supply chain risks are Huawei and other Chinese companies, where the fear is that if a US government agency worked with them, they might insert backdoors or vulnerabilities that could place US national security at risk. That's not really what we're talking about here. And so a lot of people have wondered whether that designation would hold up in court.
It's also not clear that the supply chain designation has actually been delivered to Anthropic yet. It hadn't as of about a day ago, though it's still been threatened. I'm sure Anthropic will be in court as soon as they get the letter and the actual designation. And it was striking, no pun intended, that less than 24 hours after the supply chain designation, the US government was using Anthropic's technology in the context of Operation Epic Fury against Iran. How could they really be a supply chain risk if you're using them in ongoing military operations? But the government has gone further. On the one hand, they've said they could label, or are labeling, Anthropic a supply chain risk. They've also said they're considering using the Defense Production Act to compel Anthropic to work on use cases with the government that Anthropic might not want to. The Defense Production Act, or DPA, was designed to ensure that the government was first in line with, say, vehicle manufacturers if there was a war going on and you needed more tanks; it was not designed for this kind of environment. That the government is weighing both the Defense Production Act and the supply chain designation, which point in opposite directions, one says you can't work with the government and one says you have to work with the government, points to some of the confusion here.
32:19
You've worked within government agencies. You've worked within the Department of Defense. This is from Reuters: "State Department switches to OpenAI as US agencies start phasing out Anthropic's." The article says leaders not only at the Department of State but also at Treasury and Health and Human Services have directed their employees to abandon Anthropic's chatbot platform, Claude, on orders from President Trump; they joined the US military in dropping use of the platform. I'd love to get your perspective on the speed at which governments move, and how governments evaluate certain technologies, because you've been inside one. What sort of damage do you think this has already done to Anthropic, now that we're seeing so many agencies move off?
35:42
There are a couple of different pieces here, I would say. Again, I'm not a lawyer, but a lot of people seem to think that the designation won't stand up in court.
36:29
Right, but even so.
36:41
Oh, absolutely. But it matters insofar as it's not just that Anthropic can't work with the US government; it would also affect Anthropic working with AWS's government business. It's not, in theory, a death blow to working with AWS or something like that. But from the government agency side, what this implies to me is that LLM integration in US government departments and agencies is still behind the power curve, and behind where, frankly, somebody like me would want it to be. It was much announced over the last year that all the frontier AI labs made their technologies available to the federal government, either for free or for a penny or a dollar, trying to ramp up adoption. So government employees at these agencies have in theory had access to multiple of these for a while, and are choosing whichever ones they want to use for various tasks. It sounds to me like on the unclassified side, people are getting instructions: don't use Claude, use something else instead. That's pretty fast-moving, frankly, for the government. But it was notable in both the Trump announcement and the Hegseth announcement that they laid out this six-month off-ramp period for real national security use cases, in part because those rely on Anthropic's technology right now: Anthropic is the only vendor behind the curtain in a classified environment. So I think what we're seeing is a real bifurcation, where for unclassified use cases you essentially flip the switch, use ChatGPT or Grok instead, and frankly, if there's a deal in the future, they'll just flip back to using Claude if they want.
On the classified side, it's going to be a much harder slog because of the integration of Claude and the fact that it was the first mover, because Anthropic was the first company willing to do that kind of work with the defense establishment.
36:43
Then the question is also what this means for companies thinking about working with the government, given that you could potentially be declared a supply chain risk. This is from Dean Ball, who I believe worked on AI policy in the Trump administration: "Even in the narrowest supply chain risk designation, the government has still said that they will treat you like a foreign adversary. Indeed, they will treat you in some ways worse than a foreign adversary, simply for refusing to capitulate to their terms of business, simply for having different ideas, expressing those ideas in speech, and actualizing that speech in decisions about how to deploy and not deploy one's property. Each one of these is fundamental to our Republic. Each was assaulted by the Department of War last week." Basically, the worry is that companies will be wary of working with the Department of War if this is what could happen to you. I'm less worried about that, but I would love to hear your perspective as someone who's been on the inside.
38:59
I mean, this is a rough look for a Pentagon that has worked really hard, across multiple administrations and in a bipartisan way, to build ties with Silicon Valley across the board. And obviously this administration, the Trump administration, has some deep ties with Silicon Valley in some places and less deep ties in others. But certainly the notion that you can sign a contract with the government, they might ask you to change that contract, and if you don't agree they might attempt to destroy you, changes the risk calculus for a company getting involved with the Pentagon in the first place. Because, going back to something we were talking about before, when it comes to the use cases that Anthropic may be concerned about, the thing to remember is that if you do business with the Pentagon, the business of the Pentagon is war. So you shouldn't be surprised that the Pentagon wants to do all the war things with your technology, because that's the thing the Pentagon does. But the idea that if you have a contract dispute with the Pentagon, they might attempt to annihilate your entire business rather than just cancel the contract, I do think could lead to questions for companies making a marginal choice about whether they wish to work with the government. That being said, some of the other frontier AI labs, like xAI and OpenAI, are already willing to work on the classified side. And Sam Altman is attempting to broker a peace, essentially, and create a deal that perhaps Anthropic could join as well. Now, even if he succeeds at that, will Anthropic then walk through that door?
I mean, obviously there's beef between OpenAI and Anthropic, as well as between OpenAI and xAI, and there are other vendors that clearly wish to do these things. But it's also true that America's warfighters have said very clearly, through what we see in Operation Epic Fury, that they think Anthropic is delivering a good product and they wish to use it.
39:53
Right. I think, and I'm curious to hear your perspective on this, that this does do long-term damage to Anthropic, because even if the supply chain risk designation never actually reaches them or is overruled, public sector contractors will just, in the back of their minds, think twice before rolling out Anthropic technology in the future.
42:02
I don't know; it kind of depends. I could imagine that scenario: even if the supply chain designation gets struck down, if all the contracts are canceled and after six months the Pentagon is using other things and Anthropic never gets back into that business, then one could imagine that occurring. Although, in the context of what we end up seeing in the midterm elections or a future presidential election, the politics could change in a way that rejiggers this. But it's also possible, and maybe this is just wishful thinking, frankly, from a national security perspective, that this six-month off-ramp period could allow for some bargaining to occur.
42:25
We've seen that with TikTok; the six-month deadline never actually happened.
43:11
Yeah, exactly. And the fact that the supply chain letter wasn't delivered on day one made me wonder: is there an opportunity for bargaining here? Who knows? Another challenge is that if there's any organization in the US government that is full send, all offense, all the time, it's Secretary Hegseth's Pentagon. So it would be challenging, I think, to figure out what the win-win looks like for both Anthropic and the Pentagon from a public perspective. But there's probably a lot of utility in that, and it wouldn't surprise me at all if there are negotiations, maybe they take a couple weeks to start, or maybe they're happening right now, that lead to some kind of deal eventually.
43:14
Okay, last question for you. You're someone who's thought a lot about autonomous warfare, so I don't want to end this episode without asking: how do you think AI is going to change warfare? I know it's not a couple-minute answer, but.
44:03
Yeah. How much time you got?
44:20
I mean, as long as you have, we have. But I'm just curious to hear your perspective on where things go from here.
44:22
So I think about AI as a general purpose technology. It's not a widget, it's not a weapon; it's a general purpose technology. Which means the analogies, if we want to imagine the impact of AI on militaries, or on the balance of power more broadly, are other general purpose technologies: think electricity, the combustion engine, the airplane, computing, those kinds of things. And there are three different buckets I would put the impact of AI in. One is a bucket analogous to the commercial world: the military's use of AI for payroll processing, logistics, acquisition, paperwork. Lord knows the military could be more efficient from that perspective, having spent a couple of years recently in the Pentagon bureaucracy. So there are potentially massive opportunities there, just at the bare minimum. The second bucket is in that intelligence, surveillance, and reconnaissance category, bleeding into something like the decision support we were talking about before, where you already had things like computer vision algorithms helping the military and intelligence agencies process all the data they get about the world and separate the signal from the noise. There's a real opportunity with some of those LLMs, if their reliability can be improved, to make that happen much faster and much more accurately. Because while people worry about errors from AI in this context, and it's often the AI industry, frankly, speculating about potential errors and accidents from AI, humans are definitely error-prone too, which we've seen all the time.
Think about 1999, for example, in the context of the Kosovo bombing campaign, where the US by accident bombed the Chinese embassy. A computer vision algorithm or an LLM might have caught that. There's a lot of opportunity in that second bucket for more effectiveness, and essentially for buying decision-makers time, because we tend to think in the military context, and this is a behavioral science insight, not a military insight, that the more time people have to make decisions, generally the better the decisions they're going to make. So that's another way AI can be helpful. Then the third bucket is close to, or on, the battlefield. Autonomous weapons systems, frankly, could be hugely important for militaries, especially if you imagine future conflicts with great power adversaries, say a US-China conflict. One thing people worry about in that kind of conflict is losing access to satellites, losing access to space. In what the military would call a degraded or denied communication environment, something like an autonomous weapon system will be essential for lots of different kinds of weapons to be able to operate, along with algorithmic operational planning to help commanders. That may be part of how a military like the United States can still compete and win in the worst-case scenario. So there's a range of different uses of artificial intelligence. And what I would leave you with, at the macro level, is that I think we're talking about enormous consequences for militaries. This is one dimension of that macro US-China AI competition.
It's not the only dimension, certainly, but when we get into it, I would encourage people to think about AI in the military in the context of specific use cases rather than as a monolithic technology, because the kinds of AI you would use, and what you would use them for, will vary a lot depending on the use case.
44:29
So, like autonomous robot wars. Not exactly around the corner.
48:17
I mean, I'm ready for our robot overlords, like I have been for years. Just not in the short term.
48:21
Okay. All right, Michael, thank you so much for coming on. This was so illuminating and definitely gave me a deeper understanding of what's going on than any conversation I've had previously. So thank you so much for coming on the show.
48:29
Thanks for having me. Happy to chat.
48:41
Anytime. Awesome. All right, we'll take you up on it. All right, everybody, thank you for listening and watching. We'll be back on Friday breaking down the week's news. Until then, we'll see you next time on Big Technology Podcast.