Hey everybody, this is Andrew Egger with The Bulwark. It has been an absolutely insane weekend at the Pentagon. Obviously, you know, the breaking news right now is that we may or may not now be at war with Iran. I guess we're going to find out a lot more about that. We did a live stream about that earlier this morning. We're here to talk about something completely different happening at the Pentagon. A very crazy story. We thought it was going to be the craziest story out of the military, certainly this week, until it was superseded by events, but maybe a story with just as important long-term repercussions. We're going to find out. We're talking about the Department of Defense's spat, which has now sort of blown up into an all-out war against the AI company Anthropic, perhaps better known to you guys out there as the producer of the chatbot Claude. They have gone basically nuclear in this fight with this AI producer that was formerly a top defense contractor for the DoD. And I have just been following this story in my newsletter, Morning Shots, for the last week or so, but it's obviously a much longer story that's just kind of coming to a head now. So I'm very glad to be joined by Hayden Field to talk all through this. She is a senior AI reporter for The Verge. She knows all this stuff backwards and forwards. So Hayden, thanks for coming on to talk about this story today.

Yeah, thanks so much for having me. It's been a crazy week for sure.

Oh my lord. Okay, so let's just start. Maybe you can give us just kind of the pre-existing picture. Like two weeks ago, we were at a very different status quo. There were, like, inklings that this fight was happening. But can you just kind of walk us through how things have fallen apart between Pete Hegseth and Anthropic over the last 10 days or so?

Absolutely. So it all really starts back on January 9th, when Hegseth sent a memo saying, you know what, I want to renegotiate all of our existing AI contracts to be for any lawful use. Before that, you know, AI companies could and did put their own terms into these contracts, like: you can't use it for this, or if you're going to use it for this, you have to do X, Y, and Z. He was saying, no, I want to take all that out and just remove all the barriers. So, you know, obviously that kicked off a bunch of negotiations. Now, apparently, my sources told me they were in pretty good faith for a while. Then, about 10 days ago, like you said, things seemed to turn pretty ugly. I think that negotiations had stalled a bit. Emil Michael had been tweeting a lot of stuff. There were a lot of public social media posts kind of being traded back and forth. There were public statements being traded back and forth, lots of insults; things were getting more inflammatory. Anthropic was sticking to its guns, saying, hey, you know, we're not okay with domestic mass surveillance and we're not okay with lethal autonomous weapons, which basically means AI being used to kill people with no human oversight. So those were their two red lines. They were like, we're not budging on these two things. That was already in our existing contract with you guys. So let's just keep it the same, or let's try to find a compromise here. It seemed that the Pentagon really wanted any lawful use, and they were not okay with any type of exception to that. So yeah, things have gotten a bit uglier. I've been covering this, and yeah, every day something's different. And on Friday, honestly, every 30 minutes something changed.
For a while, we thought that they were going to reach a deal, a compromise of some sort. Even when I spoke with people at Anthropic that day, pretty late in the afternoon, they thought things might have gone differently. But 5 p.m. rolled around, the deadline that they had been given to either acquiesce or else. And then a social media post went out saying that they were going to be labeled a supply chain risk, which is strange, because that's a label that usually would never go to a U.S. company. It's usually reserved for, like, foreign adversary companies or ones that might have some type of cybersecurity risk. Usually companies in China, for example, are on that list. But yeah, never a U.S. company, that we know of, so far. So it was an interesting thing. It raised a lot of red flags. I heard from people in both parties who were worried about, you know, if you disagree with the Trump administration, could you just be labeled a supply chain risk, just randomly? Those were the kinds of questions that were being asked. And basically what that means, we're not 100% sure what the exact granular parts of this would entail, but essentially it looks like if you work with Anthropic and you're a defense contractor or something, you'd need to provide a version of your services without any involvement with Anthropic to the Pentagon or the DoD. So that's going to impact Anthropic's business a lot. Not their consumer side, but they do a lot of enterprise business too, a lot of military business. So yeah, it's definitely going to be an interesting couple of days. I'm sure we'll both be working all weekend on the updates.

It has been hard for me really to process just how surreal this whole story has been, because one of the defining characteristics of Anthropic, amid sort of the leading AI labs, as far as their relationship with the government is concerned, just in the last couple of years, has been that they have been at the forefront of leading the charge to integrate with the Department of Defense. To say, yeah, we think it's great for our models to be used for these national security purposes. We think it's very pro-democracy for the American government to be able to deploy these things against authoritarian regimes around the world. They were previously the only lab that had a contract to deploy these AI models in classified settings, period. Like, xAI didn't have one, OpenAI didn't have one, Google didn't have one. And these things had gotten pretty integrated into the Pentagon's war planning and the work they're doing behind the scenes there. They were reportedly used, for instance, when the Pentagon went in and got Nicolás Maduro, the dictator of Venezuela, a few months ago now. And so, because of how tightly they have been integrated so far, it seemed like this was such an edge-case dispute, right? Like, about hypothetical potential future lethal autonomous weapons. I mean, maybe, correct me if I'm wrong here, but my read on this, and I think a lot of my sources' read on this, was that Hegseth's threats of labeling them a supply chain risk over all of this were so over the top and histrionic that a lot of people just thought, well, this is just the biggest stick he has to shake at Anthropic right now to try to bring them around. And that if Anthropic still stuck to these red lines, Hegseth might tear up the government contract.
That would not be very good for Anthropic; he might move to some of these other models. But that almost wasn't even in people's real sense of a genuine realm of possibility. Am I crazy about that? How was Anthropic feeling about that at the time?

No, that's exactly what I was hearing from all my sources. They thought that it was, you know, kind of just like everyone was playing chicken. Everyone sticking to their guns, saying, oh, we're not moving at all, we're not moving at all. And that by the time the deadline rolled around at 5 p.m. on Friday, everything would change and there would be some type of compromise of some sort. Or at least, like you said, maybe he would tear up their government contract. That wouldn't put them out of business. I mean, it's a lot of money, but they have a lot of other business that makes them just as much money. So that's really what people thought was going to happen. Now, this was crazy, because I had also been hearing that xAI and OpenAI had already signed the terms, no problem. That's what some reports were saying. Now, when things got really intense yesterday afternoon and all the negotiations were coming to a head and we didn't know what was going to happen, OpenAI CEO Sam Altman apparently sent out a memo internally and said, hey, I'm working on a deal here. Stay tuned. We have the same red lines as Anthropic. That's what he said. Then, last night, he said that they did end up coming to a new deal with the Pentagon, saying, you know, we got the same terms, basically, that he's implying Anthropic was fighting for, but we also got to keep our contract. And: we're going to try to make sure that the DoD can give this same deal to all the other labs, and we think they should sign. Now, what he's kind of implying there, and the way a lot of people read it, is that he's trying to get the clout of saying, hey, we got the same terms just because we were playing nice. And now we're going to be the heroes and encourage the DoD to give this same deal to all the other labs. But if you read the fine print in his statement, it looks like he signed maybe a lesser deal than Anthropic was fighting for. Maybe on the domestic mass surveillance or the lethal autonomous weapons; things seem to be worded a little bit differently in his statement. So I'm working on that now. But it seems that, yeah, the deals were maybe a little bit different. We just don't know how it changed the wording and the definitions of these things, and what exactly they acceded to here that Anthropic was fighting against yesterday.

Yeah, I'm glad you said that, because I'll just say I have zero sources at OpenAI. I cannot call up anybody at OpenAI and get their read on how this is different. But my personal read of what Sam Altman has been saying about this is that it was sort of weaselier in its language, in talking about how they believe there is a need for human oversight of any potential lethal autonomous weapons systems. Well, the DoD already has a policy that there needs to be appropriate human oversight of any potential lethal autonomous weapons systems. But what that means in DoD policy is that humans need to supervise the training of the AI. They need to test the AI. There's more red tape for a potential lethal autonomous weapons system than there would be for a different weapons system.
And that ultimately, anything that weapons system does, some particular operator in the chain is legally accountable for. But that's not what Anthropic was making their problem. They were saying: don't use these systems at all with our current models. We cannot consent to our models being integrated into these sorts of things. And so I very much read Altman's statement as not in keeping with the same red line. But it's interesting that he kind of made it sound as though it were. I mean, these other labs are in a weird place, because Anthropic is getting a lot of plaudits out there right now for sticking to their principles on this. And they don't want to seem like scabs or something like that, right? I mean, what's going on in the mind of your, like, median OpenAI engineer right now as they're watching these things play out at the leadership level?

Totally. I think that they don't really know what to think right now. And again, of course, I haven't spoken with a lot of people, just some, but it seems like they're just kind of listening to what Sam is telling them. They're not really allowed to see the actual terms, so they kind of don't know what to think. They're trying to gather information. Anthropic employees, it seems, are in the same boat. They're like, well, okay, did OpenAI just get the same deal we were fighting for? How did they do that? And then they're thinking, okay, I guess not, especially because of the wording. Like you said, it seems from the reports I've been seeing that Sam may have agreed to human responsibility for lethal autonomous weapons, meaning that, yeah, that accountability could come after the fact, maybe, not before. Anthropic was pushing for something like what you said: not at all right now, maybe later, and kind of figuring out the terms as the technology progressed. The other funny thing is, I think Dario, the CEO of Anthropic, has been painted as kind of an anti-war hero in all this. But like you mentioned, Claude has been used in the DoD for a long time, and it was pretty much the most trusted technology of its kind. And the other funny thing is that in his statement a couple days ago, Dario mentioned that he is totally fine with lethal autonomous weapons; he just isn't fine with them right now. He even offered to speed up the R&D on that technology with the Pentagon and said, hey, I'll work with you to get these systems up to par, to where I feel comfortable saying you can use them. But apparently, according to him, they did not take him up on that. But yeah, it's been interesting how it's been painted as kind of a black-and-white thing when really, you know, he's a lot more okay with lethal autonomous weapons than the average person might think. Just not right now.

Yeah. I mean, it's almost hard to communicate to people who have not been following this chapter and verse just how strange all of the different lines are. Because like you say, this is, like, the AI company that was 99% of the way to where Hegseth wanted it to be, as a defense contractor, up until now. They were hand in glove with the DoD in ways that maybe other labs would have liked to be, but those labs were nowhere near integrated with the Pentagon at the same level as Anthropic had been.
And yet you get this. I mean, all the rhetoric coming out of the Defense Department right now is just 100%: you know, Dario Amodei has a God complex, he wants the United States to fail, and he wants China to succeed. It's a radical left company. And so you're left in this weird place where, like, Anthropic, one, is not a radical company. They're not really a left company at all. I mean, this stuff doesn't map on perfectly to sort of U.S. domestic political stuff, but they were happy to participate in, you know, national security stuff. And yet, you know, I guess they do really have these genuine red lines that they're willing to let the company suffer over rather than violate. So I guess it's not like people are wrong to credit them with that, even though it's not true that they're, you know, going around putting flowers in the barrels of the guns, right? I mean, that's not what this company is all about. Very, very weird, very, very strange situation. Let's talk just a little bit about what is coming next. I mean, Anthropic has said they're going to challenge this supply chain risk designation. I don't really understand, and maybe this is not your beat either, but do you know what that would look like? What are we expected to see in terms of legal challenges going forward?

Yeah, I don't know as much about it because it's a court battle, and it's going to be really interesting. I think it is a little bit unprecedented, because I have not ever heard of the supply chain risk designation being made public before. For a lot of people that I've spoken to, this is a little bit unprecedented. So it's going to be interesting to see how it plays out in the courts. It very well may not hold up, but we just don't know. And the thing is, in the meantime, what kind of business are they going to lose? So far I've seen that Anthropic put out a kind of Q&A, I think, for clients, being like, you know, here's what you should do, here are the next steps, here's what you can and can't do in terms of your work with us if you also work with the Pentagon. So I think they're kind of scrambling to try to explain this when people don't really get it yet. I also think it's going to be interesting to see if they will acquiesce to the OpenAI-led deal that OpenAI is trying to get the DoD to offer to all the other labs. I think that if they do stick to their same terms, they probably will not acquiesce to that. But if they're really feeling the pain of the business being lost, maybe they'll capitulate. We really don't know. But yeah, it's going to be a really interesting couple of days slash months slash however long this takes to play out, and if they go to court.

Can I just ask you a little bit as well about the sort of workforce angle? I know that there have been a number of these... I mean, first of all, it's just sort of a longstanding, low-level anxiety for a lot of people who work for these companies. Like, maybe we went into Anthropic or, you know, any one of these tech giants for sort of principled reasons, to make humanity better off with the newest technology and all these sorts of things. And maybe we're not necessarily super comfortable with doing a lot of defense contracting for the Pentagon to begin with. So what is the dynamic there? And again, it's hard for me to really ask, like, what's everybody thinking?
But in terms of the people that you talk to inside these companies, how are they balancing these different pressures as this defense contracting fight so much occupies the news, and even, like, the fate of these companies potentially in this moment?

Yeah, no, I'm so glad you asked. That's such a great question, because, I mean, that's what people are worried about at these companies. I've spoken with people at Microsoft, Amazon, AWS, Google, YouTube, OpenAI, everyone. And I know that some of those places don't have the same type of deal that Anthropic does. Some do, but all of them are doing work with the military or with federal agencies in one form or another. And what I'm hearing from employees of these companies, especially engineers, is that as the years go by, they're just less and less able to square the work they're doing with some of their values, especially because a lot of the time the companies they work for are kind of changing the narrative about how exactly the technology is being used. And, you know, they feel like they're sometimes not getting the full story. So it's tough, because a lot of these people signed on to the tech industry back in the day to make people's lives better. They really thought that they were working at a company, for example, that had a slogan like "don't be evil," or that was really, you know, beloved. We remember, 10 years ago, if you worked at Google or Amazon, people were like, wow, that's so cool. They wanted to hear all about your job. Now there's a much different reaction a lot of the time. So I think it's hard for a lot of these people to square the work they're doing every day with the general fatigue and burnout that comes from not knowing if you're actively making the world a worse place every day. And you don't really have all the information to even be able to make that decision. And so that's why I think we see a lot of people leaving the tech industry, especially the AI industry, and going off to become, like, goat farmers or poets. We're seeing this left and right; those are real examples. It's interesting, because you work at these places and then every day you're not only doing your job, but you're also kind of having this existential crisis. And, you know, I just don't think it's something that a lot of people can do long-term unless they get the answers they're looking for.

Yeah, yeah. I mean, it's such an insane... I keep saying this. There's a reason I've gotten obsessed with this story, even though it's your beat and not mine. I mean, the one angle of it that we haven't even hardly talked about, and we've talked about like 16 different crazy angles so far, is to zoom back to sort of the 10,000-foot view: okay, we're building these insane AI models. We don't really understand how they work. The people who are building them don't even fully understand how they work. Nobody knows what the ceiling of their capabilities is or how disruptive they're going to be. Obviously, the technology is insanely powerful and very cool, and there's a lot that can happen. We can sort of hazily imagine a future where a lot of things are really different because of this, but we don't really know what. And probably, hopefully, at some point there are going to be public policy questions to answer around how the democracies of the world regulate these things, right?
And right now, not only is there basically zero actual legislating happening, all of the actual agency of government is, like, to pour gasoline on the fears rather than anything else, right? It's the government sort of stepping in to try to knock down state-level curbs on some of these technologies. They're saying, no, no, no, we're going to maybe deal with this at some point at the federal level, but you sure can't do it down there at the states. And then it's stuff like this. It's stuff like the Pentagon basically saying: look, whatever else is going to happen with AI, one thing's for sure, you're not going to stop us from using it to make, you know, self-target-selecting death robots and things like that. Like, most people's worst fears, or the worst thing they can imagine about some of these technologies, these are the ends that the government is explicitly pursuing. They're saying, whatever else is going to happen, we're absolutely not going to let you stop us from having this. I mean, where are we? What's going on? I don't know. That's just a rant of mine. I don't know if you have anything to say about that. I just can't believe this stuff.

That's exactly right. I mean, I started covering AI six years ago, and I remember this was like a far-off fear of everyone's. It was like, whoa, what if this happens one day? No one could have ever expected it would come this quickly or be this egregious, like, out in the open. The only things that Anthropic was saying no to were: please just let a human have some oversight of these autonomous killing systems, and please let us not do domestic mass surveillance on actual Americans. And they're like, no, that's a deal-breaker for us, we can't sign with you. So it's just interesting that this is all playing out in the public, too. You know, I mean, the piece I wrote the other day was called something like "We don't have to have unsupervised killer robots," because that's how a lot of these engineers and a lot of these companies are feeling. They're like, we don't have to do this. And also, it's interesting that, legally, the AI companies in this situation that were all negotiating with the Pentagon at the same time were not allowed to come together and, like, strategize or make a plan or anything. But since this was all playing out in the public, I do think it would have been pretty easy to just adopt the same red lines as each other independently. I don't know. I mean, yeah, it's been really interesting, not only the thing that's happening and the tiny red lines that are being fought over, but also the fact that it's all happening in public via, like, X posts where everyone's insulting each other.

Yeah, yeah. And to that point about the fact that we could have maybe seen a situation of other companies adopting these same red lines: I mean, plainly, there is a certain amount of public pressure to do that, or Sam Altman would not have kind of fudged what he was saying and made it sound as though they were honoring those same red lines, even though, as we discussed a minute ago, it really doesn't seem like they are. And meanwhile, you know, xAI, Elon Musk's company, you know, the Grok bot, they just signed a big contract to potentially do classified work for the Pentagon as this was all going down, just a couple days ago. Now OpenAI appears to have done the same.
So, I mean, I guess I'm happy on some level that Grok is not going to be the only robot operating in there. I don't know, this is not my field exactly. The right-wing lobotomized AI making these decisions. But I mean, it's plainly a road not taken, is I think the point you're making. Amid this public outcry, we could have seen a certain amount of solidarity among these AI companies, to say, no, we're going to honor these red lines too, we're going to throw Anthropic a little bit of support in this. We haven't seen that really at all. So it's pretty grim. I can let you go. We can probably leave it there, unless you have anything to add on any of this stuff before we split?

Yeah, I think it's just a dystopian situation, both what's happening and the way it's playing out publicly. And I also think, you know, it's just going to change every day. So I'm glad that people are talking about this. I'm glad that the public knows more about this, because a lot of these conversations usually take place behind closed doors. And now I think it's important for the public to see the terms that are being fought over here and how it affects them.

And we are glad that you came on to help us get our brains around all this stuff. Hayden Field, senior AI reporter with The Verge. Thanks for being with us today. And thank you all out there as well who are watching on YouTube, on Substack. Thanks for subscribing. We hope you'll continue to follow us at The Bulwark on all our channels. Get my newsletter. Go follow Hayden. Hayden, where can people find you, I should ask?

Sure, at Hayden Field on most social media, or just Hayden Field on The Verge. All my bylines are there.

Perfect. All right, thanks for watching, and we'll see you all next time.