Hard Fork

At the Pentagon, OpenAI is In and Anthropic Is Out

33 min
Mar 1, 2026
Summary

The episode covers the dramatic 48-hour conflict between Anthropic and the Pentagon over AI military contracts, where Anthropic refused to compromise on red lines against mass domestic surveillance and autonomous weapons. OpenAI subsequently swooped in to secure a Pentagon deal claiming the same safeguards, raising questions about whether the companies actually agreed to different terms or if this represents political favoritism.

Insights
  • The Pentagon's threat to designate Anthropic a supply chain risk represents unprecedented government punishment of a major American tech company for ideological differences
  • OpenAI's ability to secure a Pentagon deal with seemingly identical red lines to Anthropic's suggests either political favoritism or substantive contractual differences hidden in legal language
  • The conflict highlights how lack of AI regulation enables 'all lawful use' standards that could permit mass surveillance through legal data broker purchases
  • Employee activism across AI companies shows internal resistance to military applications, but effectiveness depends on scrutinizing actual contract terms
  • This represents the feared moment when AI becomes powerful enough for governments to demand control over private company technology on national security grounds
Trends
  • Government weaponization of supply chain risk designations against domestic tech companies
  • AI companies using political relationships and charm to secure favorable government contracts
  • Employee activism spreading across AI companies to resist military applications
  • Consumer switching between AI services based on ethical stances of companies
  • Regulatory capture through government contract negotiations rather than formal rulemaking
  • Legal domestic surveillance expanding through AI analysis of data broker information
  • Tech executives making large political donations to avoid government retaliation
  • AI safety debates shifting from theoretical to immediate national security concerns
Companies
Anthropic
Refused Pentagon demands, faced supply chain risk designation for maintaining AI safety red lines
OpenAI
Secured Pentagon deal claiming same safeguards Anthropic refused, raising questions about terms
Pentagon
Demanded AI companies accept 'all lawful use' standard, threatened Anthropic with punishment
New York Times
Host employer, currently suing OpenAI and Microsoft over alleged copyright violations
Microsoft
Being sued by New York Times alongside OpenAI over alleged copyright violations
Perplexity
Also being sued by New York Times over alleged copyright violations
Google DeepMind
Employees signed solidarity letter supporting Anthropic's stance on military AI use
Kaspersky Lab
Historical example of supply chain risk designation typically used for foreign companies
People
Dario Amodei
Anthropic CEO who refused Pentagon demands, citing conscience in rejecting military AI terms
Sam Altman
OpenAI CEO who secured Pentagon deal, previously criticized for telling people what they want to hear
Pete Hegseth
Defense Secretary who posted about designating Anthropic as supply chain risk on social media
Donald Trump
Posted Truth Social statement banning federal agencies from using Anthropic technology
Emil Michael
Pentagon undersecretary negotiating AI deals, reportedly has a bad personal relationship with Amodei
Dean Ball
Former Trump administration member who called Pentagon actions 'attempted corporate murder'
Greg Brockman
OpenAI executive who donated $25 million to Trump's political action committee
Tim Cook
Apple CEO mentioned as example of tech leaders aligning with Trump administration
Katy Perry
Pop star who publicly switched from ChatGPT to Claude Pro in apparent protest
Quotes
"the United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars"
Donald Trump
"these threats do not change our position. We cannot in good conscience accede to their request"
Dario Amodei
"This is the same company that told us it was going to build safeguards to make sure that Sora couldn't be used to make images of Bryan Cranston"
Casey Newton
"this is effectively a company realizing that if it wants to do business with the US Government, it has to essentially abide by the terms that the US Government has set"
Kevin Roose
Full Transcript
2 Speakers
Speaker A

You've made it on time for the McDonald's breakfast menu. You think to yourself, finally, I could start my day with my perfect breakfast. But what if breakfast could be even more perfect with the hot honey sausage egg biscuit? It finally is. This won't last forever, so go to McDonald's and get it while you can. Ba da ba ba ba. Casey, where are you? That beautiful background does not look like your house.

0:00

Speaker B

I'm in a ski chalet, in keeping with the Hard Fork tradition of recording bonus episodes in the strangest places possible. But here's the good news, Kev, because while I was invited on a ski trip, I've never in my life had any intention to ski. And so my plans for this morning were either to talk about AI with my fiance or talk about AI with you. And we flipped the coin, and it's you. How are you doing this morning?

0:43

Speaker A

Wow, I feel so honored. Well, we have a lot to talk about today, because it has been a very crazy 48-hour period in the AI industry. This dispute between the Pentagon and Anthropic sort of came out of nowhere, and now, at the 11th hour, OpenAI is involved. It has been truly an insane day and a half in my life. How has it been for you?

1:05

Speaker B

Well, let me put it this way, listeners, Kev. Imagine you get engaged and then one week later, your fiance is declared a supply chain risk. So, yeah, it's been a really, really crazy few hours over here as well.

1:33

Speaker A

And just because we are going to talk about anthropic and OpenAI and all of this today, we should make our AI disclosures. Mine is that I work for the New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations.

1:45

Speaker B

Yes. And if you missed the other big breaking anthropic story from over the past week, the man that I am now engaged to works there.

1:58

Speaker A

Well, where should we start, Casey?

2:04

Speaker B

Well, look, I think if you're tuning in, maybe you've heard the biggest headlines, but I think it's worth hitting you with maybe just a few key bullet points. One is that, in the story that we've been covering over the past couple of episodes, it has come to the point of crisis, where Anthropic said it had two red lines that it would not cross. The Pentagon said that it was going to move to declare the company a supply chain risk. And then somehow, within 24 hours of that happening, Sam Altman and OpenAI swooped in and signed a deal that they say will observe those safeguards. And so it was just a truly chaotic 24 hours, and we should dig into it.

2:07

Speaker A

Yes. And none of this has been happening through, like, normal diplomatic channels. Basically, as far as I can tell, the entirety of this conflict has been contained in a handful of posts on X, a handful of blog posts, and some stuff that has been leaking out from either side. So I have been making calls for the last two days to the people who are involved in this situation, trying to get some information. And I've gotten a little bit, and I'll happily share that with you. But I would say confusion reigns. Like, even the people who are directly involved in this situation are confused about the details here. And so I think we should also just say up front that there is still a lot that is unknown about what's going on right now.

2:46

Speaker B

Absolutely. Maybe to start, Kevin, we could go back to a part of the story that I think is pretty well known, which is just sort of what happened between Anthropic and the Pentagon, particularly in those final hours where the Pentagon finally said, hey, this isn't going to work. We're not going to give you what you want. And time ran out and they did not come to an agreement.

3:32

Speaker A

Yeah, this escalation started on Thursday, February 26, when basically there was a day left until this deadline that the Pentagon had given Anthropic. And Dario Amodei, the CEO of Anthropic, put out a statement on Anthropic's website, basically saying: we are not going to compromise, no matter what, on these two exceptions that we want, against mass domestic surveillance and fully autonomous weapons. He explained why they were not going to compromise on those. And then he said, in the line that a lot of people have been quoting, quote, these threats do not change our position. We cannot in good conscience accede to their request. Basically: we have been trying to work out a deal while preserving these exceptions that are very important to us, but we have not been able to do so.

3:52

Speaker B

And probably worth saying, Kevin, that I think a reason that quote stood out so much was that I cannot remember any tech leader invoking conscience as a reason not to do something since Trump has been reelected. So it felt like a shift in tone for the whole discussion around the tech and power and just something we have not seen from Silicon Valley in a while.

4:43

Speaker A

Yes. And what I understand from talking with folks close to the situation is that even after this post from Dario Amodei, there were discussions happening between the Pentagon and people from Anthropic. They were trying to work out the contours of a deal. There was some sort of willingness to at least change some of the language around these exceptions. But while these discussions are happening in the back channels between the officials at the Pentagon and the people at Anthropic, President Trump posts a statement on Truth Social late Friday afternoon, just before this deadline that the Pentagon had given Anthropic. He said that, quote, the United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars. He also said that he was directing every federal agency in the United States government to immediately cease all use of Anthropic technology, with a six-month phase-out period, basically, for federal agencies to switch from using Claude to using other models. One thing the President did not mention is this idea of declaring Anthropic a supply chain risk. Right. This is something that we talked about on the last show. Basically, this is a much stricter designation, something that we don't think has ever been applied to a major American company before. It's usually used for Chinese chip suppliers or things like Kaspersky Lab. But Trump did not say that he was going to designate the company a risk to the supply chain. And so I think some folks at Anthropic and elsewhere thought, okay, this is like a deal that we can live with. We are going to, you know, lose our government contracts, but we're not going to be declared essentially an enemy of the state.

5:03

Speaker B

And more than that, Kevin, he also did not invoke the Defense Production Act. Right. Which, like, to me was the true worst case scenario here, where the United States government would effectively have nationalized or partly nationalized Anthropic and forced it to make a version of Claude that did its bidding. So when I saw the Truth Social post, my initial thought was like, okay, maybe they're just going to walk away from this whole debacle and try to save some face.

6:56

Speaker A

Yes, it did look like that. And then a little over an hour after Trump's Truth Social post, Pete Hegseth, the Defense Secretary, posted his own take on the matter on X, in which he said that he was directing the department to designate Anthropic a supply chain risk. He said, quote, effective immediately, no contractor, supplier or partner that does business with the United States military may conduct any commercial activity with Anthropic. So this was a pretty severe escalation. And the people who thought, okay, maybe Anthropic is going to, you know, get away here with not being declared a supply chain risk, thought, okay, maybe they're not after all.

7:21

Speaker B

Yeah. Now, at the moment of this recording, so far, the only evidence that we have that the Pentagon plans to declare Anthropic a supply chain risk is this social media post. Right. Like, my understanding is that Anthropic has not been informed of any new proceeding against the company. Anthropic says they would fight it in court. So while this may happen, and we should talk about what it would mean if it does, for the moment it also appears like it could just be a threat.

8:02

Speaker A

So meanwhile, while all of this is going on between Anthropic and the Pentagon, OpenAI has been working on its own deal with the Pentagon to use its models inside the government's classified networks. There has been some reporting on a leaked message that Sam Altman had sent to OpenAI employees on Thursday, basically indicating that they were standing in solidarity with Anthropic, which is very unusual, because these companies do not like each other and their leaders have a long history with each other. But basically, he was saying to OpenAI's employees: we are not going to sort of cave on these exceptions either. We are committed to not having our models used for mass domestic surveillance or fully autonomous weapons. And he actually said some sort of supportive things about Anthropic. But a day later, on Friday night, after this whole deal between Anthropic and the Pentagon had blown up in spectacular fashion, Sam Altman went on X and posted that OpenAI had reached an agreement with the Pentagon to, quote, deploy our models in their classified network, basically saying: we have confidence that our models will not be used for domestic mass surveillance and autonomous weapon systems, and the Pentagon agreed with those principles, and we put them into our deal. So those are the events of the past couple of days, and I think when I summarize them, it sounds insane, because what we effectively have are two companies, OpenAI and Anthropic, that claim to have identical red lines when it comes to the use of their products by the military: mass domestic surveillance and fully autonomous weapons. One of them, Anthropic, has been declared a supply chain risk, which is a very punitive, harsh measure that basically requires them to cut off all business with the US military and the federal government. The other, OpenAI, just announced a deal with the Pentagon to use its systems in classified networks, with the same two red lines that Anthropic had been fighting over.
There's some nuance there. There are some details that I'm sure we'll get into, but I think if you just sort of zoom out and look at the facts of the case, it is a truly insane series of events.

8:28

Speaker B

It is. And I think we should just talk now, Kevin, about this nuance that you bring up. You know, we said at the top of the show there is some uncertainty here. Kevin and I have not been allowed to review the contracts that Anthropic and OpenAI have with the military, although we would love to. We're hardfork@nytimes.com. But I think what we can tell you is that it appears that this conflict comes down to this all lawful use standard. Right. Keep in mind, the Pentagon signed a deal with Anthropic that had in place the red lines that it is now freaking out about. It went back to its AI labs and it said, hey, we want to change this. We want you to say we can use this for anything that is legal. On paper, that sounds great. Here's the problem. We don't meaningfully regulate the use of AI in this country. And as we've talked about on the show in the past, we do not have a national privacy law. These are among the reasons that Anthropic has become very concerned about what powerful AI systems might do if they were given to the military in a country where there are not actually laws around how this powerful new technology can be used. And I think the domestic surveillance one is a really interesting one, Kevin. You know, the Pentagon has said, well, you know, we're not going to domestically surveil people. That's illegal.

10:49

Speaker A

Hmm.

12:10

Speaker B

Well, at the same time, Kev, there are other federal agencies right now that have mounted what amounts to a social media dragnet, looking through the social media posts of people trying to immigrate to this country, trying to find posts that are critical of the administration, and then using that as a pretext not to allow them to immigrate. Right. Now, maybe the Pentagon will say, well, you know, that's not surveillance. You know, that's just part of our immigration process. But I think folks at Anthropic would say, well, no, no, no. If we give you tools powerful enough to go through every social media post in real time, that might be an area that we are uncomfortable getting into. Right. And so this is where I think we start to understand what is different between Anthropic and OpenAI here. Right. Anthropic has said, we're serious about this stuff. And I'm sure it's possible to write into a contract a little bit of legalese that gives them enough cover to go back to their employees and say, hey, don't worry, we're not going to do anything untoward, while at the same time doing a little wink, wink, nudge, nudge to the Pentagon. And the Pentagon could use these tools to do exactly what they're doing with the social media accounts of would-be immigrants. Right. And so to me, that is what I see happening here, and it seems like a significant part of the conflict. Kevin, I know you've been on the phone like all weekend. What do you make of that analysis?

12:11

Speaker A

Yeah, I think that's largely my understanding. When he announced the agreement that they had made with the Pentagon, Sam Altman did put out a statement that left some room for interpretation, I think, on what OpenAI had actually agreed to. So I will be very curious to see the actual language of these contracts if that ever makes it out into public. Again, we are hardfork@nytimes.com. But what I can tell you from talking with folks on all sides of this over the past couple of days is that OpenAI is framing this as essentially an identical set of constraints. Right. They don't believe that they have agreed to anything that would require them to use their models for mass domestic surveillance or for autonomous weapons. But in his statement, Altman said that the Pentagon, quote, agrees with these principles, reflects them in law and policy, and we put them into our agreement. So basically, if you kind of parse that very carefully, he is just saying sort of what the Pentagon has been saying, which is that they're not going to do mass domestic surveillance because it is illegal. And what Anthropic has been insisting on this whole time is that actually there are forms of mass domestic surveillance that are not illegal as the law is currently written. And so: we want to prohibit the use of our systems for that stuff too.

13:25

Speaker B

More than that, Amodei has also said that during their negotiations, Anthropic was offered similar concessions, but the Pentagon accompanied those proposed concessions with, quote, legalese that would have made them ineffective, which is entirely consistent with what the undersecretaries of this agency are saying on X, which is that they were not going to let any private company dictate how they wage war. Right. So I just think that's very important to say: Anthropic is telling us, hey, we were offered a very similar deal, and it did not protect you as an American in the way that OpenAI is now telling you that you are being protected.

14:49

Speaker A

Yeah, I mean, I think when you boil it all down, there are basically two options here. One is that the administration and the Pentagon just have a political vendetta against Anthropic. There's a bunch of language in the statements coming out of Pentagon officials' X accounts about how these are all, you know, a bunch of woke liberals who are unpatriotic. And I think there is some sort of sense in which this is just about style and tone and personality. Emil Michael, one of the undersecretaries at the Pentagon who's been negotiating this deal, just clearly does not like Dario Amodei at all. And I've heard from multiple people, actually, that there's particularly bad blood between those two. And so that's option one: this is purely a political vendetta. OpenAI has been chosen for this contract because the administration likes them more, and there's sort of no substantive difference between what these two companies have agreed to do. The other option is that OpenAI has actually agreed to things that Anthropic didn't, that there are substantive differences between these agreements, and that OpenAI is sort of using this legalese, as you put it, to frame this as a victory when really they have conceded to the thing that Anthropic objected to. I'm not sure yet which of those two is more true, but I don't think anyone in this situation, except maybe the Secretary of Defense, knows.

15:25

Speaker B

Yeah. You know, I mean, there are two really important things about what you just said, Kevin. One is the idea that the federal government is trying to commit what Dean Ball, who was a member of the Trump administration and helped to write its current AI policy, called an attempted corporate murder, just based on ideology. And man, if you lived through the bias and censorship debates on social media of the early 2020s, it's really crazy to hear elected officials saying: because we have a different ideology than you, we are going to take your contract away, designate you a supply chain risk, and try to prevent other companies from working with you. Right. So that, honestly, Kevin, that is how the Chinese government regulates its tech companies. Either you get on board with the party or they crush you. Right. So that, I think, is really chilling. And again, not just to me, to former members of the Trump administration. Okay. That feels really important to say.

16:49

Speaker A

You know, I've been looking back through sort of historical examples of the US Government taking punitive actions against American companies. And I think it's safe to say that this fight between Anthropic and the Pentagon is, by a fairly wide margin, the most punitive action that the US Government has taken against a major American company at least this century, and possibly ever. We have seen this administration bully and strong-arm and jawbone companies in the tech sector before. We have even seen them try to block certain companies from doing business with the government. But we have not seen them try to kill a company for what, as far as I can tell, are contractual disputes and ideological differences. It's really crazy.

17:50

Speaker B

But of course, this is why almost all of Silicon Valley has lurched to the right over the past two years. It's why Tim Cook is giving golden trophies to President Trump. It's why Greg Brockman at OpenAI is donating $25 million to Trump's political action committee. Right. There is this sense that you have to be in line with these people or they're going to try and crush you. Until now, though, we hadn't actually seen the Trump administration try to crush a company. But now we have. And I just sort of can't imagine what kind of chilling effect that is going to have across Silicon Valley.

18:35

Speaker A

Casey, I want to get your take on the employee activism that we've seen over the last couple of days. There was an open letter, petition, whatever you want to call it, going around that was signed by some employees of OpenAI, Google DeepMind, and other leading AI companies, basically saying, like, we stand with Anthropic. We also do not want to make tools for mass domestic surveillance and autonomous killing. And sort of expressing solidarity with the stance that Dario Amodei has taken. Do you think that's meaningful? Do you think that's part of what is fueling some of the decisions that these companies are making? Because that has been true in the past. Employees at these companies have had a lot of leverage over things like military contracts.

19:07

Speaker B

I do think it is very meaningful. There are a lot of very well-meaning people at OpenAI, at Google DeepMind, as well as at Anthropic, who truly do not want to see the most dystopian possible AI scenarios come to pass. And so it matters that they are going to their leadership and saying, we are not going to participate in this. I hope that those employees get a hold of the contracts that their employers are signing and really scrutinize them. I hope that, if they find out that their technology actually is being used for something that looks pretty domestic-surveillance-like, they would blow the whistle. Right. We really are going to need to rely on these employees in the coming years as the technology improves and as the Pentagon, you know, potentially does the thing that it is telling us today that it is not going to do.

19:48

Speaker A

Yeah. I think one other important thing to note here is that Sam Altman and OpenAI are trying to very carefully explain this to their employees in a way that does not suggest that they are just capitulating to the demands of the Pentagon. OpenAI is saying to its own employees that they believe they actually got a stronger deal than the one Anthropic had, in terms of protecting against mass domestic surveillance and the use of their systems for autonomous weapons. Several people pointed me to this line in Sam Altman's post about how they were going to create what he called a safety stack, basically a set of protections built into the model itself that the Pentagon is going to be using in classified situations, that would essentially prevent the use of ChatGPT, presumably, for the things that they're worried about.

20:39

Speaker B

Yeah. By the way, this is the same company that told us it was going to build safeguards to make sure that Sora couldn't be used to make images of Bryan Cranston, Kevin. So I'm just going to suggest that sometimes, when OpenAI tells you it's going to build guardrails, they don't actually show up on time.

21:34

Speaker A

Yeah. I have also talked to people who say that this is basically security theater, that, you know, if you dump a bunch of data that you've collected on Americans or purchased from a data broker into an AI model, like, it is not going to be able to tell whether that information was legally gathered. It is not going to be able to tell where that information came from. And so this is not really a meaningful change.

21:48

Speaker B

Yeah. Let me underscore that point, Kevin, because it is so important. It is legal for data broker companies to buy up data on millions of Americans and it is also legal for federal agencies to buy that data. Now, that does not constitute domestic surveillance to a legal standard, but it is functionally equivalent. Right, so this is the whole ball game here. Right. The Pentagon already has all of the tools it needs to do what is practically domestic surveillance. It's just not called that because it's legal to buy data about Americans from data brokers. So I understand we are so deep in the weeds here, but the reason we wanted to do this episode today is to try to persuade you this is very high stakes stuff. It is being done in the shadows and the nuances really, really matter.

22:12

Speaker A

Yeah, I think the details and nuances are where the whole story lies right now. And it's hugely high stakes. And so I think on the surface this might look like some kind of boring contractual debate between AI companies, but this is really about a fundamental question of who controls technology. Is it the people who build the technology, or is it the militaries and the governments of the countries where that technology is built? And I think that is sort of the high-level question under debate here. And it's one where the Pentagon and Anthropic did not see eye to eye.

23:00

Speaker B

I mean, this story, Kevin, is the whole reason that you and I have just never been on the side of "AI is all hype and it's fake and it's a bubble that's about to collapse." Right? We saw these systems improving in real time. We knew that very soon they would be in a position where they could do the sort of instant analysis of things like social media data, geolocation data, and other data that could just potentially create massive new systems of oppression. And we are now on the precipice of those systems being potentially rolled out under the guise of a policy that is called all lawful use, because there is no law to regulate them. So it really just could not be more serious. And I'm glad we're getting a chance to talk about it today. I want to bring up one more thing, though, which is the limb that Sam Altman may have just crawled out on, right? As I'm reading through his statement, I'm trying to square it with what I know. You know, you were talking earlier in the show, it's like, okay, so you're telling me that the same day the Pentagon tries to kick one company out over two things it says it will never do, it signs a deal with another company and makes an agreement that it will never do those two things. It's so hard to square that. Right? And yet you and I have both covered Sam for a long time. And we know that a criticism he has gotten from his former coworkers is that he tells people what they want to hear, right? This was at the root of him being fired in 2023: his coworkers saying, this guy is telling me what I want to hear. He's not being consistently candid. And he's just sort of leaving me in this state of perpetual confusion. And so now we fast forward to a moment that is so much higher stakes than that, right? Because we have to take Sam Altman's word that he has signed a deal that will not enable mass domestic surveillance of Americans in the short term, and maybe autonomous murder bots in the medium term, which is what? I don't know.
Three years, five years, who knows? So the reason that I note that, though, Kevin, is that in every case it has always come out in the end what the truth was, right? And I hope the truth here is that Sam got his red lines. I hope the truth is that somehow he arm wrestled Pete Hegseth down and Pete Hegseth said, okay, you got me, Altman. We're not going to do any domestic surveillance for real, and we're not going to do any autonomous murder bots for real. My fear is, though, that either through naivete or deception, he has misled us and we are going to find out sooner or later that in fact those two use cases are not only legal, but they're happening.

23:35

Speaker A

Right. I think that's still a big TBD. And I would also like to know. Sam, if you're listening, please come on and talk to us about this, because I think there are still a lot of unknowns here. But I would also bring up another point, which is that one of the big criticisms of Anthropic over the years has been about this idea of regulatory capture. Right? There are many people, including some very high up in the Trump administration, who believe that all of Anthropic's sort of warnings and statements about the risks of powerful AI systems, the speed with which they're accelerating, the things that they could potentially do, have been kind of a pretext, right? That they're not actually sincere about this, that they're just trying to get a bunch of onerous regulation passed so that they can sort of enshrine their status as an incumbent and prevent smaller startups and others from competing with them. So we've heard that term a lot, regulatory capture. This, to me, is an example of regulatory capture, right? This is a company, OpenAI, coming into a very hot dispute between their biggest rival and the United States government and effectively using what seem to be vibes, charm, possibly some, you know, better political instincts to get a deal done through their relationships with the government. So call it what you want. Call it savvy politicking or negotiating, call it hair-splitting over the details of this contract. But this is effectively a company realizing that if it wants to do business with the US Government, it has to essentially abide by the terms that the US Government has set. That is as textbook an example of regulatory capture as you're ever going to see.

26:12

Speaker B

Yeah, so. So where we go from here, Kev?

28:05

Speaker A

So I think there are a bunch of unresolved questions that I'm going to be looking at over the next few weeks and months. One of them is: what actually happens to this supply chain risk designation? This is something the Pentagon has said it's going to do to Anthropic, but we have not actually seen any formal language about that other than Pete Hegseth's posts. And we have also not fully understood what it would actually mean for Anthropic, or what kinds of relationships it would be forced to sever with various other government contractors. So one bucket of unknowns is all the legal and contractual details of this supply chain risk designation for Anthropic. We also still have a lot to learn about what the other AI companies are being asked to agree to that Anthropic wouldn't, and what companies like OpenAI may have done to get their deal through while Anthropic's was being rejected. And then I think there's a third bucket, which is: what does this do to the popularity of these companies with consumers? I think we are starting to see very early signs that some consumers who are very upset about the Pentagon's demands here are switching from ChatGPT to Claude. One of those users appears to have been Katy Perry, the pop star, who posted a screenshot on X of her newly purchased Claude Pro plan, circled with a little red heart.

28:08

Speaker B

So Katy Perry really said, the Anthropic employees, those are my California girls, and they're undeniable.

29:37

Speaker A

I should also underscore that this is exactly the kind of moral conflict that Dario Amodei has been preparing for his entire life. One of Dario's favorite books, a book that he used to buy for all Anthropic employees, is The Making of the Atomic Bomb, a very long history of the Manhattan Project during World War II. And the reason he wanted Anthropic employees to read this book is that he believed that eventually what they were building, the AI models, the chatbots, would become as important to national security, to the government, to the future of the global order as nuclear weapons. And he wanted to instill in them the idea that they were doing something with profound moral and ethical consequences. He understood that it's not just building technology; if you build something powerful enough, the government is going to want to use it, and they're going to want to use it on their terms. And so I think this is exactly the shape of conflict he was envisioning when he was telling people to read that book about the Manhattan Project.

29:46

Speaker B

I think you're exactly right. It has been so amazing, honestly, to watch how many predictions made by the rationalists and the LessWrong community in the early 2010s have started to come true. These sorts of conflicts between the government and the big AI labs were not predicted with any degree of specificity, but there was still a thought that we were going to get here, and now it seems like that moment has arrived. I'm sure it must feel extremely surreal to Dario, as well as to many other people who have been working on this for a long time. I just hope we can navigate out of it safely. Yeah, well, truly unprecedented 48 hours or so. I'm sure a lot more is going to unfold in the days ahead, and I'm sure we'll be returning to the subject here on Hard Fork, but perhaps by then I'll be out of this ski chalet.

30:57

Speaker A

Yeah, I hope you make it down safely. And I think you should go skiing. I know you're not a fan, but I think you should do it.

31:50

Speaker B

If you knew where my center of gravity was, you would know that Kevin Roose just tried to kill me Live on Air.

31:58

Speaker A

Taxes can feel confusing when you're trying

32:25

Speaker B

to manage everything by yourself, but with Intuit TurboTax, help is closer than you think.

32:28

Speaker A

Stop by one of their new state-of-the-art store locations. There you can ask a tax expert to sync your accounts via the TurboTax

32:33

Speaker B

app and automatically import your documents.

32:39

Speaker A

With TurboTax Full Service, your expert looks for every possible deduction while keeping you updated every step of the way. In the app, eliminate the guesswork and file your taxes with a TurboTax expert today.

32:41

Speaker B

Visit TurboTax.com. What would you like the power to do? Keep getting up. There's more fight in you. So play on, Diego.

32:52

Speaker A

Bank of America, proud to be

33:03

Speaker B

the Official Bank of U.S. Soccer and FIFA World Cup 2026. Bank of America, N.A., Member FDIC. At Strayer University, we help students like you go from "Is it possible?" to anything. We're offering access to up to 10 no-cost gen ed courses so you can reach your goals affordably and fast. Visit strayer.edu to learn more. No-cost gen ed is provided by Strayer University affiliate Sophia. Eligibility rules apply. Connect with us for details. Strayer University is certified to operate in Virginia by SCHEV and has many campuses, including at 2121 15th Street North in Arlington, Virginia. Hard Fork is produced by Whitney Jones and Rachel Cohn. We're edited by Veer and Pavic. Today's show was engineered by Katie McMurran. Our executive producer is Jen Poyant. Original music by Alyssa Moxley and Dan Powell. Video production by Sawyer Roquet, Pat Gunther, Jake Nichol and Chris Shott. You can watch this whole episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam and Dalia Haddad. You can email us at hardfork@nytimes.com with your AI red lines.