The United States government has officially designated a leading American artificial intelligence company a supply chain risk to national security, a label usually reserved for foreign adversaries. We are talking about the exact same classification used for Huawei and ZTE. The Department of War has effectively placed a do-not-touch order on Anthropic. That sounds like a lot more than just a canceled contract. Oh, it is. It's an absolute blacklist. It prohibits military contractors, suppliers, basically any partners from conducting commercial activity with the company. If the government can blacklist a domestic company for refusing to remove safety guardrails from its software, does the private sector actually retain any control over how its technology is used in warfare? That is the multi-billion-dollar question here. And this whole thing didn't start in some corporate boardroom. It started with a specific military operation: the raid in Caracas to capture Nicolas Maduro. Right. Operation Absolute Resolve. Exactly. According to reports, the U.S. military utilized Anthropic's model, Claude, during the planning and execution of this operation. Now, Anthropic doesn't have a direct contract to sit in the Situation Room. The integration happened through Palantir. We should probably clarify Palantir's role here, because they aren't just, you know, a simple database company. No, no. Think of Palantir as the operating system for modern warfare. They have this platform called the Maven Smart System. It ingests satellite imagery, drone feeds, intercepted communications, basically all the messy data of war, and it synthesizes it into a coherent picture for the commanders on the ground. Right. And they were running Claude inside that ecosystem to help process all that information. The mission itself was technically a success. I mean, the administration got what they wanted. Yeah. But the friction actually started afterward. 
Yeah, it started because Anthropic asked a question. Following the operation, company leadership allegedly inquired whether their technology had been utilized in the raid, and that inquiry triggered an immediate alarm at the Pentagon. Why, though? I mean, if I sell a product, asking how it was used seems pretty standard. Not when you're dealing with classified operations. But Anthropic felt they had to ask because of constitutional AI. Let's use plain English for that. What exactly is constitutional AI? Sure. So most AI models are trained on massive amounts of Internet text and then just fine-tuned to be helpful. Anthropic takes a totally different approach. They give the model a constitution, a specific set of ethical principles it has to follow. Like rules against generating harm, things like that. Exactly. Rules like do not help kill people, do not violate privacy, do not create hate speech. The model essentially checks its own output against this constitution before it even responds. So to know if the model actually followed its own constitution, the company needs to know what it was doing. Yes. They need a degree of transparency to ensure their ethical guidelines aren't being violated. And the Pentagon viewed this request for transparency as a massive liability. Because it's an operational security issue for them. Right. They saw a private vendor in San Francisco claiming it had the right to audit a military operation. To the generals, that just looks like a vendor trying to exercise veto power over national security decisions. They see a tool they bought, and the toolmaker wants to verify whether it was used to hurt someone. And that friction led directly to the ultimatum. The Pentagon, led by Secretary Pete Hegseth, issued a directive to all AI vendors. The rule was very simple: AI vendors must allow their tools to be used for all lawful purposes. All lawful purposes. That phrase does a lot of heavy lifting. 
But Anthropic refused to agree to that broad language. They did. They insisted on maintaining two very specific red lines in their contract, regardless of whether the government called it lawful or not. What were the two lines? First, the AI cannot be used for mass domestic surveillance of American citizens. Okay. That seems straightforward. And the second? Second, it cannot be used for fully autonomous weapons. Wait, hold on. Let me just make sure I have this straight. What is the specific definition of an autonomous weapon in this context? We are talking about systems that select and engage targets without human intervention. In military terminology, lethal autonomous weapons systems. Meaning the software processes the sensor data, identifies a human being as a target, and authorizes the strike. Yes. All without a person ever pressing a button. So Anthropic basically said, we will not build the software that decides who dies. Correct. The government's counterargument is that lawful use already covers those concerns. They argue that if an action is legal under the Constitution, a private company has no right to block the government from using its procured tools to execute it. But wait, if the Pentagon says lawful use is the only standard they will accept, and they refuse to sign a ban on autonomous weapons, are they admitting they plan to use them? That is the real gray area. The Pentagon spokesperson stated they have no interest in mass surveillance or autonomous weapons without human oversight right now. They actually called those fears fake. However, they refused to sign a contract that explicitly forbade it. Exactly. Their position is entirely about operational flexibility. In their view, they cannot have a terms-of-service agreement overriding a commander's decision in the field five or ten years from now. And Anthropic is saying the technology just isn't safe enough for that. That is their core argument. 
They point out that current large language models hallucinate. They make up facts. They misinterpret context entirely. Which is a known issue across the industry. Yes. So Anthropic believes relying on them for autonomous killing is reckless, and mass surveillance violates fundamental rights. They view these strictly as safety issues. While the administration argues that lawful is the only standard that matters, and woke corporate policies cannot constrain the military. Exactly. OK, let's pause here to reset the pace. Usually when a company says no to the government, they just lose the contract. They don't get the money. Right. You walk away, you lose the revenue, maybe your stock dips a bit, but you just continue doing business with everyone else. But the administration didn't just cancel the contract in this case. They went much further. They absolutely did. President Trump ordered all federal agencies to immediately cease using Anthropic's technology. And then Secretary Hegseth followed up by designating the company a supply chain risk under 10 U.S. Code Section 3252. I want to focus on that specific statute, 10 U.S. Code Section 3252. What does that actually mean? This is a statute designed specifically to prevent espionage and sabotage by foreign enemies. It is the exact legal tool used to ban Huawei and ZTE because of fears they were funneling data straight to the Chinese Communist Party. So it implies the company is an active threat to the integrity of the nation's defense. Yes. And applying it to an American firm for a contract dispute is entirely unprecedented. It acts as a corporate death penalty in the federal sector. Because it's not just the Pentagon that has to stop using them now. Exactly. It effectively forces any company that wants to work with the military, companies like Boeing, Lockheed Martin, or Palantir, to strip Anthropic out of their own internal workflows. 
So if I'm an engineer at Lockheed Martin and I use Claude to write code or summarize technical documents in my daily work... You are now a liability. Lockheed Martin cannot risk its massive government contracts by harboring a designated supply chain risk in its software stack. They have to rip it completely out. There's a massive contradiction in the order itself, though. It claims Anthropic is a security risk, but simultaneously mandates they continue providing services for a six-month transition period. Which really highlights that this isn't about espionage. Think about it. If Huawei was actively spying on the Pentagon, you wouldn't say, OK, keep the routers plugged in for six more months while we find a replacement. Right. You would cut the line immediately. You would sever it that second. So that tells us they need the tech. They absolutely need the tech. They just don't want the rules that come with it. Keeping them on for six months admits that the government is highly dependent on these models. So this designation is just a power play. Completely. It threatens Anthropic's entire enterprise ecosystem and its projected IPO. It signals to the rest of Silicon Valley that noncompliance comes with an existential cost. And almost immediately, another company stepped in to take advantage. Yes. Hours after the ban on Anthropic was announced, OpenAI CEO Sam Altman announced a new agreement. The pivot was instant. OpenAI stepped right into the vacuum. They announced an agreement to deploy OpenAI models on the Department of War's classified networks. Here's the part that is really confusing to me. OpenAI claims they have the exact same red lines as Anthropic regarding surveillance and autonomous weapons. Ostensibly, yes. Their safety guidelines prohibit the exact same things. So if they have the same red lines, why did the Pentagon ban one and sign the other? Why is Anthropic a security risk and OpenAI a trusted partner? It comes down to the mechanism of enforcement. 
Anthropic wanted strict contractual prohibitions. They wanted a written veto in the deal that they could enforce legally. OpenAI adopted a layered approach instead. What does a layered approach actually mean in this context? It means they are relying on a cloud-only deployment structure and the government's own interpretation of the law. OpenAI agreed to the all-lawful-use standard. So they are effectively betting that the government won't define mass surveillance or autonomous weapons as lawful. Correct. Altman stated that the department agrees domestic surveillance is illegal, so they didn't need to fight over it. OpenAI integrated their safety stack into the deployment rather than demanding external oversight of specific missions. So Anthropic wanted the government to sign a paper saying, we promise not to do X. OpenAI said, we know X is illegal, so we don't need you to sign a paper. Essentially, yes. They gave the Pentagon the lawful-use language they demanded while assuring the public that their technical architecture prevents the bad stuff. It sounds like OpenAI gave the Pentagon an optical win. And a contractual one. The Pentagon gets to say we don't bow to terms of service, and OpenAI gets the massive contract. During this entire fallout, the administration publicly attacked Anthropic as a radical-left and woke company. The record really contradicts that characterization, though. Anthropic is funded by major corporate players like Amazon and Google. Their CEO, Dario Amodei, has publicly stated that AI is existentially important for national defense. He's not exactly a pacifist. Far from it. He has explicitly stated he supports helping the U.S. military defeat autocratic adversaries. He even admitted in interviews that autonomous weapons might eventually be necessary. Wait, really? So what is his actual objection? His objection is entirely that the current technology isn't safe enough yet. 
He argues that LLMs hallucinate and make unpredictable errors, and therefore shouldn't be making kill decisions today. That seems like a technical argument, not a political one. Exactly. It suggests the woke label is just political cover. The real issue here is the sovereign AI doctrine. Meaning what? The administration is establishing that the state, not private labs, is the final arbiter of how technology is deployed. It is a fundamental shift from a safety-first model to a deployment-first mandate. They are worried about falling behind. They are looking straight at China. The argument is that if American companies are permitted to constrain the military with safety guardrails, the United States will lose the advantage to an adversary that operates without any such constraints. They want the tech to be totally subservient to national security objectives. This supply chain risk designation, does it stop at the Pentagon? No. The General Services Administration has already terminated Anthropic's OneGov deal. That means agencies totally outside the military, like the Department of Energy or the EPA, might have to stop using Claude. It effectively chills the entire market for them. It forces every company to make a hard choice. If you want to do business with the U.S. government in any capacity, you cannot use software that carries the risk designation. It consolidates power strictly around the vendors who agree to the government's terms. To wrap this up: Anthropic held the line on their terms of service and was designated a national security risk. OpenAI aligned with the government's legal framework and became the primary partner for the military. The main takeaway here is that the Silicon Valley consensus, this era where companies felt they could dictate ethical terms to the government, is effectively dead. The government has demonstrated it is entirely willing to use the full weight of executive power to ensure AI companies are subservient to national security objectives. 
If you're not subscribed yet, take a second and hit follow on whatever app you're using. It helps us keep making this. We appreciate you being here.