Marketplace All-in-One

What's behind the Anthropic-Pentagon feud?

7 min
Feb 26, 2026
Summary

Anthropic faces Pentagon pressure to roll back AI safety guardrails, with Defense Secretary Pete Hegseth threatening to revoke a $200 million contract. The episode also covers NVIDIA's blockbuster earnings, Meta and YouTube's child safety trial, and privacy concerns around age verification systems.

Insights
  • Military national security priorities are directly conflicting with AI company safety principles, forcing a choice between government contracts and ethical guardrails
  • NVIDIA's exceptional growth may be hitting market saturation concerns, as investors show muted enthusiasm despite record-breaking results
  • Age verification systems designed to protect children online create new cybersecurity vulnerabilities by concentrating sensitive government ID data in corporate databases
  • Privacy-preserving alternatives to age verification exist but require significant development time and investment before market deployment
  • Defense Department is using multiple pressure tactics (contract threats, supply chain designation, production act) to force AI companies into compliance
Trends
  • Government-corporate tension over AI safety standards and military applications
  • Shift from binding safety commitments to non-binding goals in AI governance
  • AI chip market maturation concerns despite record growth metrics
  • Regulatory focus on child safety online driving age verification mandates globally
  • Privacy-tech innovation gap between regulatory demands and secure implementation
  • Defense Department weaponization of procurement and supply chain policy
  • Third-party digital infrastructure models emerging for privacy-preserving compliance
  • Cybercriminal targeting of age verification vendors as high-value data sources
Companies
Anthropic
AI company loosening safety guardrails under Pentagon pressure to roll back limitations on Claude AI model military use
NVIDIA
AI microchip maker posted blockbuster quarterly profits with 73% revenue growth but stock shows muted investor response
Meta
Facing landmark trial alleging its platform damages children's mental health through harmful content exposure
YouTube
Co-defendant in child safety litigation with Meta over platform harms to minors' mental health
Discord
Disclosed breach of age verification vendor exposing approximately 70,000 users' government ID cards to cybercriminals
People
Pete Hegseth
Defense Secretary who reportedly gave Anthropic an ultimatum: roll back AI safety rules or lose a $200 million Pentagon contract
David Brancaccio
Host of Marketplace All-in-One podcast covering the Anthropic-Pentagon conflict and related business stories
Nancy Marshall-Genzer
Marketplace reporter providing details on Pentagon pressure and Defense Department's concerns about AI constraints
Kion Vestensen
Senior researcher at Freedom House discussing privacy risks of age verification systems and privacy-preserving alternatives
Quotes
"Anthropic unveiled a new policy on safeguards earlier this week, and it's moved from self-imposed guardrails to non-binding goals for AI safety."
Nancy Marshall-Genzer
"The Pentagon doesn't want any constraints on AI use in weapons. For example, if it has just minutes to fire weapons and needs AI to do it, it doesn't want to have to ask Anthropic for permission first."
Nancy Marshall-Genzer
"Protecting children from the worst of the internet is a pressing policy aim. There's plenty of evidence that children using social media platforms can face real harms. But the important thing here is that online anonymity has long been a key enabler for free expression, free speech, and access to online information."
Kion Vestensen
"There are promising efforts being developed right now to do age verification in a way that's privacy-preserving, but they're not ready to go to market."
Kion Vestensen
Full Transcript
When business ethics and the military's view on national security come into conflict. I'm David Brancaccio in Los Angeles. The artificial intelligence company Anthropic is loosening some of its core safety principles. This at the same time the company faces pressure from the Pentagon to roll back limitations on how Anthropic's Claude AI models are used. Marketplace's Nancy Marshall-Genzer is here now with some details. Well, David, Anthropic unveiled a new policy on safeguards earlier this week, and it's moved from self-imposed guardrails to non-binding goals for AI safety. In a blog post on Tuesday, the company said under its old policy, if Claude became capable of, say, helping build a weapon, Anthropic would adopt new, stricter safeguards. And it hoped other companies would do the same and governments would coordinate with it on this. And that just did not happen. Now, there's also pressure from the administration on related matters. What's the Pentagon's concern? Well, there are reports that Defense Secretary Pete Hegseth has given Anthropic an ultimatum, and Hegseth wants the company to roll back its rules even more by tomorrow, or it could lose a Defense Department contract worth $200 million. The Pentagon doesn't want any constraints on AI use in weapons. For example, if it has just minutes to fire weapons and needs AI to do it, it doesn't want to have to ask Anthropic for permission first. But Anthropic wants to be sure Claude isn't used for things like government surveillance or autonomous weapons. And Hegseth has other tools to pressure Anthropic via what, its business partners? Axios is reporting the Defense Department could designate Anthropic as a supply chain risk. As a first step in that process, Axios says the Pentagon is asking major contractors if they use Claude. Secretary Hegseth could also invoke the Defense Production Act to compel Anthropic to ease up even more on its rules. Nancy Marshall-Genzer out of Washington.
For the AI microchip maker NVIDIA, it was like winning an Oscar, a Grammy, an Emmy, and a Nobel Prize all at the same time. And then your audience says, all right, but what else you got? The company posted blockbuster quarterly profits late yesterday with revenue up 73 percent, with predictions the next quarter should be higher than that. Yet in pre-market trading now, NVIDIA stock is up only a little. One puzzled analyst said maybe investors are frozen with shock about how good things are over there. Alternate explanation: the realization that trees don't grow to the sky, that eventually there has to be some limit to the AI mania. A landmark trial against Meta and YouTube is underway as the companies face evidence their platforms hurt children by damaging their mental health. This comes as lawmakers around the world are pushing new safety laws that could require users to verify their age by uploading maybe a government ID or submitting to a facial scan. But some digital rights advocates warn that, done wrong, systems to make the online world safer for children could put sensitive private data in the wrong hands. We're joined now by Kion Vestensen. He's a senior researcher at Freedom House, a nonprofit focused on democracy and human rights. Welcome. Thanks for having me, David. Age verification for what we get access to online, I mean, to keep younger people away from harmful or age-inappropriate content. You're not against that in itself. That's right. Protecting children from the worst of the internet is a pressing policy aim. There's plenty of evidence that children using social media platforms can face real harms. But the important thing here is that online anonymity has long been a key enabler for free expression, free speech, and access to online information. And we need to make sure that we protect it. It's happened to me before.
There was somebody tampering with one of my online accounts, and I think it was Meta, Facebook, that asked me to take a picture of myself holding up my driver's license. That should have made me more nervous at the time. Well, that's a really good example where you are opting into this face comparison to get something that's yours. But age verification measures introduced at scale pull an incredible amount of personal data into the online ecosystem. Last fall, Discord disclosed that hackers had breached a vendor doing age verification services. Discord estimates that in this one single breach, around 70,000 people had their government ID cards exposed in the hack, and now presumably transacted by cybercriminals on the internet. We should also anticipate that these companies will be a target for state hackers. Because there are good ways and bad ways to do this, there are ways that are more vulnerable, but are there ways you're persuaded, in this world of hackers, where there's a decent chance that your data will be safeguarded? There are promising efforts being developed right now to do age verification in a way that's privacy-preserving, but they're not ready to go to market. One model that's gaining steam involves creating third-party digital infrastructure that would check a government-issued identification card and then immediately delete any associated sensitive data. This would be a non-profit third-party tool. That service could then supply a token confirming someone's age when they request it in order to access a social media platform. But it's going to take time and money to figure out how to do this in a privacy-preserving way. And as we invest in developing these tools, policymakers should look towards other mechanisms rather than these sort of blunt hammer age verification approaches. Kion Vestensen is a senior researcher at Freedom House, a nonprofit that focuses on democracy and human rights. Thank you for this briefing. Thanks for having me.
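The third-party token model Vestensen describes can be sketched in a few lines of code. This is purely an illustration, not any real vendor's system: the HMAC signing key, the field names, and the two functions are invented for the example (a real deployment would use public-key signatures and a stronger attestation format). The point it demonstrates is the data flow, where the verifier sees the ID once and discards it, and the platform sees only a signed yes/no claim.

```python
import hashlib
import hmac
import json
import secrets
import time

# Hypothetical secret shared between the non-profit verifier and the
# platform; real systems would use public-key signatures instead.
VERIFIER_KEY = secrets.token_bytes(32)

def issue_age_token(id_record: dict, key: bytes = VERIFIER_KEY) -> str:
    """Check a government ID record, then discard it.

    Only a signed over-18 attestation leaves this function; none of
    the sensitive ID fields are stored or included in the token.
    """
    is_adult = id_record["birth_year"] <= time.gmtime().tm_year - 18
    claim = json.dumps({"over_18": is_adult, "issued": int(time.time())})
    sig = hmac.new(key, claim.encode(), hashlib.sha256).hexdigest()
    # id_record goes out of scope here; nothing identifying is retained.
    return claim + "." + sig

def platform_accepts(token: str, key: bytes = VERIFIER_KEY) -> bool:
    """The social media platform sees only the attestation, not the ID."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(key, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(claim)["over_18"]

token = issue_age_token({"name": "Jane Doe", "birth_year": 1990})
print(platform_accepts(token))  # an adult's token verifies and is accepted
```

Note the design choice this makes concrete: the platform never handles the ID card at all, so a breach like the Discord vendor's would expose only short-lived tokens rather than 70,000 government IDs.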
And in Los Angeles, I'm David Brancaccio. You're listening to the Marketplace Morning Report. From APM American Public Media. Want even more Marketplace? Sign up to receive weekly tips from our editorial team to help you make the most of your money. Plus, you'll also be the first to know about exclusive Marketplace merchandise and local events. Text MARKETPLACE to 80568 to sign up.