The AI Daily Brief: Artificial Intelligence News and Analysis

AI Is Officially Political

28 min
Mar 6, 2026
Summary

OpenAI and Anthropic are locked in a high-stakes political battle over Pentagon AI contracts, with Anthropic refusing to compromise on safeguards against domestic surveillance and autonomous weapons. Meanwhile, OpenClaw emerges as a transformative AI agent platform reshaping global tech development, and Google demonstrates significant multimodal AI capabilities through Notebook LM's new cinematic video features.

Insights
  • AI has transitioned from a niche technology issue to a mainstream political battleground, with government contracts and regulatory frameworks now directly influencing corporate strategy and competitive positioning
  • The OpenClaw phenomenon is driving unprecedented global developer adoption and competitive urgency, particularly in China where large tech companies are racing to integrate agent capabilities into their platforms
  • Political donations, regulatory alignment, and public messaging are now material business factors for frontier AI companies, creating reputational and contractual risks that extend beyond technical capabilities
  • The Anthropic-Pentagon dispute reveals fundamental disagreements about AI safety implementation, with Anthropic arguing that model-level safeguards are largely safety theater compared to contractual restrictions
  • Revenue growth in frontier AI (OpenAI $25B+ ARR, Anthropic $19B ARR) is outpacing concerns about CapEx, suggesting the industry is moving toward sustainable business models despite continued infrastructure investment
Trends
  • AI agent platforms becoming primary development focus globally, with OpenClaw driving hackathons and competitive product launches across US and China
  • Government-AI company relationships becoming politicized, with administration alignment affecting contract awards and supply chain designations
  • Defense contractors proactively de-risking by removing Claude from systems due to supply chain risk threats, creating cascading business impact beyond formal designations
  • Multimodal AI capabilities (video generation, image synthesis, voiceover) becoming table-stakes for enterprise AI tools and competitive differentiation
  • Chinese tech companies moving faster on agent integration than Western cloud giants, suggesting potential competitive advantage in agentic AI deployment
  • Data center energy agreements becoming political leverage points, with companies pledging to fund infrastructure expansion to manage grid impact concerns
  • Internal company culture becoming externally visible political battleground, with employee departures and retention tied to corporate policy positions
  • AI safety standards (AIUC1) emerging as enterprise adoption enabler, with third-party certification and insurance products unlocking institutional buying
  • Revenue leaks becoming competitive intelligence tool, with companies strategically releasing ARR numbers to counter competitor announcements
  • Political coalition-building around AI policy spanning ideological extremes (Steve Bannon to Ralph Nader), creating unpredictable regulatory landscape
Topics
  • OpenClaw AI Agent Platform Adoption
  • Anthropic-Pentagon Contract Dispute
  • AI Supply Chain Risk Designation
  • Frontier AI Company Revenue Growth
  • Multimodal AI Video Generation
  • AI Safety Standards and Certification
  • Government AI Policy and Regulation
  • Defense Contractor AI Integration
  • Data Center Energy Infrastructure
  • AI Agent Development in China
  • OpenAI vs Anthropic Competition
  • AI Domestic Surveillance Safeguards
  • Autonomous Weapons Policy
  • Enterprise AI Agent Insurance
  • Political Influence on AI Contracts
Companies
OpenAI
Announced Pentagon contract, leaked $25B+ ARR figures, competing with Anthropic for defense contracts and market dominance
Anthropic
Refused Pentagon contract terms on surveillance/weapons, facing supply chain risk designation, CEO Dario Amodei accused OpenAI of bad faith in leaked memo
NVIDIA
CEO Jensen Huang called OpenClaw most important software release ever, discussed $30B OpenAI investment and token economy thesis
Google
Launched cinematic video overviews in Notebook LM using Gemini 3 models, demonstrating multimodal AI capabilities
ByteDance
Offering hosted OpenClaw instances to customers as part of Chinese tech company race to integrate agent capabilities
Alibaba
Offering hosted OpenClaw instances to customers alongside ByteDance and Tencent in competitive agent platform deployment
Tencent
Offering hosted OpenClaw instances to customers as major Chinese tech player integrating OpenClaw into platform strategy
Moonshot
Kimi creator offering cloud-based OpenClaw versions within proprietary apps to attract new users in China
Minimax
Offering cloud-based OpenClaw versions within proprietary apps alongside Moonshot in Chinese agent platform competition
Eleven Labs
First voice agent certified against AIUC1 standard, launching insurable AI agent with enterprise safety guarantees
Microsoft
President Brad Smith attended White House data center energy pledge signing, signatory to Big Tech Pledge
Amazon
Signatory to Big Tech Pledge on data center energy, benefiting from NVIDIA ramping AWS for OpenAI partnership
Meta
Signatory to Big Tech Pledge on data center energy infrastructure and power supply commitments
Oracle
Signatory to Big Tech Pledge on data center energy infrastructure alongside other hyperscalers
xAI
Signatory to Big Tech Pledge on data center energy, representing AI startup building significant infrastructure
Palantir
Referenced in Anthropic CEO memo as aligned with Pentagon/Trump administration against Anthropic
Facebook/Meta
React library compared to OpenClaw in GitHub Stars growth metrics discussed by Jensen Huang
People
Jensen Huang
NVIDIA CEO called OpenClaw most important software release ever, discussed token economy and $30B OpenAI investment
Dario Amodei
Anthropic CEO published leaked memo accusing OpenAI of bad faith, defending refusal to compromise on Pentagon contract terms
Sam Altman
OpenAI CEO accused by Amodei of acting in bad faith, announced Pentagon deal, addressed employee concerns in all-hands
Pete Hegseth
Defense Secretary threatened Anthropic with supply chain risk designation over contract disagreements
Donald Trump
President declared Anthropic persona non grata, signed Big Tech Pledge on data center energy, appointed AI czar
Emil Michael
Pentagon Undersecretary for Research and Engineering, former Uber executive restarting negotiations with Anthropic
Paul Nakasone
Former NSA/Cyber Command Director, OpenAI Board Member, stated US needs both Anthropic and OpenAI partnerships
Max Schwarzer
OpenAI reinforcement learning lead announced departure to join Anthropic amid Pentagon contract controversy
Brad Lightcap
OpenAI COO attended White House data center energy pledge signing
David Sacks
AI czar lauded Big Tech Pledge on Twitter, critiqued opposing data center policies
Felix Tao
Mindverse AI co-founder quoted on Chinese founders racing to build OpenClaw-based projects
Dongxi Q
Qvera co-founder quoted on competitive urgency driving OpenClaw adoption in China
Bernie Sanders
Released video with AI doomers like Eliezer Yudkowsky, positioning AI as political issue
Eliezer Yudkowsky
AI doomer and safety researcher featured in Bernie Sanders video on AI concerns
Max Tegmark
Future of Life Institute co-founder announced pro-human AI declaration with 90 signatories
Steve Bannon
MAGA influencer and former presidential advisor signed pro-human AI declaration
Ralph Nader
Consumer advocate signed pro-human AI declaration alongside ideologically diverse coalition
Quotes
"OpenClaw is probably the single most important release of software probably ever. Linux took some 30 years to reach this level. OpenClaw in, what is it, three weeks, has now surpassed Linux."
Jensen Huang, NVIDIA CEO (Morgan Stanley TMT conference)
"I want to be very clear on the messaging that is coming from OpenAI and the mendacious nature of it. This is an example of who they really are, and I want to make sure everyone sees it for what it is."
Dario Amodei, Anthropic CEO (leaked memo, Friday night)
"These kinds of approaches, while they don't have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater."
Dario Amodei, Anthropic CEO (leaked memo)
"The real reasons DOW and the Trump admin do not like us is that we haven't donated to Trump while OpenAI and Greg Brockman have donated a lot."
Dario Amodei, Anthropic CEO (leaked memo)
"Ultimately, this is about our warfighters having the best tools to win a fight, and you can't trust Claude isn't secretly carrying out Dario's agenda in a classified setting."
Administration official (Axios comment)
Full Transcript
Today on the AI Daily Brief, AI is officially political, and how. Before that, in the headlines, is OpenClaw the most important software release ever? The AI Daily Brief is a daily podcast and video about the most important news and discussions in AI. All right, friends, quick announcements before we dive in. First of all, thank you to today's sponsors, Recall AI, Robots and Pencils, AIUC, and Blitzy. To get an ad-free version of the show, go to patreon.com slash ai daily brief. If you are interested in sponsoring the show or really anything else in the AIDB ecosystem, head on over to aidailybrief.ai. While you are there, two things that I want to call your attention to. First, it's the last day to do our February pulse survey. Appreciate everyone who has done that. This will just take a couple minutes and it helps us track AI usage and give people data about what's actually going on and what's trending and what's changing. And if you contribute, you get that data before anyone else. And the other: it's the last day to sign up for the first edition of Enterprise Claw. You can find that at enterpriseclaw.ai. With that, let's go over to the headlines and some big words from Jensen Huang. We kick off today with a fun little quote from NVIDIA CEO Jensen Huang at the Morgan Stanley TMT conference on Wednesday. Jensen absolutely waxed lyrical about OpenClaw, saying, OpenClaw is probably the single most important release of software probably ever. Linux took some 30 years to reach this level. OpenClaw in, what is it, three weeks, has now surpassed Linux. It is now the single most downloaded open source software in history. Now, what he's specifically referring to is not the idea that overall OpenClaw has more downloads than Linux or Facebook's React library. 
What he's referring to is this chart that's flying around, which is true, of the GitHub Star history of these projects, where OpenClaw is officially ahead of those vaunted projects in GitHub Stars and has done so extremely quickly. Now, hold aside the specific details. The context really matters here. OpenClaw is a phenomenon that has fundamentally changed how people think about what AI can do. It has been ground zero in ushering in the true agent era. And one of the more consequential parts of Jensen's comments is that they came at a Wall Street conference, clearly signaling that personal agents are a big deal and that investors need to get up to speed. This shift in AI is also aligned with Huang's predictions about where the industry is going. For more than a year, Huang has been conceptualizing AI tokens as the new fundamental unit of work in GDP. During his talk, Huang updated this thesis and claimed the so-called token economy is coming into focus. Jensen also discussed NVIDIA's recent $30 billion investment in OpenAI, specifically in the context of it not being the $100 billion deal that was rumored to be in the works last year. He said, I think the opportunity to invest $100 billion in OpenAI is probably not in the cards. Not because NVIDIA has gotten any less bullish on the company, but because Jensen's base case is that they IPO by the end of the year. Meaning, in his words, this might be the last time we'll have the opportunity to invest in a consequential company like this. Huang added that NVIDIA's $10 billion investment in Anthropic late last year was also probably their last. Which isn't to say that NVIDIA won't continue to benefit from the success of those companies. For example, Jensen commented that Amazon's gigantic compute partnership with OpenAI means that NVIDIA is, quote, ramping AWS like mad. Now, OpenClaw is not just a US phenomenon. 
In fact, The Information recently reported on the many ways OpenClaw is changing what Chinese founders are building. They highlighted a recent OpenClaw hackathon in China, where one contestant made Tinder for AI agents, basically where OpenClaws can find love interests for their humans. Another created an automated recruiting site where OpenClaws owned by job seekers and companies interview each other. There was also a gamified social media and travel platform that hosts content created by OpenClaws. Felix Tao, the co-founder of Mindverse AI, said, Every founder I know is now working on new projects to test the boundaries of what personal AI agents can do. One of the interesting differences in the Chinese tech scene is the large companies diving straight into the new agentic trend. ByteDance, Alibaba, and Tencent are now all offering hosted OpenClaw instances to customers, something that none of the Western cloud giants have done so far. Kimi creator Moonshot and Minimax are also offering cloud-based versions of OpenClaw within their proprietary apps as a way to draw in new users. The article also mentioned numerous startups and founders working on OpenClaw projects, either building features on top of OpenClaw or spinning up competitors in the personal agent space. Qvera's co-founder Dongxi Q said, Tech entrepreneurs in China responded immediately to OpenClaw and launched new projects because they knew all of their competitors would be doing the same. Nobody wants to be left behind. Parker Lyman of Maness even tweeted, This is how competitive it is in China. OpenClaw installers have started offering two hours of house cleaning as part of the package in order to win clients. They'll even list any items you want to declutter on a second-hand marketplace. All for $57. Writes Lenny Rachitsky of Lenny's Podcast, I don't think enough people are appreciating how insane this is. Over 80 OpenClaw meetups scheduled around the world and more popping up every day. 
For a product less than a few months old. I've never seen anything like this. Something very special is happening. Now moving over to the numbers game. Just one day after Anthropic's revenue numbers were leaked to the press, OpenAI struck back and leaked a larger number. On Tuesday, Bloomberg reported that Anthropic had surpassed $19 billion in ARR, more than doubling their run rate since the end of last year. That put them within striking distance of OpenAI, who told investors they had closed 2025 with more than $20 billion in ARR. Now, as soon as I heard that Anthropic was officially at basic parity with the last number that we got from OpenAI, I just knew that we were somehow from some leaker going to get new OpenAI numbers. And sure enough, late last night, The Information reported that OpenAI now has exceeded $25 billion in ARR. They also firmed up their 2025 estimate, claiming they actually ended the year with $21.4 billion. That makes for a 17% jump over the first two months of 2026, which, if it were not for Anthropic's staggering 36% gain in the last couple of weeks, we'd be talking about with just as much slack in our jaws. Sources added that OpenAI's ARR calculation was based on revenue averaged over the past four weeks, but if they extrapolated just the past week, ARR would be even higher at $30 billion. Derek Thompson tweeted about all this, AI might still be an industrial bubble because almost every big tech is a bubble of some kind and the revenue has a long way to catch up to CapEx, but the idea that this industry has no business model is a take aging like a rotted banana. Lastly today, something which I am absolutely going to come back to and do more of an operators-focused episode on at some point, Notebook LM can now create fully animated videos to accompany reports. Google is calling these cinematic video overviews, and the results are pretty impressive. 
The demo showed a brief clip of a video overview about mathematical limits using images and video with some very cool space-themed visualizations. Now, we did previously have video overviews, but up until now, they'd just been slideshows. They were already a useful extension of audio overviews, but there wasn't as much of a, let's say, wow factor. The new cinematic video overviews are immediately more striking and pretty much guaranteed to make people wonder how they were made. Specifically, they feel more like a native video presentation with custom animations and images, rather than a simple slideshow leveraging stock images. Robert Scoble presented an even more impressive example, sharing a video based on summarizing AI chatter on X over the past few days. The video opens up on an animation of a da Vinci-style contraption, as the voiceover discusses how AI discourse has moved on from chatbots to discuss infrastructure, agents, and politics. The video flips through various generated images in a matching style, making the entire presentation feel like a coherent whole. It also draws on real photos where relevant. Scoble said that he analyzed tweets and generated the script externally, but the rest of it was straight from Notebook LM, which also generated an audio podcast and a mind map. Now, one of the things that we've talked about numerous times this year is how much Google's product strategy, I think, is about flexing their lead in multimodal AI. And one could argue that this is one of the bigger flexes to date, especially if you factor in actual immediate-term relevance for real people and real workers. Cinematic video overviews orchestrate the Gemini 3 family of models, Nano Banana Pro, and Veo to weave together voiceover, images, and video in a way that just feels like the beginnings, at least, of a professional video production. What's more, this is not your grandpa's 10-second video clip. Scoble's video, for example, runs for almost five minutes. 
Describing the new tech, Google wrote, Gemini now acts as a creative director, making hundreds of structural and stylistic decisions to tell the best story with your sources. It determines the best narrative, visual style, and format, and even refines its own work to ensure consistency. Now, at this stage, the only downside is that the feature is exclusive to the top-tier Ultra subscription, making me once again grateful that my job justifies holding one of those types of subscriptions for all the major players. Very cool stuff from Google, something I'm very excited to play around with more. For now, however, that's going to do it for the headlines. Next up, the main episode. Teams, in-person meetings, and more, so developers don't have to build it themselves. If you're building a meeting note-taker or anything involving conversational data, Recall.ai is the API for meeting recording. Get started today with $100 in free credits at Recall.ai slash AIDB. That's Recall.ai slash AIDB. Most companies don't struggle with ideas. They struggle with turning them into real AI systems that deliver value. Robots and Pencils is a company built to close that gap. They design and deliver intelligent cloud-native systems powered by generative and agentic AI, with focus, speed, and clear outcomes. Robots and Pencils works in small, high-impact pods. Engineers, strategists, designers, and applied AI specialists working together to move from idea to production without unnecessary friction. Powered by RoboWorks, their agentic acceleration platform, teams deliver meaningful results including initial launches in as little as 45 days depending on scope. If your organization is ready to move faster, reduce complexity, and turn AI ambition into real results, Robots and Pencils is built for that moment. Start the conversation at robotsandpencils.com slash AI Daily Brief. That's robotsandpencils.com slash AI Daily Brief. Robots and Pencils, impact at velocity. 
There's a new standard that I think is going to matter a lot for the enterprise AI agent space. It's called AIUC1, and it bills itself as the world's first AI agent standard. It's designed to cover all the core enterprise risks, things like data and privacy, security, safety, reliability, accountability, and societal impact, all verified by a trusted third party. One of the reasons it's on my radar is that Eleven Labs, who you've heard me talk about before and is just an absolute juggernaut right now, just became the first voice agent to be certified against AIUC1 and is launching a first-of-its-kind insurable AI agent. What that means in practice is real-time guardrails that block unsafe responses and protect against manipulation, plus a full safety stack. This is the kind of thing that unlocks enterprise adoption. When a company building on Eleven Labs can point to a third-party certification and say our agents are secure, safe, and verified, that changes the conversation. Go to AIUC.com to learn about the world's first standard for AI agents. That's AIUC.com. Want to accelerate enterprise software development velocity by 5x? You need Blitzy, the only autonomous software development platform built for enterprise codebases. Your engineers define the project, a new feature, refactor, or greenfield build. Blitzy agents first ingest and map your entire codebase. Then the platform generates a bespoke agent action plan for your team to review and approve. Once approved, Blitzy gets to work autonomously generating hundreds of thousands of lines of validated end-to-end tested code. More than 80% of the work completed in a single run. Blitzy is not just generating code, it's developing software at the speed of compute. Your engineers review, refine, and ship. This is how Fortune 500 companies are compressing multi-month projects into a single sprint. Accelerating engineering velocity by 5x. Experience Blitzy firsthand at Blitzy.com. That's B-L-I-T-Z-Y dot com. 
Welcome back to the AI Daily Brief. When it comes to what I cover on this show, I have a strong preference, as you guys well know at this point, for changes and updates that are directly and immediately relevant to you and your lives and your work. And yet, of course, all of those changes are happening in a larger societal context that we can't ignore. And right now we are in a particularly notable moment in the history of the politics of AI, which I would describe as something like: if AI has flirted with politics so far, it is now through this phase becoming much more discretely and distinctly a political issue. The Verge goes even farther, writing in a recent piece that AI is now part of the culture wars. And with a recent memo from Anthropic CEO Dario Amodei, the culture-war-ness of this conversation is likely to get worse, not better. I'm sure at this point you've been keeping up to speed with the Anthropic-Pentagon bun fight, but the quick TLDR is that Anthropic had a couple of red lines around domestic surveillance and autonomous weapons that they refused to change in their contract, which really ticked off Defense Secretary Pete Hegseth, which led to all sorts of threats of the U.S. government designating Anthropic as a supply chain risk, which is not something that the U.S. government has historically done to American companies, which led to memos and much public fighting last week, finally culminating in President Trump blasting out on Truth Social that Anthropic was now persona non grata with the U.S. government, and Hegseth following up that not only would they not be working with Anthropic, they would in fact be pursuing the supply chain risk designation and pushing other defense contractors to stop working with Anthropic as well. On the same day that this was all going down, OpenAI announced their own deal with the Department of War, and it has just been a mess. 
In the wake of OpenAI announcing their deal last Friday night, Anthropic CEO Dario Amodei published a 1 a.m. memo that was not happy with basically anyone. The memo was later leaked to The Information, and Amodei got right to the point. He opened the memo by writing, I want to be very clear on the messaging that is coming from OpenAI and the mendacious nature of it. This is an example of who they really are, and I want to make sure everyone sees it for what it is. Dario explained that while we didn't know exactly what was in the OpenAI contract, he had a few impressions about how their safeguards would work. He suggested that OpenAI would deploy a model without legal restrictions, but with a safety layer that amounts to model refusals on certain tasks. Amodei continued, Our general sense is that these kinds of approaches, while they don't have zero efficacy, are, in the context of military applications, maybe 20% real and 80% safety theater. He explained that applications like autonomous weaponry or domestic surveillance rely on contexts that the model can't be privy to, such as the presence of a human in the loop or the provenance of surveillance data. Amodei also alleged that the idea that Anthropic were offered the same terms as OpenAI and rejected them was false. He added that he also believed it was false that OpenAI's terms meaningfully prevent AI use in domestic mass surveillance or autonomous weaponry. Circling back to earlier statements, Dario reiterated the core concern that the DOW has legal surveillance powers which are, quote, not of great concern in the pre-AI world but take on a different meaning in a post-AI world. Amodei wrote that Anthropic's negotiations on Friday had ultimately come down to a single clause in the contract. According to his retelling of events, the Pentagon had agreed to everything Anthropic had asked for, but required them to delete the specific phrase about analysis of bulk acquired data. 
He said, this exactly matched the scenario we were most worried about. We found that very suspicious. On autonomous weapons, Amodei said the Pentagon had argued that a human in the loop is required under the law, but Dario noted that this is only Pentagon policy, which was added during the Biden administration and could be changed at will by Secretary Hegseth, adding, so it is not, for all intents and purposes, a real constraint. Still, a lot of the details of the negotiations were kind of secondary to the main point he was trying to make. Specifically, he said that a lot of the messaging from OpenAI and DOW are, quote, just straight-up lies about these issues or tries to confuse them. In pretty much no uncertain terms, he accused Sam Altman of acting in bad faith, suggesting that all of his appearances to support Anthropic in public were just about him acting in a way that, quote, doesn't make it seem like he gave up on the red lines and sold out when we wouldn't. In the spiciest and perhaps most politically fraught part of the memo, Dario argued that the disagreement didn't actually have to do with the contract. He wrote, The real reasons DOW and the Trump admin do not like us is that we haven't donated to Trump while OpenAI and Greg Brockman have donated a lot. We haven't given dictator-style praise to Trump while Sam has. We have supported AI regulation, which is against their agenda. We've told the truth about a number of AI policy issues like job displacement. And we've actually held our red lines with integrity rather than colluding with them to produce safety theater for the benefit of employees. Which I absolutely swear to you is what literally everyone at the DOW, Palantir, our political consultants, etc. assumed was the problem we were trying to solve. Sam is now, with the help of the DOW, Dario continues, trying to spin this as if we were unreasonable. We didn't engage in a good way, we were less flexible, etc. 
I want people to recognize this as the gaslighting it is. Coming to a conclusion, Dario writes, Thus Sam is trying to undermine our position while appearing to support it. I want people to be really clear on this. He's trying to make it more possible for the admin to punish us, by undercutting our public support. Finally, I suspect he is even egging them on, though I have no direct evidence for this last thing. Dario argued that the narrative was mostly failing with the general public, but had been successful with some, in his words, Twitter morons. My main worry, he concludes, is how to make sure it doesn't work on OpenAI employees. Due to selection effects, they're sort of a gullible bunch, but it seems important to push back on these narratives which Sam is peddling to his employees. So boy howdy, a lot to unpack in this. I think it's important to keep in mind that this was Friday night, right as this was all going down. And I think that there are a couple possible interpretations. One is that this was some type of strategic play, either a strategic recruitment play, in other words, to get disaffected OpenAI staffers to come over and join Anthropic, or an attempt to lean into anti-administration sentiment, basically an act of App Store politics. Anthropic at the time of Dario's writing had not yet hit number one in the App Store charts, but already it had rocketed up to number two. The other possible interpretation, though, of course, is effectively that this was just a crash out, that it wasn't super considered, and that any of these strategic outcomes were just secondary to the fact that it was a CEO venting in a sort of private forum that then would become public. This seems to be what Zvi Mowshowitz thinks, writing, Dario was obviously on megatilt here, same as everyone else on Friday, and the inflammatory stuff especially about the White House is deeply effing stupid to say. White House was trying to de-escalate and Dario needs to eat some crow ASAP. 
Now Zvi here is genuinely sympathetic to Anthropic and AI safety in general, and so I think it's notable that that interpretation is coming from him. Unsurprisingly, it does seem that the administration was not happy about this. Axios business editor Dan Primack wrote, Amodei's blog post is said to have infuriated Defense Department officials who believe he was trying to virtue signal to (a) Anthropic employees upset about the Venezuela revelations, and (b) AI engineers at rival companies who might share similar concerns. Now, I'm pretty sure Dan was talking about a previous memo, not this most recent one, but implying that the same logic from the previous memo applies to the Friday night writing as well. It is worth noting as we interpret things that Dario has never been a big fan of Trump. A news article from last September reported that in a Facebook post urging friends to vote for Kamala Harris, Amodei had likened Trump to a feudal warlord. He also cut ties to a number of law firms who had made deals with the president. While pretty much everyone agreed that this was not going to work out all that well for Anthropic vis-à-vis the White House, even if they generally supported Dario's position, there were more mixed feelings around his accusations with regard to Sam and OpenAI. Dean Ball wrote, I do not share the cynicism of some with respect to OpenAI's actions in the DOW-Anthropic dispute. It basically seems to me as though OpenAI was attempting to de-escalate last week. Whether they executed well is a separate question, but in their defense, good execution in such chaos was nearly impossible. It seems OpenAI tried to reduce tensions and find a productive path forward, while allowing its employees considerable latitude to speak their minds. The easy thing would have been for management to stay quiet and let this happen. They did not do that, and they also stood firm in opposition to the supply chain risk designation. In general, OpenAI is unjustly maligned. 
This is the thing that bothers me the most about Dario's leaked memo. It spends so much time on OpenAI conspiracies and cynicism that I fear industry solidarity in the future will be harder than it needs to be. This is not the last time we will see state interference into frontier AI, and until we build formalized structures for such interference, it will be important for the industry to hang tough together. I fear that will be less likely now. Interestingly, Sam Altman seems to agree with Dean that the particulars of the Pentagon contract weren't handled as well as they might have been. During his first all-hands dealing with the issue, Altman said that he didn't regret signing the deal but wished he hadn't rushed to announce it last Friday night. Echoing previous comments, he said the announcement made OpenAI look opportunistic and not united with the field. Sources said the tone of the all-hands meeting was respectful, with employees trying to drill down on the details in the contract. Altman apparently empathized with the mood in the room, saying, to try so hard to do the right thing and get so absolutely personally crushed for it, and I know this is happening to all of you too, so I feel terrible for subjecting you to this, is really painful. A source speaking with the New York Post said that the reaction within the company was largely positive, save for a small group. They said from the internal messages, people are pragmatic and agree that Friday night was perhaps a little rushed and not the best communication. But now that there is more information, it feels like everybody is generally positive, save for like these 30 people who are always the ones raising questions. And while no one has publicly quit over the contract, reinforcement learning lead Max Schwarzer announced on Monday that he had decided to leave OpenAI to join Anthropic, which basically everyone assumed was a direct response to this.
That said, not only did Schwarzer not throw OpenAI under the bus, he tried to give at least a plausible reason for his move that wasn't this, saying that he wanted to return to doing individual work as a researcher rather than continuing in a management position. On Wednesday evening, the Financial Times reported that Anthropic had restarted negotiations with the Pentagon around the contract. Amodei was reportedly back in discussions with Department of War Undersecretary for Research and Engineering and former Uber executive Emil Michael. You might remember him as the person who referred to Amodei as a liar with a god complex just about a week ago. The reporting framed the talks as a last-ditch effort to strike a deal and avoid being labeled a supply chain risk, and while they said that the memo was likely to complicate negotiations, they did not include any sourcing about the administration's current outlook on it. Axios, however, did receive comment from the administration, which threw cold water on the prospect of a reconciliation. An administration official said, Ultimately, this is about our warfighters having the best tools to win a fight, and you can't trust Claude isn't secretly carrying out Dario's agenda in a classified setting. What's more, even before a formal supply chain risk designation, military contractors are already ripping out Anthropic's tech. CNBC reports that a number of defense contractors are telling employees to stop using Claude and switch to other models. The reporting directly references the threat to label Anthropic a supply chain risk as the cause. Opinions have been pretty unified that the designation goes way too far, including even from central figures at OpenAI. For example, on Monday, former NSA and Cyber Command Director and now OpenAI board member Paul Nakasone said, this is not a good space for our nation. We need Anthropic. We need OpenAI.
We need all of our large language model companies to be partnering with our government. The moves of the defense contractors show why these types of threats are so pernicious. No one who has mission-critical and business-essential contracts with the U.S. government is going to take those risks. Alexander Hartstrick of J2 Ventures, which has a focus in the defense space, said that already 10 of his firm's portfolio companies have, quote, backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one. Now, while this is undoubtedly the largest AI politics issue, and one that is thrusting it into the mainstream, it has political coattails that are dragging other things in as well. As elections get closer, the conversation around data centers, for example, is getting more heated as well. This week, the president finalized the Big Tech Pledge on data center energy use. The pledge was signed on Wednesday at a White House roundtable with several tech executives in attendance. Attendees included Microsoft President Brad Smith and OpenAI COO Brad Lightcap. Anthropic, of course, was not represented, but they also haven't begun building their own data centers. Seven companies signed the pledge, namely Google, Meta, Microsoft, Oracle, OpenAI, Amazon, and XAI. So this covers all of the hyperscalers as well as each AI startup currently building significant AI infrastructure. Substantively, the tech companies have pledged to bring their own power supply, either through constructing new power plants or paying to cover the cost of expanded infrastructure. The pledge doesn't prescribe any particular solution, but the president said that each company should negotiate directly with utilities to ensure they're paying an appropriate rate. The agreement states that the tech companies will be on the hook for additional costs even if they pull out of data center projects. 
That was presented as a key term that could assuage fears of overbuilding into an AI bust, with consumers left holding the bag. In addition, the companies signed up to contribute power back to local grids in times of need. These load management agreements have been in place in Texas for several years, and have proven fairly successful at keeping the grid operational during winter storms. The pledge is structured as an agreement with the president, so it's unclear if it carries any legal weight, but the president pointed out that this pledge is in the best interest of the hyperscalers. Articulating quite simply the obvious political truth, Trump said, they need some PR help because people think that if a data center goes in, their electricity prices are going to go up. Some centers were rejected by communities for that, and now I think it's going to be the opposite. AI czar David Sacks took to Twitter to laud the deal and critique opposing types of data center policies. Speaking of Bernie Sanders, it's very clear that he thinks this is a winning political issue, and one that he's very much not going to let go. He put out a video of himself flying to Berkeley, speaking with some of the more prominent AI doomers like Eliezer Yudkowsky, and then released the video to his Twitter. Geoff Shullenberger of Compact Magazine isn't sure that this is the right strategy for AI criticism. He writes, The economic populist view of AI is, or should be, quite different from the Yudkowsky and doomer view. However, because the latter is more narratively compelling and urgent seeming, economic populists seem to be embracing it. This is unfortunate. Finally, showing just what absolutely weird bedfellows AI issues are going to bring together, Future of Life Institute and AI safetyist Max Tegmark announced the pro-human AI declaration.
The Verge reports that a secret meeting took place back in January to sign this document, and the group of people represented among the 90 attendees are, to say the very least, scattered across the political spectrum. The signatories include everyone from MAGA influencer and former presidential advisor Steve Bannon to Ralph Nader. If you want to know more broadly what I think about the anti-AI movement, which parts of it we should be paying attention to, and how we should be engaging, I have a whole episode on that from last week. For now, for the purposes of this episode, the big thing that I want to track, and where we'll conclude, is that part of the fallout of the Anthropic and Pentagon fight is that something which has remained mostly on the sidelines so far as a political issue is now being absolutely thrust into the mainstream. Hopefully pretty soon we can get a reprieve from this, and in either case I'll probably try to dial back the coverage unless something truly huge happens, but that is where things stand from where I'm sitting. And that is going to do it for the AI Daily Brief. Thanks as always for listening or watching, and until next time, peace!