TBPN

Anthropic v. DoW, Paramount wins WB, OpenAI raises $100B | Diet TBPN

30 min
Mar 3, 2026
Summary

The episode covers the major clash between Anthropic and the US government over AI usage restrictions, leading to a federal ban on Anthropic's technology. The hosts also discuss Paramount's acquisition of Warner Brothers Discovery for $31 per share, beating Netflix's competing bid, and OpenAI's record $110 billion funding round.

Insights
  • Private AI companies face a fundamental tension between corporate control and government authority when building potentially transformative technology
  • The government's willingness to use supply chain risk designations against domestic AI companies signals a new era of tech-government relations
  • Media consolidation continues as streaming wars force companies into debt-heavy acquisitions to achieve scale
  • AI companies may need to choose between principled stances and government contracts as national security concerns intensify
  • The comparison to nuclear technology suggests AI governance may follow similar centralized control patterns
Trends
  • Government assertion of control over AI technology for national security purposes
  • Escalating tensions between AI safety advocates and defense applications
  • Media industry consolidation driven by streaming competition and debt financing
  • Record-breaking private funding rounds concentrated among AI leaders
  • Supply chain risk designations being applied to domestic technology companies
  • AI companies facing pressure to choose between ethics and government contracts
Companies
Anthropic
Banned by US government over disagreements on AI usage restrictions for defense applications
OpenAI
Raised record $110 billion funding round and contrasted with Anthropic's government stance
Paramount
Acquired Warner Brothers Discovery for $31 per share, beating Netflix's competing bid
Warner Brothers Discovery
Acquired by Paramount after bidding war with Netflix, received $2.8B breakup fee
Netflix
Lost bidding war for Warner Brothers but received $2.8B breakup fee from the deal
Palantir
Reported Anthropic employee inquiry about Claude's role in Venezuela raid to Pentagon
Block
Laid off 40% of workforce including 70% of engineers, cited as potential AI displacement example
Department of Defense
Central to conflict with Anthropic over AI usage terms and autonomous weapons restrictions
Nvidia
Investor in OpenAI's $110 billion funding round alongside Amazon and SoftBank
Amazon
Strategic investor in OpenAI's record-breaking $110 billion funding round
People
Dario Amodei
Anthropic CEO who defended company's stance on AI restrictions in CBS interview
Emil Michael
Undersecretary of War who detailed failed communications with Anthropic leadership
Donald Trump
President who ordered federal agencies to cease using Anthropic technology
David Ellison
Skydance Media owner who successfully acquired Warner Brothers Discovery after multiple rejections
Palmer Luckey
Defended government position on AI contracts and criticized corporate control over military tech
Ben Thompson
Analyst who compared AI governance to nuclear technology nationalization precedent
David Sacks
Shared a clip detailing Biden administration meetings in which officials threatened to take control of the AI industry
Scott Bessent
Treasury Secretary who announced termination of Anthropic products at Treasury Department
David Zaslav
Warner Brothers Discovery CEO praised as masterful dealmaker for maximizing sale price
Quotes
"I am directing every federal agency in the United States government to immediately cease all use of anthropics technology. We don't need it, we don't want it and we do not do business with them again."
Donald Trump
"We are a private company. We can choose to sell or not sell whatever we want. There are other providers at the same time."
Dario Amodei
"Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives?"
Palmer Luckey
"AI is going to be a game of two or three big companies working closely with the government, and we're going to basically wrap them in a government cocoon."
David Sacks
"If it doesn't close, we get 7 billion and we get back to work."
David Zaslav
Full Transcript
3 Speakers
Speaker A

It was a massive weekend. So much news. But we missed you. We missed you on Friday. We were traveling, we went to Montana.

0:02

Speaker B

Terrible day to be out.

0:08

Speaker A

Terrible day to be out because it

0:09

Speaker B

was, every single time. Yeah, we had an off day, and it ended up being a massive news day. So, lesson: yeah, never take a day off.

0:10

Speaker A

Yes, never take a day off. Truly, what an absolutely crazy weekend. Of course there's the war with Iran. The big news in tech was that the US halts the use of Anthropic AI after tension over guardrails. So this is in the Wall Street Journal. The federal government will stop working with artificial intelligence company Anthropic, President Trump said, marking a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon. Quote: I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it and we do not do business with them again. We will not do business with them again.

0:17

Speaker B

No, we don't negotiate with terrorists,

0:59

Speaker A

Trump said Friday in a social media post. The Defense Department and other agencies using Anthropic's Claude models will have a six month phase-out period, the President said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition. Six months to switch from one LLM to another feels like a long time, but I guess a lot of this has to do with FedRAMP

1:03

Speaker B

and actually getting certified. This is a lot more than, you know, switching to a new model to run deep research reports. So you're involving classified systems.

1:25

Speaker C

Sure.

1:34

Speaker B

The context that people didn't have last week was that the United States was headed to war. Right. And so even having that context, I feel like, is pretty important. Right. It sort of explains the 5pm deadline urgency. Anthropic had taken issue with how their products were used in the Maduro raid. There's a new conflict that's unfolding, and so that makes the aggressive timeline make a lot more sense. It also makes the six month phase-out make more sense, because national security is on the line. This morning, Scott Bessent said: at the direction of the President, the U.S. Treasury is terminating all use of Anthropic products, including the use of Claude within our department. The American people deserve confidence that every tool in the government serves the public interest. And under President Trump, no private company will ever dictate the terms of our national security.

1:34

Speaker C

Yeah.

2:26

Speaker B

The U.S. federal housing agencies, Fannie Mae and Freddie Mac, are also terminating the use of Anthropic products, which was announced this morning.

2:26

Speaker A

Yeah. Which I think goes in line with the original direction. Trump said: I am directing every federal agency in the United States government to immediately cease all use of Anthropic technology. So you would expect to see these statements come out from every different federal agency as they get their transition plan together and figure out, you know, what are the requirements for their particular agency. Because I imagine some agencies aren't operating in classified environments. It's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly. For some of them, it's going to be a longer plan, but they're all getting on board. And there's been a big debate over how Dario has handled this. Where is he in the right? Where is he in the wrong? Where has the government potentially overstepped? Have they been too aggressive, or are they doing everything appropriately? Everyone is weighing in, and we're going to take you on a whirlwind tour of everyone's opinion, share some extra context, and try to dig into what's actually at stake, what's actually going on. In many ways, Ben Thompson does a great job painting the broadest picture: what if this is really nuclear-level technology? What should we expect in that scenario? And then there's the more minor side, which is, you know, you're talking about a $200 million contract for a company that does 10 billion in ARR. This is 2% of revenue. In many ways it's a bump in the road. And so I think a lot of people will be squaring: how serious is this for Anthropic? What does this mean for the other foundation model companies? What does this mean for the future of the relationship between tech and Washington DC? But there's a lot more context. So the way I processed this was interesting, because I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part.
And so I was following it and sort of wrestling with some of the same questions that people were wrestling with online. The big one was just: how should a private company interface with the government? Like, I am an American, I've run businesses. I've never actually sold anything to the government, but hypothetically, I could imagine the government coming and wanting to buy, I don't know, ads on TBPN, or Lucy products, or any other consumer packaged goods product that I've made. My assumption is that the private company should have very little say in how the government uses those products. And I was trying to zoom out and think about how AI is so complicated, because it could be superintelligence, could be autocomplete, could be coding help, could be knowledge retrieval. There's a lot of different things that AI means. And in some scenarios it's super critical, really complex. And in other ways it's just a product, it's just a service, like an Excel sheet, like a Microsoft Windows installation, like a car. And so, yeah, I was thinking: if I was the CEO of Ford and I make Mustangs and Ford Explorers and F-150s, and the government comes to me and asks to buy some cars, I should probably treat them like any other customer. I probably shouldn't say, no, no, no, I don't approve of what this particular government is doing, so I'm just not going to sell you any Mustangs to drive around on the military bases, because I don't like the military. Then if they ask me, hey, we love the Ford Mustang, we love the F-150, we love the Explorer, but we're going to war and we want you to put bulletproof glass on there and armor. That seems like a different discussion. That seems like I might, you know, need to set up a different manufacturing line, I might need a different assembly line. Like, the car is going to be heavier, and if I put bulletproof plating on all the cars, well, a lot of families are going to be like, I don't want to arm.

2:32

Speaker B

It's going to hurt my business.

5:56

Speaker A

Yeah, it's going to hurt my business. Exactly. And so that negative externality probably needs to be internalized by the government that's asking for that particular contract. And there's actually a history of this. Like the Humvee, of course. The Hummer brand is owned by General Motors, and that brand has separated. And now most military vehicles are made by defense contractors. But there is some bleed-over, and there are times when private companies do dual sourcing or dual-use technologies. But all of that is just a discussion. And that cost should be part of a new contract, effectively, in my case. And this was loosely what was happening.

5:57

Speaker B

Yeah. And Dario in the CBS interview, quote, we are a private company. We can choose to sell or not sell whatever we want.

6:32

Speaker A

There are other providers at the same time. And we'll get to the actual CBS interview. But he said Anthropic has been one of the most proactive AI companies in working with the US government: we were the first to deploy models on classified clouds and the first to build custom models for national security. Which is odd, because I feel like this was predictable from a lot of the writing that has gone into the AI community broadly, like what happens at the edge. This was sort of predictable that you would get to this.

6:38

Speaker B

Yeah, this was. This was the moment he had been

7:06

Speaker A

waiting for in many ways. And so it's weird that you would be able to predict that this would happen, that there would be this question of who gets to decide how the technology is used, and you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go in the lion's den. Instead it was: we're leaning in with the government, we're deploying classified clouds, training custom models, but we still want authority over the final sticking point on how these models are deployed, what they're used for. And that feels a little odd in the Ford example. Like, if I sell them a Ford F-150 and they say, hey, we're going to take it to Iraq and go do a military mission, I'm going to be like, look, it's not ready for that, it's not armored. You shouldn't do that. But if they do it, then it's kind of on them. I should be clear about the capabilities of the vehicle and how bad it would be in that situation. But it's on them to go retrofit it, figure out what's legal, what's most valuable to their strategy, to their mission, what's aligned. Maybe they'll use it just to drive around the base. Maybe they won't actually take it out on tours of duty. Based on what you know about the capabilities of the model, I thought it was totally reasonable for Dario to say that Anthropic models, in his view, are not capable enough to be deployed in certain Department of War contexts. Now, it's bad salesmanship. Most salespeople would just be like, yeah, everything's great, you can use it for anything. They overpromise and underdeliver. He's doing the opposite, but it's certainly responsible. And if that's his true belief, if he believes that these models are not good for a particular use case, telling your customer, hey, it's just not ready for that, you're just going to have a bad time, it's not going to work: that's a fine thing to communicate as the CEO of a company who's selling a product.
But at the same time, I still think the government has the freedom to assess the efficacy of those models, which are changing in capability rapidly. And then I think the government should be able to determine when and where they're effective. They can't break the law. And Congress, and the American people by extension, are free to create new laws to restrict or encourage the use of technology in all sorts of ways. And that's the way America works. That's the American project. It's not unreasonable to share the capabilities of your product with the government, which I think is totally fine. So there were two main sticking points that they went back and forth on: no mass domestic surveillance, and no fully autonomous lethal weapons. And there's been a question as to why OpenAI was allowed to include that language in their contract.

7:08

Speaker B

Well, here's the thing, though. So we know that Anthropic took issue with the way that Claude was used in Venezuela. And the Department of War would have known that, hey, we're going to war. Right. You can imagine that Anthropic, a private company, does not know that. And so they have this deadline.

9:25

Speaker A

There's this information deadline.

9:40

Speaker B

Yeah, this information asymmetry. They have this deadline. The Department of War knows that they're going to war. They're like, we need reliable AI systems for this conflict. We now know, the President said this morning, that the war is going to stretch four to five weeks. Right. I think on Friday, we all assumed that it was going to be, you know, in and out super quickly. So the timeline is extending, and the Department of War is sitting there being like, we need to know that the provider of these AI systems is going to be reliable. Just a little bit ago, they took issue with it, right? Can we count on them? They start this kind of renegotiation process, right, to try to build up confidence that, hey, we can rely on these systems in an active conflict, in a conflict that feels already much more serious and will have much greater implications than the Venezuela conflict. Right. And so Anthropic is looking at this in a different way, and in some ways it felt like they were kind of not respecting the process, or even the deadline. Right. So Emil Michael came out Friday night and said: it's 5:13, 13 minutes past the deadline, I'm trying to get in touch with Anthropic, I try to get on the phone with Dario, Dario says he's in a meeting. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war (we can debate whether or not the war is justified, whether we should go), the Department of War is sitting there being like, you won't even jump on the phone? You're telling me there's a meeting that you're in that's more important? And that just screams to me: hey, we can't count on this provider. We need to take drastic action now. This whole supply chain risk designation, we'll get into that later. That's a whole other thing.
But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider. We need alternative solutions.

9:41

Speaker A

Yeah, yeah. If I'm shipping cars and I'm like, oh, I actually disagree with the latest decision, I'm not going to put the cars on the transport. A lot of people were, like, really keen on boiling down the terms to these two, like, buzzwordy lines. And Palmer Luckey did a great job explaining how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? And that's where you get into the idea of deals that stick. You can have the same exact contract, line item, or terms of a signed agreement with two different people, and it can be a wildly different experience. Most entrepreneurs have felt this, because they were like, yeah, I had a handshake deal with one VC, it was 20% and a board seat. And I had another deal with another VC, 20% and a board seat. And the one VC was suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows that there's some trust, some reliability, that when the hard decisions come, they will be made in a legal, logical, you know, consistent-with-American-values way, is, I think, what you need to put forward if you want to work with the government effectively. So Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. And the joke was that the Department of War was probably just asking basic knowledge retrieval questions like, who is Nicolas Maduro? But I don't know how much of a joke that is. And I also don't know how bad of a thing that is. I actually think, yeah, Tyler, on that,

11:37

Speaker B

on the context of Venezuela, like, specifically, what is actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid, a Palantir senior executive notified the Pentagon. So I think it is kind of blowing it out of proportion to say that, like, Anthropic is against using Claude in Venezuela. Right. It's an employee.

13:16

Speaker A

It's not an executive. There was an article about that too, though.

13:33

Speaker B

Maybe it's like Dario telling an employee to go check on that. But like, we don't know. It could just be like a random employee.

13:35

Speaker A

I was thinking back to that viral interaction between Ted Cruz and Tucker Carlson, where Tucker asks Ted Cruz, like, what's the population of Iran? And Ted Cruz doesn't know. And it was framed as, well, how can he possibly have a reasonable take on Iran if he doesn't even know the population? And that's, like, somewhat fair. You could go either way on that. But I just think LLMs are good for that type of thing. Like, what is reasonable is to, you know, expect civil servants, elected officials, military officials to be knowledgeable about the countries that they are operating in. And LLMs can help with that. And so I feel like that's just a good thing. Like, if you just zoom out and just ask, do we want a more knowledgeable and educated government workforce across everything that they do? It seems like absolutely yes. And so there was a perception that this was, like, going to kill Anthropic, because if Nvidia has a government contract, then they can't do any deals with Anthropic whatsoever. And that's not true. Apparently, the supply chain risk designation is specific: if you are a company and you're on a government contract, you would not be able to use anything that's labeled as a supply chain risk on that contract. But you could use that product in a different piece of your business. And so, still dramatic. I think Dario said it was unprecedented. It's only been used for foreign countries.

13:40

Speaker B

Yeah. Emil Michael was going through the timeline. He said: Today at 9:04pm, no response yet to my calls or messages to Dario. Today at 8:25, Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the undersecretary of War. Today at 5:14, the Secretary of War tweets the supply chain risk designation. Today at 5:02, I called Dario's business partner, asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak, and likely has lawyers in the room, with no notification to me. At 5:01, I called Dario. No answer. I messaged Dario asking to talk as well.

14:54

Speaker A

So, speaking of Dario: on CBS, he did unpack some more of his logic, which clearly resonated with some people. There were a lot of supportive posts, there were a lot of anti posts, but it caused a discussion. I was left unsatisfied with his answer on one question. So he was basically arguing that LLMs as a class of technology hallucinate and should not be used for autonomous weapons, which is clearly a commentary on using AI at the Department of War broadly. But I thought it would have just been better, like, much stronger communication, for him to say: hey, look, we are Anthropic. We've built a system that's specifically good at answering questions, being friendly and helpful, writing code. Our system is awesome at that, but we don't make a product that we'd recommend using for autonomous weapons. He is an expert in LLM capabilities, but he's not necessarily an expert in DoD capabilities. It was odd to hear him sort of painting with a broad brush. And he clearly believes, which is fair, it's his belief, but he clearly believes that the Department of War should not be using AI broadly. And then he was trying to use his contract as a way to sort of enforce that, because he has that leadership position with the deepest integration into classified systems. And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear. We do. We have the Fourth Amendment, which reads, literally: the right of the people to be secure in their persons, houses, papers and effects against unreasonable searches and seizures shall not be violated. I think people maybe forgot about that. But there is obviously a lot of nuance and different things. Like, if something is public information, does that count as surveillance? Does the IRS count as surveillance? Do automated traffic cameras count as surveillance?
Like, there are a lot of things where surveillance is broadly popular, and other things where it's massively unpopular. And of course, it gets into the actual definitions, 20 lines deep, to understand what happens in the court. There was a case recently of the government using a drone to surveil protests, and it was held up in court as acceptable. But the court gave notice that going forward, this should not be used and that the laws need to change. The whole debate right now is: is Dario, like, the god-king, corporate emperor of this private company that he has control over, and you don't get to vote on what he does, versus democracy, America, government? There are other reactions and other breakdowns. We can actually kick off with this breakdown of Ben Thompson's piece. Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like Claude, and as much as I dislike Hegseth's extralegal might-makes-right maneuvering, I will ask you again: what did you expect? This is the reality that all too many of my EA followers have been proclaiming in vibes essays for years now. They're seemingly upset that this reality has come to bear. One of Dario's favorite books is The Making of the Atomic Bomb. It tells the story of the scientists that built the atom bomb, and then eventually that technology was nationalized. And he apparently gives this book out to Anthropic employees and has sort of seen it as, like, a roadmap for what might happen with AI. Is it a cautionary tale? Like, we haven't had nuclear war in 70 years. We built the nuclear bomb. Probably, like, not the best technology. Pretty dangerous, pretty risky.
I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been successful. Knock on wood. It's been successful for my entire life and my parents' lives. The bombs haven't fallen since the 40s. And so, on this idea of the government having authority over something that is as powerful as nukes, I feel like: why fix it if it ain't broke?

15:27

Speaker B

Yeah. And the way that I was personally processing it, I was. I saw that the, the CBS interview had happened.

19:15

Speaker A

Yeah.

19:21

Speaker B

This was Friday night. Right. I went to the Paramount app to try to find the interview. Couldn't find it.

19:21

Speaker A

I went to the RSS feed. I couldn't find it either. It's on YouTube, and it has 1.3 million views.

19:27

Speaker B

Yeah. So it went out over the weekend. Almost in the same session, I'm seeing that we are now at war as a country. And so all the kind of blowback against OpenAI, I was processing that as: this technology is critical, the government clearly needs it, and now we want the labs leaning into working with the Department of War at this critical moment in time.

19:30

Speaker A

Even now, I hear many of you say something akin to: if this is what it comes to, I'd prefer King Dario to King Hegseth. Listen to yourselves. This is a declaration of war. Given this, of course Hegseth is taking the action he is now. You thought I was joking when I referred to this situation as a Thucydides trap. Anthropic is a rising power.

19:55

Speaker B

Heading over to Palmer. Emil is sharing: prior to their new constitution, Anthropic had an old one they desperately tried to delete from the Internet. Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort. Palmer says this gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like you cannot target innocent civilians, are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian or not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage? Imagine if a missile company tried to enforce the above policy, that their product cannot be used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you also have to account for questions like: what level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? At the end of the day, you have to believe that the American experiment is still ongoing. That people have the right to elect and unelect the authorities making these decisions. That our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corporates and their shadow advisors. I still believe. And that is why "bro, just agree the AI won't be evolved into autonomous weapons or mass surveillance, why can't you agree, it is so simple, please, bro" is an untenable position that the United States cannot possibly accept. 
And Emil Michael had said that Anthropic wanted to block searching over public databases as well.

20:14

Speaker A

Roman helmet guy says: Hi, I'm a private citizen who developed a super weapon potentially a thousand times more powerful than nukes, and now I'm selling it to the government. But I get to choose who they fire it at and how. Everyone, please respect my decision.

21:56

Speaker B

David Sacks had shared a clip:

22:09

Speaker C

In D.C. in May, where we talked to them about this, and the meetings were absolutely horrifying. And we came out basically deciding we had to endorse Trump.

22:14

Speaker B

Add a little color to "absolutely horrifying." What did you hear in those meetings?

22:23

Speaker C

They said: look, AI is one of these technologies. AI is a technology, basically, that the government is going to completely control. This is not going to be a startup thing. They actually said flat out to us: don't do AI startups. Don't fund AI startups. That's not something that we're going to allow to happen. They're not going to be allowed to exist. There's no point. They basically said, AI is going to be a game of two or three big companies working closely with the government, and, you know, I'm paraphrasing, but we're going to basically wrap them in a government cocoon. We're going to protect them from competition, we're going to control them, and we're going to dictate what they do. I said, I don't understand how you're going to lock this down so much, because, like, the math for AI is out there and it's being taught everywhere. And, you know, they literally said: well, you know, during the Cold War, we classified entire areas of physics and took them out of the research community, and entire branches of physics basically went dark and didn't proceed. And if we decide we need to, we're going to do the same thing to the math underneath AI. Wow. And I said, I've just learned two very important things, because I wasn't aware of the former, and I wasn't aware that you were even conceiving of doing it to the latter. And so they basically just said: yeah, look, we're going to take total control of the entire thing, so just don't.

22:27

Speaker A

And Mark, what was their argument? Steel-man it for the listener.

23:36

Speaker C

I'll do my best to steel-man it. So, one is just that, to the extent this stuff is relevant to the military, which it is, you draw an analogy between AI and autonomous weapons being the new thing that's going to determine who wins and loses wars. The analog in the Cold War was nuclear energy, nuclear power, and the atomic bomb. And the steel man would be: the federal government didn't let startups go out and build atomic bombs. Two, there's the social control aspect, which is where the censorship stuff comes right back in. It's the exact same dynamic we've had with social media censorship and how it's basically been weaponized, how the government became entwined with social media censorship, which is one of the real scandals of the last decade, a real constitutional problem, and it's happening at hyperspeed in AI. These are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies. And I think they want to use AI the same way. And then, third, I think this generation of Democrats, the ones in the White House under Biden, became very anti-capitalist and wanted to go back to much more of a centralized, controlled, planned economy. You saw that in many aspects of their policy. Quite frankly, the idea that the private sector plays an important role is not high on their priority list. They think companies are bad, capitalism is bad, and entrepreneurs are bad. They've said that a thousand different ways, and they demonize entrepreneurs as much as they can.

23:40

Speaker B

But yeah, Elon also piled on to Sacks's take, which centered around a lot of those staffers allegedly going over to Anthropic.

25:06

Speaker A

Let's move on over to Netflix and Paramount, because there's news in the bidding war: how David Ellison finally got what he wanted. Ten nos, and then he finally got it done. For six months, the son of one of the world's richest men kept hearing the same unfamiliar word: no. Even before he closed a deal to combine his company with a much bigger one, David Ellison was already plotting to do it again. Once his Skydance Media took control of Paramount, he turned his attention to a Hollywood icon, launching an audacious takeover bid for Warner Brothers Discovery that would give the Ellison family full control of a sprawling media empire. He came in with an offer of $19 per share and finally got it done at $31 a share.

25:17

Speaker B

Sleepwell says: Let me get this straight. Paramount approaches Warner Brothers for an acquisition. Netflix puts in a higher offer for Warner Brothers. Paramount puts in an even higher offer at 7x leverage. Netflix declines to match the offer. Now Paramount and Warner Brothers will have to license all their content to Netflix to pay off all that debt. 3D chess. A lot of people were throwing around the Succession moment: "Congratulations on saying the biggest number."

26:02

Speaker A

So Paramount will be footing the $2.8 billion breakup fee paid from Warner to

26:32

Speaker B

Netflix, which was paid Friday.

26:39

Speaker A

Oh, it was paid already? Yeah, yeah. Netflix stock is up. Paramount stock's also up. And David Zaslav has to be one of the greatest dealmakers in history now.

26:41

Speaker B

Got the absolute maximum price. David Pfeiffer says: So somehow Netflix was able to force one of its rivals to overpay for another one of its rivals, putting them into a long, messy process of unification, and got paid $2.8 billion for it. Zaslav apparently said the deal may not close: "If it doesn't close, we get $7 billion and we get back to work." He also said that if Warner Brothers is going to survive, it needs to be bigger and it needs to be global. Getting into the Block news, which happened on Thursday and we didn't get to

26:53

Speaker A

cover. The Block news, that happened, like, six months ago, right?

27:23

Speaker B

Yeah, that's six months.

27:25

Speaker A

Okay, six months ago seems about right.

27:26

Speaker B

AGI age.

27:28

Speaker A

Yeah. Most of you have heard about Block's 40% layoffs by now, but the numbers are even worse. Engineering was hit harder: "We've lost close to 70% of our engineers. The company you once knew as a prolific open-source software contributor no longer exists." And so I was wondering: they're laying off 40%, but how is that distributed? Because on the AI job-displacement narrative, that could be back-office people who are processing manual workflows, or it could be software engineers, where a smaller team is now getting more leverage out of AI tools. There's also just the world where you're a mature software company, you have lock-in, and you say, yeah, we actually don't need to ship that many more features. We have sowed for so long; it is time to reap. I am still bloat-pilled. I still believe that this is somewhat

27:29

Speaker B

of a unique, bloat-driven...

28:16

Speaker A

This is somewhat of a unique situation.

28:18

Speaker B

But it didn't stop the market from absolutely puking on Friday. Amex at one point was down something like 7%. MWT says: I'm fully on board with spiraling into a depressive episode over the rapidly approaching neo-feudalist breakdown of society. But I worked at Square in 2017 and my job had no tasks; I sat on the roof eating free snacks all day with a MacBook. Maybe Block laying off a ton of employees is a sign that AI is going to destroy everything. Or maybe the stock is down 80% from the highs, they overhired, and AI is a convenient excuse. I don't think we've ever rung the gong for this before: OpenAI raised a $110 billion round of funding from Amazon, Nvidia, and SoftBank. "We are grateful for the support and have a lot of work to do to bring you the tools you deserve." That's probably the biggest. That's a gong record.

28:20

Speaker A

Yes. It's the biggest round ever for a private company. And it's also about a quarter of the venture capital outlays expected for all of 2026, in one round. Of course, this money's from the hyperscalers rather than venture capitalists broadly; it's more complicated than your average VC deal. I don't even know if this will be included in the VC funding tallies, because it's such a big round and it's from so many strategics. But lots more capital for OpenAI. See you tomorrow.

29:11

Speaker B

I can't wait.

29:41

Speaker A

Goodbye.

29:42

Speaker B

Have a wonderful evening.

29:42