Big Technology Podcast

Dario’s Choice and Anthropic’s Future, Apple’s AI Devices, Netflix Loses WBD

61 min
Mar 2, 2026
Summary

The episode covers Anthropic's breakdown in Pentagon negotiations over AI usage restrictions, leading to a supply chain risk designation and OpenAI swooping in to secure the deal. The hosts also discuss Apple's upcoming AI device strategy with smart glasses, pendant, and enhanced AirPods, plus the Netflix-Warner Bros Discovery deal collapse as Paramount intervenes.

Insights
  • AI companies face existential risks when government partnerships collapse over ethical red lines, as seen with Anthropic's potential multi-billion dollar loss
  • Apple may be positioning to dominate AI devices by leveraging iPhone integration advantages over competitors who lack smartphone ecosystems
  • Government AI integration is deeper than expected, with models already embedded in war games and military operations, making substitution extremely difficult
  • The streaming consolidation battle reflects broader industry decline, with traditional media assets becoming acquisition targets rather than growth engines
  • OpenAI's quick deal-making demonstrates how AI companies can capitalize on competitors' ethical stances to gain market advantage
Trends
  • AI companies increasingly forced to choose between ethical principles and lucrative government contracts
  • Smartphone-tethered AI devices emerging as next battleground for tech giants
  • Government AI dependencies creating national security vulnerabilities when partnerships fail
  • AI model commoditization shifting value to device integration and user experience
  • Traditional media consolidation accelerating as streaming wars intensify
  • Supply chain risk designations being weaponized against domestic AI companies
  • Real-time AI usage in military operations becoming standard practice
  • Tech company philosophical differences with government creating business risks
Companies
Anthropic
Lost Pentagon deal over AI usage restrictions, declared supply chain risk
OpenAI
Secured Pentagon AI contract after Anthropic negotiations collapsed
Apple
Developing three AI devices: smart glasses, pendant, and enhanced AirPods
Netflix
Lost Warner Bros Discovery acquisition to Paramount, received $3B breakup fee
Warner Bros Discovery
Media company acquired by Paramount after Netflix deal fell through
Paramount
Outbid Netflix to acquire Warner Bros Discovery for $110 billion
Google
AI partner with Apple for Gemini integration in devices
Amazon
Investing $50 billion in OpenAI, operates government cloud services
Palantir
Uses Anthropic AI in government operations, involved in military actions
Meta
Competitor in AI glasses market with Ray-Ban partnership
Microsoft
Investor in Anthropic alongside Google and Amazon
Nvidia
Investor in Anthropic with ownership stake
People
Dario Amodei
Anthropic CEO who refused Pentagon's AI usage terms over surveillance concerns
Sam Altman
OpenAI CEO who quickly secured Pentagon deal after Anthropic's exit
Pete Hegseth
Secretary of War involved in Anthropic-Pentagon negotiations
Emil Michael
Undersecretary of War who criticized Anthropic publicly on social media
Donald Trump
President who tweeted threats against Anthropic during the controversy
David Zaslav
Warner Bros Discovery CEO who orchestrated the Paramount acquisition deal
Mark Zuckerberg
Meta CEO developing competing AI glasses and criticizing Apple's ecosystem
Jack Shanahan
General who supported Anthropic's position on military AI restrictions
MG Siegler
Spyglass newsletter author and podcast co-host analyzing tech trends
Quotes
"Within hours of declaring that the federal government will end use of its artificial intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools."
Alex Kantrowitz (mid-episode)
"I'm sympathetic to Anthropic's position. No LLM anywhere in its current form should be considered for use in fully lethal autonomous weapon system."
General Jack Shanahan (mid-episode)
"Anthropic wanted language that would prevent all Department of War employees from doing a LinkedIn search."
Emil Michael (mid-episode)
"If we believe that models are getting commoditized and if there's going to be diminishing returns, then the value might come from the way that they implement them."
MG Siegler (late episode)
Full Transcript
6 Speakers
Speaker A

Anthropic's war with the Pentagon hits another level. Apple's preparing three AI devices, but the iPhone might be the killer feature, and Netflix will have to go forward without Warner Brothers Discovery. We'll dig into what it all means with Spyglass's MG Siegler right after this.

0:00

Speaker B

Financial geniuses. Monetary magicians. These are things people say about drivers who switch their car insurance to Progressive and save hundreds, because Progressive offers discounts for paying in full, owning a home, and more. Plus, you can count on their great customer service to help when you need it, so your dollar goes a long way. Visit progressive.com to see if you could save on car insurance. Progressive Casualty Insurance Company and affiliates. Potential savings will vary. Not available in all states or situations.

0:18

Speaker C

This episode is brought to you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills, fast. It's a simple way to make sure your listing is the first one candidates see. According to Indeed data, Sponsored Jobs have four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 sponsored job credit at Indeed.com/podcast. Terms and conditions apply.

0:51

Speaker A

Welcome to Big Technology Podcast. It's the first Monday of the month, which means M.G. Siegler from Spyglass is here to break down the month's news with us. And boy, am I glad that we have an episode for you today, because it feels like a year's worth of news has happened over the weekend since we last left you. On Friday, the Pentagon declared Anthropic a supply chain risk, making it clear that Anthropic was not able to work with the government or its contractors on government work, which is going to be a major hit to the business if it holds up. We also have OpenAI coming in and signing a very similar deal to the one Anthropic was just about to sign with the Pentagon. So we're going to dig into the latest in that story and what the implications might be for Anthropic and the rest of the AI industry. We'll also talk about Apple's forthcoming AI devices, a set of them, and Netflix, of course, losing the deal with Warner Brothers Discovery as Paramount swoops in and pays a lot of money for that shrinking property. All right, MG, great to see you. Thank you for being here.

1:18

Speaker D

Great to see you, Alex. As you know, I'm happy to be here. A week ago I was in Dubai, so my family was lucky in the timing of getting out of there. But, you know, obviously thoughts with all the people over there. It's a terrible situation.

2:22

Speaker A

Definitely. No, I'm very glad that you and your family made it out. And yeah, it seems like it's not just military infrastructure, but civilian hotels, airports, even a data center. I think an Anthropic data center was hit. Oh, right, an Amazon data center that may or may not have been serving Anthropic. Let's pick up on this story of Anthropic and the Pentagon, because we now have some more news about what exactly led to the dispute and what the fallout might be. We have movement, right? Anthropic's lost the deal. Not only that, they maybe can't work with the government anymore, and OpenAI has now picked up that deal. So let me just take you all through what exactly happened, because there's this Atlantic story, "Inside Anthropic's Killer Robot Dispute With the Pentagon." They say: On Friday morning, Anthropic received word that Pete Hegseth, the Secretary of War, and his team were going to make a major concession: pledging not to use Anthropic's AI for mass domestic surveillance or fully autonomous killing machines. But then they qualified those pledges with loophole phrases like "as appropriate," suggesting that the terms would be subject to change based on the administration's interpretation of the given situation. And here's where it goes off the rails. On Friday afternoon, Anthropic learned that the Pentagon still wanted to use the company's AI to analyze bulk data collected from Americans. This could include information such as questions you ask your favorite chatbot, your Google search history, your GPS-tracked movement, and your credit card transactions, all of which would be cross-referenced with other details about your life. Anthropic's leadership told Hegseth's team that was a bridge too far, and the deal fell apart. Just to pick up my perspective from Friday, where I said maybe there's not really a there there.
And this is, you know, likely positioning and marketing. I think there's more of a there there than I thought. It does seem like this is a good line for Anthropic to draw. However, as I kept reading more about this, it just seemed to me like this is a deal that did not need to fall apart, that there were ways to word the deal where you could basically include the carve-outs that everybody had agreed to, and it would have been fine. But the Pentagon just set this deadline for Friday at 5pm and stuck with it. Basically, Dario didn't return their calls in the way that they wanted, and then they went nuclear, substituted them out with OpenAI, and declared them a supply chain risk. That's sort of my perspective on where we stand today. Do these details change anything for you, MG, in the way that you see this story, and what's your general read on where it is and where it's going?

2:39

Speaker D

So I haven't actually written about this, in part because, yeah, obviously we're all digesting it a bit in real time, and it's a delicate situation, given what we just talked about with the situation in the Middle East going down. It does seem, I mean, obviously the timing of that, you note the Friday deadline. The most wild thing to me about all of this is that Secretary Hegseth is going through with these negotiations in the middle of major preparations for war. I mean, we didn't necessarily know that at the time, though clearly there was the buildup happening, and in the middle of getting ready for these strikes, they are going back and forth with an AI technology provider to try to get them to agree to terms. And so a cynical part of me wonders if they weren't using that. Not that they would disclose anything like that to Anthropic necessarily, but that they knew this was sort of coming, and so they knew: we need to get something done now, because we're probably going to be using some of this technology in the forthcoming war preparations and execution of the war strategy. And, or, is this going to be the best position for us to lay down the terms that we want, and maybe Anthropic will have to yield a bit easier? But maybe he hadn't done enough research on Dario, listened to your interviews and many other interviews, to know what his response was likely to be to these types of ultimatums. And so, yeah, it does feel a bit like they probably could have hashed this out. But I do wonder again if the timing of the macro stuff, of the actual war and attack situation, just added time pressure to some of this.

5:30

Speaker A

Well, this is interesting, because they actually did end up using Anthropic in these strikes. And last week on Friday, I said Anthropic's use was actually limited, because I was reading the reports saying that Palantir has Anthropic involved. And that was what started this entire discussion, because Palantir systems were used in the capture of Maduro, and Anthropic had some questions about how they were used. I won't say I got that wrong, but I had an incomplete picture of how deeply integrated Anthropic already is in the US government, and this stunned me. This is from the Wall Street Journal: "U.S. strikes in the Middle East use Anthropic hours after Trump ban." So by the way, the ban, and we'll talk about this, is going to take effect six months from now, right, that they can't use it. But already with the Iran strikes, they are using Anthropic. Here's the Wall Street Journal story: Within hours of declaring that the federal government will end use of its artificial intelligence tools made by tech company Anthropic, President Trump launched a major air attack in Iran with the help of those very same tools. Commands around the world, including US Central Command in the Middle East, use Anthropic's Claude AI tool. The command uses the tool for intelligence assessments, target identification, and simulating battle scenarios, even as tension between the company and the Pentagon ratcheted up, highlighting how embedded the AI tools are in military operations. This isn't just military analysts asking Claude questions. It seems like you have war games going on with Claude, which was much more than I expected. And I'd love to get your reaction: A, what's your reaction now that we're learning how deeply it is integrated, and B, why would the military risk having to substitute it out over, you know, language that they could have agreed on with Anthropic and they just didn't?

7:25

Speaker D

Again, I sort of come back to the notion of: was this just the worst possible timing, in a way, for both sides, right? Whereas if it were a more stable situation, maybe the two sides could have sat down and hashed things out a little bit more. But given the buildup to this, it seemed like the administration was very fast to get exasperated by Anthropic. And now again, you might see why. It's like, look, we don't have time for this, guys. We are preparing for some military action right now. If you guys are not on board, unfortunately, you know, we already have the systems in place, we're using those right now, and we'd love for you guys to be on board, but if you're not, that's something we can discuss, I guess, down the road, to your point, like six months later. And also to your point, it's not just that they're using, yeah, Claude chatbot stuff. This is directly related, it seems, to their contracts with Palantir and also Amazon, which has their own sort of government cloud stuff, right, that allows these things to operate behind their own firewalls and in secure centers and whatnot. And so again, this is not something they could swap out overnight. It's not something that, even if they give clearance to OpenAI or anyone else, they can just put in there, because these things have to be tested. Like, if you're all of a sudden swapping out your main model and you're running literal war games on there, how do you know what to trust and whatnot? And so again, it just feels like the timing of it. Maybe it's like: guys, we need to make sure all of our i's are dotted and t's are crossed before we go ahead with this operation, as, you know, we're going to be using some new technology this go-around.
So has anyone talked to Anthropic about, like, the latest with what they're thinking about it? And then, as you noted, the Maduro situation, and obviously Palantir, it seems, was involved in that as well. And so that came to the forefront there. So this is all sorts of ensnared, with weird entanglement going on right now. I feel like all the talk that we've been doing about circular deals and all this, we're now at new stakes in terms of where this is all getting integrated within these systems, right?

9:24

Speaker A

These are like the science fiction papers of AI potentially being used in the military somewhere down the line, in future years, like, oh wait a second, it's already used. But it is interesting, because they do have this six month deadline to disentangle themselves from the federal government, or really the federal government has the six month deadline to disentangle itself from Anthropic. Are you suggesting, because of the timing, that basically it was like: it's not that we need you to meet this Friday deadline because we're going to swap in another model. It's like: we're going to give you this Friday deadline because we got some other shit that we need to handle. And, I mean, doesn't it feel like that?

11:53

Speaker D

Obviously, I have no idea. I have no inside sources at the Pentagon to know that they were giving this ultimatum given their timeline for war preparations. But it does feel like they must have been, at some level, thinking: we don't have time for this right now. If you want to hash out something, great, here's X, Y, and Z partner on the team that can sort of talk you through it. But if not, sorry, we'll revisit this at some point.

12:31

Speaker A

But that's the thing I don't understand, because you're about to fight a major operation, right? Like, this is a war, and you're not going to be able to swap out Anthropic. So even if Anthropic, like, comes in before the war starts, right? Like, you're not going to switch to DeepSeek or OpenAI on Friday afternoon. So, you know.

12:56

Speaker D

Yeah, DeepSeek. Could you imagine that?

13:15

Speaker A

I could.

13:16

Speaker D

That would go over.

13:16

Speaker A

I could, I could. I mean, at this point, anything's possible, right? I mean, I wouldn't do it, but I wouldn't be stunned. I mean, look what they just did to an American company. But I guess it's interesting, because, like, if you fight that war and you say, all right, Anthropic gave us problems during the war, that's maybe when you start the process of thinking about substituting them. You're gonna find out one way or the other. And by the way, the attack was quite successful in the early going. I'm not sure if this is all Anthropic's doing; that would be, I think, a bridge too far, to put it all on Anthropic. But if that's the AI tool you're using and you're having a pretty successful campaign early on, militarily at least, like, I don't know, is that when you want to start subbing them out?

13:17

Speaker D

So two things to that. One, again, I do think the Maduro situation obviously played a role in this. It sort of weirdly hinted at what was to come, right? Because all of a sudden we learned that maybe it was being used. There's conflicting reporting, but some of the reports had the notion that Anthropic learned about how Palantir was using it, potentially for that raid, and they didn't like that too much. And sort of that's why they raised it up the chain a little bit. And maybe the administration didn't like the fact, and Palantir didn't like the fact, that they were doing that. So fast forward to now. Again, the government obviously knows that they're heading into this new situation. Maybe they wanted to try to get it squared away before they did that. Or again, to my first point, it's possible that they used it as a point of leverage over Anthropic, right? To say: look, we understand there's been this back and forth about how we're potentially using the technology here, but, look, we're going to be using these things going forward, your models, we'd love to keep doing that, and, dot dot dot, by the way. They wouldn't tip their hand, but: let's just look a week from now and see what you think, how this is playing out, and whether you really want to be on sort of the wrong side of this, from their vantage point.

14:00

Speaker A

Look, the more I think about this, the more it just seems to me, like I shared on Friday, that this is sort of an ultimate culture clash. And we'll get into the OpenAI deal in a moment. But you look at Emil Michael, the Undersecretary of War who's been working on this: clearly doesn't like Dario, clearly doesn't like the Anthropic team. And I wouldn't be surprised, knowing about him and knowing about them, that there would be a culture clash there. And in fact, I read at the beginning that Anthropic stood up against what they thought was going to be domestic surveillance, and they had seemingly both agreed on the autonomous warfare part. This is what Emil Michael said. It's like one of those things, I reported it for years, and he just tweeted it out. He just tweeted out, sort of, what happened behind closed doors. He says: Anthropic wanted language that would prevent all Department of War employees from doing a LinkedIn search. They wanted to stop the Department of War from using any public database that will enable us to, for example, recruit military service members, hire new employees. When I called to discuss cutting off the Department of War from using publicly available information that would hurt our military readiness, Dario didn't have the courage to answer. Right. This is the now sort of infamous call: Emil called him before the deadline, and Dario was in a meeting. And by the time he got out of the meeting, this whole thing was blown up. Now, this is really where it gets wild. He says: We agreed in writing to act according to the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978 and all other applicable laws. They wanted the word pursuant versus consistent with, and wanted to delete all applicable laws, which was less protective of Americans. Can't make this up.
We also agreed to oversight of all weapon systems by saying the Department of War will use the AI systems for all lawful use cases in accordance with all applicable laws of the US and Department of War directives, and we wanted to retain the ability to override or disable the AI system as appropriate. Because he didn't like, this is Emil talking about Dario, he didn't like the phrase "as appropriate." What would he prefer, inappropriate? I agreed. I even agreed to take that out. He knows it. His investors, customers, and employees should know about his lies, risking the safety and security of our country and our troops as a marketing vehicle for him. I mean, again, I'm just going to say it: if you have two adults in the room, I think you should be able to work out this language. The other explanation is that the Department of Defense really did want to be able to override these systems, really did want to be able to conduct domestic surveillance. But again, we're talking about a tool that's so important to the military today, that's being used in the use cases we described. To blow it up over these terms seems to me like a ridiculous thing.

15:26

Speaker D

I think there's a few things going on here. So first and foremost, hearing you talk through those exact quotes, it's like, I'm sure you've been involved with them, I've been involved on the deal side a number of times: when lawyers get involved and want to use very explicit language to make sure that everything is drilled down and there's no wiggle room, lawyers themselves, for lack of a better phrase, go to war over these little terms. Right. And it's like, no, we can't say it this way, we have to say it exactly this way. And the other side's lawyers will say, no, we can't let them say it this way. So there's definitely some level of that. I know that they're talking about this on, like, the Emil and Dario level, certainly, but the legalese stuff just seems like it's lawyers going back and forth on both sides to try to cover their own asses, and the companies' asses in the case of the outside scenario. That said, I think you hit on it earlier, where it's like, obviously these two sides just don't like each other on a philosophical level. Right. There's long been the charge against Anthropic from the Trump administration that maybe Anthropic is the more, quote unquote, woke AI company, that they have all this effective altruism stuff going on that they don't like, and, you know, David Sacks has come out strongly on these issues, and they just feel like they're misaligned philosophically. And I do think it's an awkward situation, because, I don't know this for sure, but I wouldn't be shocked if they didn't necessarily know just how vital Anthropic was to some of the systems they're using. Again, with regard to Palantir: obviously they use Palantir for a lot of different things, and the government famously has for a while, for different services.
And the fact is, you know, Anthropic, because everyone, I think, across the board loves their models for different reasons, regardless of sort of your philosophical bent about the team that's building them. They have great technology. And so the fact that Palantir, and then Amazon obviously, and a bunch of others have used Anthropic's services, like, maybe the government just wasn't savvy enough to know just how integrated Anthropic itself was, and that they can't just, again, like we were talking about, swap it out overnight. Everyone makes, you know, frontier models: we can use OpenAI, we can use Google, we can use anyone, let's just get someone else in there. It's like, it's not gonna be that simple. And so I think all of these things sort of coming to a head, leading up to the situation that we're talking about with the attacks last weekend, it just feels like there's a boiling point. And again, there's maybe some points of leverage that Emil and some others thought about. And obviously we didn't even talk about the Trump tweet. He tweeted, you know, to basically try to end Anthropic as we know it, saying that, like, we're done dealing with them, best of luck with whatever you do, we're not working with you anymore, and none of our partners are working with you. And obviously the government has partnerships with Google, with Amazon, with everyone else. And it felt like it was a potentially existential threat to Anthropic itself. And so there's so many layers going on in this, and obviously the reporting every single day adds more and more layers to unravel. And it's just weird to think that, again, all this is unraveling while there's actually attacks going on. Like, yeah, insane.

18:22

Speaker A

Here's General Jack Shanahan, who's no friend to the sort of woke wing of the tech industry. He's the general behind the Maven program that, you know, Google employees rebelled against, which was a partnership between Google and the Department of Defense. You might expect him to be sympathetic to the Department of War's position. He's not. He says: I'm sympathetic to Anthropic's position. No LLM anywhere in its current form should be considered for use in a fully lethal autonomous weapon system. Despite the hype, frontier models are not ready for primetime in national security settings. Overreliance on them at this stage is a recipe for catastrophe. Mass surveillance of US citizens? No thanks. Seems like a reasonable second red line. That's it. Those are the two showstoppers. Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end. This should never have been such a public spat; it should have been handled quietly behind the scenes. Scratching my head over why there was such a misunderstanding on both sides about terms and conditions of use. Something went very wrong during the rush to roll out the models. Let reason and sanity prevail. I mean, that seems like a pretty reasonable take.

21:42

Speaker D

It does. But again, I think maybe it was a trickle-down effect of the Maduro situation. Coming into this, the government knows that they're going into this situation and doesn't want this to come up. Like, say the attacks started and Anthropic got wind that their models were being used via Palantir or whatnot, and they just start to raise this giant PR campaign against the government for doing that. Now you might say that would backfire against them, and it could have, but who knows how it would have exactly played out in that case. I'm just trying to game through what the government was thinking here in terms of why engage this ahead of time. Again, either it's that they viewed it as a point of leverage over Anthropic leading up to this, that they knew, or thought, that they could get more of what they wanted out of Dario leading up to this; or again, that they wanted to sort of cover themselves for if and when they went forward with this and used these models. But again, you point to the other stuff, which is, there are multiple layers here. It's not just war game scenarios and things like that. It is the mass surveillance stuff, which obviously Anthropic cares about. And you would be hard pressed to find people who would be on the other side of that. Right. Like, to your point on the general's comments, not everyone, of course, but a lot of people, I think, would be on that side. But the government's pushback against that, at least to date, has been: we just don't want Anthropic to have de facto say over anything. It's not like they're saying, we want to mass surveil the American populace. And they would say the laws are already in place against that. Obviously there's gray areas with all of this stuff.
But their stance is: we do not think that a company should have de facto say over what we would do in situations. And again, the plan is not to mass surveil the U.S. But again, these are slippery slopes, which is what Anthropic would argue, I would assume. And so you could just go back and forth, and they continually will go back and forth, over those issues.

22:59

Speaker A

Right. And I would still hold that they should have come to a deal, but they didn't. And so now the question is: what happens next? So as I mentioned earlier, the Pentagon has labeled Anthropic a supply chain risk, which, as I understand it, means no federal government agency can work with Anthropic after this six month deadline. Not only that, private companies working with the government on certain contracting work cannot use Anthropic for that work. So by the way, if, let's say, you're a Boeing, you may not want to have a certain model that your engineers use for government work and a different model that they use for commercial work. You want to have standardization. So this is a potentially very big hit for Anthropic. Not just the $200 million contract that it had with the Pentagon, but a potential multi-billion dollar hit, if the Pentagon does go through with this designation. Would you agree?

25:06

Speaker D

I totally agree. It's not just, as you said, the contract itself; it is the trickle-down effects and the broader ramifications if they lose that designation. And again, it puts a chilling effect on new contracts that are signed.

26:09

Speaker B

Right.

26:24

Speaker D

Because it's like, what if some other company is thinking, oh, we might do a government contract one day. And to your point of, would we rather just use one model to do all of our work, or would we really want to have to swap out Anthropic for OpenAI if we do go forward with this government contract? And to that point, I do think that the two sides, there's been subsequent reporting that there's still some talk that they want to figure out how to make this work. If nothing else, if we still have this six-month window where they're going to be using the Anthropic models, these six months are probably going to be pretty intense in terms of what's going down from the war perspective. Yeah, a lot of tokens being used. And so they probably do want to find a way to hash things out. So the hope obviously is that cooler heads prevail. Maybe once this initial wave of attacks is sort of behind us, hopefully they can sit down again and hash out the legalese, as we were talking about, and the exact wording of how to go forward with this. Because yes, it's bad for Anthropic if they get ripped out of the U.S. government as they're talking about.

26:24

Speaker A

Right. And we should say that the supply chain risk designation is not something that's typically used for domestic companies. Right. It's typically, like, right.

27:40

Speaker D

It's Chinese. It's like all the threats that were used against Huawei and all the Chinese companies. And it's wild that this is happening. That's the sort of backdrop behind all of this right now. I noted this earlier, but a number of people have seen this: Claude is now the number one app in the App Store, which is wild.

27:47

Speaker A

And it's the first time ever.

28:07

Speaker D

Very clearly related to some of this. Yeah. Right. Like, it's not just that. Obviously Anthropic's been doing well with the new Opus models that have rolled out, and Cowork and Claude Code and whatnot. But some of this is certainly, you know, virtue signaling if nothing else. Right. People saying, oh yeah, we want to be on the side of the AI company that is pushing back against the government that's trying to mass surveil, or in the headlines at least. Right. That's the way that it's being portrayed.

28:08

Speaker A

Who does that remind you of? Tim Cook. Ten years and one month ago was the time that he sent that memo out about standing up to the FBI, and Apple's basically capitalized on that for the last decade.

28:37

Speaker D

Yeah. And so, you know, it's Anthropic running a similar playbook to that. I mean, maybe not explicitly, at least right now, like not doing PR campaigns. That would be, you know, not in great taste at the moment. But still, it doesn't seem like it's completely unrelated that Claude is shooting up, and people are sort of thinking that Anthropic is positioned as the AI company that's going to be the quote-unquote moral one. And, you know, that's obviously a whole hornet's nest of a topic as well.

28:51

Speaker A

That's right. I will give my hot take here, which is that this supply chain risk threat never manifests, never takes effect. Again, it's a six-month deadline. We've seen six-month deadlines a lot from the U.S. government. Often it's been around TikTok: we'll ban TikTok in six months; yeah, we'll extend it for another six months. If Anthropic is this pivotal for the government, then they will just continue to use it and extend this, or rescind it, or it won't hold up in court. So that's hot take one. The other side of it, though, is I've already heard some rumblings from big companies that are government contractors that they will preemptively take Anthropic out of their workflow, or at least are highly considering it, because they don't want to bank on the fact that this will get extended. So even if this isn't going to go through completely, I do anticipate that it will hurt Anthropic when it comes to these private companies.

29:22

Speaker D

I agree with you. But I would also just say, I wouldn't fully discount the notion that we talked about already, that these two sides just don't like each other, down to the personnel involved. Right. Every indication seems that way. And so are they going to be able to get past the grudge between the Trump administration and Dario, basically, or is there some intermediary that has to come in to sort of assuage that in some way? Because, yeah, like the TikTok thing and everything else, the sort of TACO stuff, right: Trump Always Chickens Out of the things that he threatens and goes back upon. Is this another one of those? And again, it feels like, yes, it probably will be. Except if they view it like they want to make some sort of more philosophical, high-level point about, you know, quote-unquote woke companies, or companies that they view as misaligned with the American public and the electorate and all that. And they might dig in their heels a little bit more because of that.

30:29

Speaker A

Yeah, I do think TACO really did apply with the tariffs, but maybe after this Iran thing it's going to be tougher for that label to stick.

31:42

Speaker D

Yeah, yeah.

31:50

Speaker A

Maybe the intermediary that comes in is Sam Altman. Or maybe not. I mean, he swooped right in. This is from the Times: as these discussions were breaking down with Anthropic, Emil Michael had an ace up his sleeve. On the side, he had been hammering out an alternative to Anthropic with its rival OpenAI. A framework between the Pentagon and OpenAI had already been reached. Mr. Altman of OpenAI got on a call with Mr. Michael to discuss a deal for his company. Within a day, they had drafted the framework. OpenAI agreed to the Pentagon's requirement that its AI could be used for all lawful purposes. But it also negotiated the right to put technical guardrails on its systems to adhere to its safety principles. At 10 p.m. on Friday, as Anthropic's lawyers began working on a lawsuit against the Pentagon, Mr. Altman was on the phone with Mr. Michael, finalizing the details of OpenAI's deal with the Department of Defense. Mr. Altman then posted the news of the agreement on social media. On Saturday, as OpenAI faced a backlash for swooping in, Altman invited people to ask him questions on X about the deal. He goes: we don't want the ability to opine on a specific legal military action, but we do really want the ability to use our expertise to design a safe system. Basically a very similar deal to the one that Anthropic could not agree on with the Pentagon. Your thoughts on OpenAI's role in this whole situation? Classic, I guess.

31:51

Speaker D

I mean, this, 100 percent, could have been predicted, right? You see the opening; Sam Altman sees the opening. Sam Altman is going to take that opening, and he is going to immediately ring up Emil Michael, get him on the line, and figure out a way to swoop in there and not only potentially take over all these contracts, but also, obviously, position this as: they are the peace broker here, right? They are the ones who are going to iron out these differences between Anthropic and the U.S. government by cutting their own deal that paves the path to do a new deal going forward. But I think that they wouldn't mind if, say, they got all those contracts instead of Anthropic going forward as well. And so, you know, that part was maybe left out. But they're the peace broker, and they're going to come in here and make everything right. I mean, again, this was so predictable. And the backlash to it was also predictable, right? Because no one believes that, of two blood rivals who won't hold hands on stage at an event, one is going to help out the other in a major way. Now, to be fair to Sam Altman, he might think at a high level, yeah, I think we should probably take a stand on this that's more in line with what Anthropic is trying to project, at least at the highest level. But still, we're going to do that in a way that's good for the business at the end of the day. And so, you know, both things can probably be true. But again, the optics around this are just not great. And, you know, again, to be expected there.

33:19

Speaker A

I got a text as this was all unfolding, where Sam had said something like, we don't want Anthropic to, you know, not be able to work with the government. And someone sent me this text like, oh, well, looks like OpenAI is really changing their tune on Anthropic. And I was like, I don't think so. Wait and see. And there they were. Could be a potentially very lucrative deal for OpenAI. And especially if this thing goes through, with OpenAI, by the way, in the middle of a year where they're really emphasizing enterprise, they could potentially swoop in and get much more than just that one contract.

34:58

Speaker D

Just one last thing I would add to this, because I was going over it today trying to look through some of the numbers for ownership stakes, as I like to do as a hobby with these AI companies. Given the ownership stakes in Anthropic that we know about, obviously from Google and Amazon. But now Microsoft bought in, right. Famously. And Nvidia too. And so, you know, don't necessarily underplay those elements to it as well. Especially someone like Amazon, right, who has lots of government contracts, and Google too. If they can sort of step in and be a bit of an intermediary here and say, you know, look, cool, we've got to pause on this. We can all work together, we can all get along.

35:35

Speaker A

We.

36:17

Speaker D

You can figure out how to use these models in ways that both sides sort of figure out. Because, again, it does ding their businesses too, those big players, if all of this gets ripped out.

36:18

Speaker A

Yeah. By the way, I mean, Amazon just did this $50 billion funding deal with OpenAI. So, you know, it's $15 billion now. I don't think that that was related, but $35 billion next.

36:29

Speaker D

Yeah.

36:38

Speaker A

So maybe they, they might just say, all right, creative destruction.

36:39

Speaker D

They're hedging. They're always hedging. So. So they're fine either way, I guess. But yeah, wild.

36:41

Speaker A

All right, so could Amazon and OpenAI work on a potential device together to go against the Apple and Google alliance? And where is Apple's AI device bet going? That's where we will pick up when we come back, right after this.

You want to eat better, but you have zero time and zero energy to make it happen. Factor doesn't ask you to meal prep or follow recipes. It just removes the entire problem. Two minutes, you get real food, and you are done. So remember that time when you wanted to cook healthy but just ran out of time? You're not failing at healthy eating. You're failing at having three extra hours every night. Factor is already made by chefs, designed by dietitians, and delivered to your door. Inside there are lean proteins, colorful vegetables, and healthy fats. It's the stuff that you'd make at home if you had the time. There's also this new Muscle Pro collection for strength and recovery. You always get fresh, never frozen food. It's ready in two minutes, and there's no prep, no cleanup, and no mental load. Head to factormeals.com/bigtech50off and use code bigtech50offer to get 50% off your first Factor box plus free breakfast for one year. The offer is only valid for new Factor customers with the code and qualifying auto-renewing subscription purchase. Make healthier eating easy with Factor.

If a driver in your fleet got in an accident tomorrow, could you prove what actually happened? Without footage, it's much harder, so your insurance rates spike and you're stuck paying for it. That's why so many fleets choose Samsara's AI-powered dash cams: clear video evidence, real-time alerts, and coaching tools that help prevent accidents before they happen. Samsara AI helps reduce crash rates by nearly 75%. For instance, the city and county of Denver saw a 50% reduction in false claims against them and a 94% reduction in safety events overall. This is the kind of visibility that every operations manager needs.

Don't wait for the next accident to take action. Head to samsara.com/bigtech to request a free demo and see how Samsara brings visibility and safety to your operations. That's samsara.com/bigtech. Samsara: operate smarter.

And we're back here on Big Technology Podcast with MG Siegler of Spyglass. You can find it at spyglass.org. Highly recommend signing up for it and getting the newsletter. One of my favorite tech reads. All right, MG, let's switch gears from this big blow-up and talk a little bit about Siri. Or more, let's talk about the devices that Apple might be developing that will have Siri, or Gemini, or Gemini-powered Siri baked in. So recently we've gotten news that Apple is going to release maybe three devices all at once: smart glasses, a pendant, and AirPods with expanded AI capabilities. I think we've both discussed that this is going to be a pretty good year for Apple. And when this news hit, I was like, I've got to go to Spyglass to get MG's perspective. And you started with a very surprising line at the beginning: that maybe we're seeing the beginning of Apple, if not pulling ahead of the AI race, really starting to assert itself and make a strong play here. Let's talk a little bit about what you're seeing.

36:47

Speaker D

Yeah, so there are a few things that I think lead in to fuel that idea. And this dates back to when Apple, at WWDC two years ago now, was gearing up to talk about AI in a real way for the first time. And obviously they ended up doing that and falling flat on their face, because they couldn't execute upon it. But now, in a way, it's almost like: are they going to run basically the same game plan, but now that they have the Google partnership for Gemini building these models, they can actually do it and execute on it in the right way? I wouldn't put it past them to basically do everything that they promised. And then, to your point on these devices, extend it a bit to the world that we're entering now. I do think that they are potentially in a good position. We've talked about it before: if we believe that models are getting commoditized, and if there are going to be diminishing returns in spending hundreds of billions of dollars on training these large language models, what's the next step after that? And if you're Apple and you believe that that is the case, that they don't need to train their own massive frontier models, that they instead can partner, as they're doing with Google, then the value, from their eyes, might come from the way that they implement them. And obviously a lot of their value has always been derived from selling devices, the best devices, many would say, to the public. And so they can create these devices that leverage that. And by the way, I do think the one key device to all of these things remains the iPhone.

And I think what you're seeing with these three devices that are being talked about, that you put out there, AirPods and glasses and a pendant, is that all of them, per Mark Gurman's reporting, would likely be reliant to some degree upon the iPhone. And that's where Apple has this unique advantage. You know, maybe you could say that Google and Samsung have similar capabilities because of their smartphones. But Apple has a very unique advantage, certainly ahead of the Metas of the world and others that are trying to create these types of newfangled devices, let alone any startup that's trying to do so, and OpenAI in that bucket. Apple has this unique position where they have the iPhone in billions of pockets, and now they're going to have these devices that rely upon it, at least for the foreseeable future, as basically the central processing unit of those devices. And so you can close your eyes, and it's not too hard to imagine a world in which Apple is the device leader again in this new AI world. And if they're the device leader, who's to say they're not the overall leader, if they're the way that everyone's interfacing with AI?

40:12

Speaker A

Okay.

43:06

Speaker D

At least on the boundaries.

43:06

Speaker A

Yes. I want to talk through this, because I've recently gotten, like, the first wearable that I actually use frequently, which is this Garmin watch, which is not an Apple product.

43:07

Speaker D

Yep.

43:16

Speaker A

But it actually works quite well with the iPhone. There's this Garmin app; it mostly connects. I've only had one situation where I've had to reset the whole thing because the Bluetooth connection was off. And basically, these AI devices probably wouldn't exist in their own ecosystem. For instance, when you want to set up the Meta glasses, you set them up with the smartphone. But it still syncs pretty well, and there are technologies that have come out that let you sync data through Wi-Fi that have made it much more seamless. So if the iPhone is going to give an advantage to Apple's AI devices, how does its interoperability, which has always been Apple's calling card, help in a way that would be that much better than the ways that these current wearables are connected?

43:17

Speaker D

So it's a good question. It's hard to know for sure without obviously seeing what Apple's going to release out there. But I would just point to, you know, comments made by no less than Mark Zuckerberg, over and over again, complaining non-stop about how they don't get the full level of interoperability that they would like with Apple's products. Right. And some of that is obviously just a little bit of posturing, because those two sides don't like one another. And obviously Meta famously doesn't have a smartphone play. And so they're telling regulators that, look, you need to make sure that the iPhone is as open as can be to third-party products, like perhaps the ones we're making and others are making. And obviously Europe is very open to that notion. They've basically put laws in place in various places to make it so that they have to be more interoperable and allow low-level system integrations that Apple may not want to. And to your question, though, at the day-to-day level it might not be all that different, but I do think there's lots of low-lying, under-the-hood stuff, potentially as boring as slightly longer battery life, because Apple is able to more tightly hone the way the connection is made between their device and the iPhone. And I think there are all different sorts of things, background syncing, contact syncing, all this type of stuff, that can come into play that you might not think, on a day-to-day level as you're using it, is that big of a deal. But there are advantages that Apple has, and the question will become, certainly in Europe, but I think it will ultimately become true also in the U.S., how much of that is too much of a competitive advantage, and are they hurting competition as a result of it?

We're going to hear a lot from Mark Zuckerberg, and probably some others, maybe Sam Altman as well, about that going forward.

44:09

Speaker A

So it seems like these are all coming at the same time: smart glasses, a pendant, and these enhanced AirPods. What chance do you give each of being the most successful of those three?

45:59

Speaker D

I do think that they'll all be for slightly different purposes. I would imagine price will be a key factor, as it always is. But if I had to guess, I would think the AirPods would probably be the most successful, just because you and I are wearing them right now. Everyone's wearing them out and about; they're a known thing. As long as they don't look entirely ridiculous and different with some sort of camera sensor on them, I think they will continue to be a very popular product. It's a matter of, again, how much do they cost if they add a camera sensor? Is it a $500 product all of a sudden? Can they keep it at, like, $300 or something around there? I think that will matter a lot. Glasses: obviously Meta has already sort of proven somewhat of a market, but relative to Apple's other products, it's a drop in the bucket. It's not very big. You know, the Meta Ray-Ban products are not huge compared to, say, AirPods or Apple Watch or anything else. And so can Apple take that to another level? I think they'll have success with it, but we're now seeing there's already starting to be backlash preemptively against Meta, because they're talking about using facial recognition within the glasses. Right. Adding that after the fact. And so all of a sudden we're thrown right back into the Glasshole situation from Google Glass years ago, which Meta has, to their credit, sort of avoided to date, and now we're getting thrown back into that. And how does Apple deal with something like that, if Meta is, for lack of a better word, sort of poisoning the well, or the market, by making people think, I don't want any glasses with any sort of camera on your face? And obviously Apple's product will have that to some degree.

And then the pendant itself: obviously you think of Humane, and the ex-Apple engineers and designers who were working on that. It didn't end up being successful, of course, and sold to HP in a fire sale, it seems like. But Apple has that unique advantage of having the iPhone itself, and it sounds like this would maybe be more of a companion. Gurman even said there was, like, an internal phrasing of it as the eyes and ears, maybe, of the iPhone going forward. And so you wear it around and it's constantly just looking at things. Again, this is a privacy thing, though. But Apple, as we're talking about, is in the unique position of being more trusted than probably any other tech company, certainly from a privacy angle. And so, yeah, there are all those elements to it.

46:15

Speaker A

Right. Yeah, I think the AirPods, that's my bet. I think we're going to see a battle of these AI devices in the earbuds space. But you're right, it does seem like we are sort of doomed to just be videotaped by everybody at all times. Although we kind of already are.

48:42

Speaker D

So, still, looking at us right now wearing these AirPods, I've always been curious how they're actually going to do that, from a pure product perspective. Like, so I have a beard. If the stems, you know, feature the camera, does it just record my beard looking forward? Or do they have to stick out more as a result, and then they will look ridiculous? You know, everyone joked when the AirPods first came out about how ridiculous they thought they looked, because they're sticking out of your ears. But ultimately they're pretty streamlined, and you can't really tell all that often when you're looking at people. And we got used to it very quickly. But if you've got cameras sticking out of them? And then there was talk where it wasn't necessarily regular cameras but more IR cameras, used to potentially capture motion and things like that, to help with gesture control of different devices. And that made a little bit more sense to me. But I am very curious how they end up doing that. There was also talk that they were going to put a camera in the watch, and that you would have almost like a, you know, Dick Tracy-style camera that you would shoot people with while looking at your wrist. And so all these things are going to create situations where you just need new cultural norms to come in. And again, Apple has done much better than any other company. But to Meta's credit, they have done well with the Ray-Bans so far.

48:57

Speaker A

That's right. And I think the battle will definitely fall on whose assistant is better, and Siri has to get better. I mean, it feels like beating a dead horse at this point, but we didn't even talk about it, because it's so regular that Siri got postponed again, or features within Siri got postponed again. You had a really funny piece about that. You said it's almost like Apple is having some major issues with their AI implementation and strategy; they should probably look into that. But it just keeps happening, right, that this keeps getting delayed. And, you know, you start to lose faith over time, even with the Google partnership, that they're going to be able to figure this out.

50:16

Speaker D

Yeah, I was always a little bit skeptical. I mean, I've obviously been super skeptical of Siri, having used it over the past 15 years. But when they announced the Google partnership, I was always a little bit skeptical of the initial rollout, because it's sort of what we're talking about with the government, right? You can't just swap these things in. It may seem like it's that simple, but there are a lot of underlying things that need to be connected. Look at Amazon for an example of that. Right. Look how long it took them to rework Alexa to be able to work with things like Anthropic's models, and all the models that they're using behind the scenes to upgrade Alexa. It took over a year; they promised something and they couldn't deliver on the timing of it. And now we've seen the same thing play out with Apple. It just takes a long time to get all of the little pieces in place, because the last thing Apple can afford to do right now is put something out there, even in beta, in any user-facing service, and just have it flop again. That would be a death knell, I think. They would have to change the Siri name at that point. We might have the Microsoft-style funeral, where they would be walking down Cupertino with a coffin with Siri in it, because they would need a new branding if they fail one more time with this.

50:55

Speaker A

Yeah, I think it's long past time to do that. Could Amazon and OpenAI be the competition here? I mean, we talked about it before the break, but Amazon's going to invest $50 billion in OpenAI. Now, of course, OpenAI has a device program underway. Amazon has the Echo; I think Alexa is actually already pretty good. Could you see, as part of that deal, because OpenAI will be helping Amazon develop some specialized AI technology, that this could also be part of a counter? Could be a team battle: OpenAI and Amazon against Google and Apple.

52:11

Speaker D

Yeah, that's sort of where my mind went when I was reading this reporting. And again, $50 billion. Yes, it's over two tranches, it sounds like, $15 billion and then $35 billion, but still, $50 billion that Amazon is investing at a time when they're making cuts. They're famously doing layoffs, right, and they're getting dinged left and right for their capex spend. $50 billion is no joke. And they're spending that for a reason, obviously, with OpenAI. And so my mind again went to wondering: is this some sort of massive play to get sort of all of the models in house? There's a lot of talk right now about orchestration, and the idea that Perplexity and others are now trying to move their businesses into being these layers on top of the LLMs, so that you as a user shouldn't have to worry about a model picker and things like that. You should just say what you want and let a service pick the best one for you. And obviously that's harder with an Amazon, because, as you noted, they make their own product in Alexa. But given that they have the Anthropic partnership, and now the OpenAI partnership, is there a world in which they're using all those models behind the scenes, and they can use that to counter both Apple and Google? Where they say, look, if you're using those products, you're only going to get Google in both cases, because they're both using Gemini, plus their in-house models. Whereas if you use Amazon, if you use Alexa, maybe going forward you will have the power of Claude, you will have the power of ChatGPT, and you'll have the power of Alexa, all three, on top of maybe some others that they add in there as well. And it's sort of a playbook that they've run with the cloud, in a way, too. Right.

So it's like they view it as: you can pick which one you want to use, or let us pick the one we think you should use, from a product perspective. And so no indications that that necessarily is going to be what happens, but I wouldn't be shocked by that.

52:48

Speaker A

Okay, before we leave, I definitely want to talk briefly about this Netflix, Warner Brothers, Paramount deal. You've written about it; we haven't really talked about it on the show in depth yet. But the CliffsNotes here: Netflix had agreed to buy Warner Brothers Discovery, which has CNN and HBO, and was going to build this sort of powerhouse streaming company, maybe the streaming company of the future, by adding these old-school assets. Netflix is obviously in the lead; no one really comes close to it in streaming. So this might just have solidified it as the dominant service. It reaches a deal with Warner Brothers Discovery. Paramount comes in and says, nope, we want to make the deal instead, we weren't given a fair chance to bid, and just keeps throwing out these bids until both companies decide that Paramount will be the buyer and not Netflix. Warner Brothers Discovery is going to have to pay Netflix about a $3 billion breakup fee, and the final deal is going to be $110 billion or so that Paramount will pay for Warner Brothers Discovery, whose market cap, as you note in Spyglass, was $20 billion a year ago. So just give us your perspective on what happened here and what the implications are.

54:51

Speaker D

Yeah, so on one level, at the highest level, this is just a masterful job by David Zaslav, who's the CEO of Warner Brothers Discovery, because he was able to take a company that, as you noted, a year ago had a market cap that was a fraction of this offer, and still is right now, and turn it into this offer. And it basically unfolded with Paramount first coming out with an offer much lower than the current ones, I believe $19 a share, and we're up to $31 a share now with the newest one. And I think the wild card was Netflix coming in, because Netflix was viewed as obviously a big player, the biggest media company, if you want to call it that; its market cap is roughly double that of Disney. So they obviously have the capital to do whatever they want in a deal like this, but they had not historically done anything like this. And so I think Paramount basically felt like Netflix came in and stole this from under their nose. And there is a question of whether this was a master stroke by David Zaslav, orchestrating this whole thing, knowing perhaps that Paramount basically needed this more than Netflix did. And so they were going to drive up the price so that Netflix would walk away with their $3 billion all-cash consolation prize, which is a pretty nice offer. It's about what they make in profit in a quarter, and they just got that in one fell swoop. But still, this deal has gone back and forth and back and forth. And the fact that Netflix walked away relatively quickly once the Paramount offer came in, you know, kudos to Netflix; it seems like they had good discipline. They weren't going to get into some sort of bidding war and go outside of their bounds. 
But also, I'm just overall sort of sad for Hollywood, because I do feel like they didn't like either of these deals, but they're going to be in for a bigger world of pain with the Paramount deal than they would have been with Netflix. You could talk about the streaming dominance of Netflix and whatnot, but the reality, in my view at least, is that this is much more about the future going forward. And the future is going to be Netflix versus YouTube and a few other key players. I think Prime Video will be in there, Disney Plus obviously, and TikTok, which has interesting new ownership given this Paramount structure as well. All of those players are really the battle going forward. And here we're talking about a decaying industry, moviegoing, which is an industry I love, but it's not a giant growth industry, and these players are battling over these assets. It feels like Netflix would have been a good safe haven for a studio that's been owned by conglomerates for 100 years. This isn't a new thing. Everyone's afraid because we're in the world of tech now and AI is coming and all of this. But Netflix would have been a pretty good safe haven, I feel like, for this. And instead we're just going to get a straight-down-the-middle combination of two studios, and that's just going to be a lot of layoffs and this brutal sort of decline over a longer period of time.

56:10

Speaker A

Right. And I'll note that Netflix is up 26% in the past five days. So clearly the market has digested this and said, yeah, probably better that you didn't do the deal. I thought maybe it would be nice to roll all this content up. Obviously, as a consumer, you're not happy about that because you have fewer choices, and from a business perspective I understand why Netflix was interested. But obviously the market sees it a different way, it likes this outcome, and everyone will just move forward.

59:28

Speaker D

Yeah, we'll see. There's going to be a lot of fallout from this, and I think it's going to come from the antitrust perspective, because of the relationship between Trump and the Ellisons, and there are going to be a lot of different hearings on this type of stuff. And I think it'll play out over years and years, because they'll look back on it after it's approved and ask, was it approved for, you know, less-than-above-board reasons? So I think we're just going to hear about this for years and years and years. And the reality is, it is a bit sad. Obviously Paramount's play is going to be to try to bulk up to compete with the Disney Pluses and the Netflixes of the world. But are they realistically going to be able to do that? Maybe if they can leverage TikTok in some way, now owned in no small part by Oracle, maybe. But it feels more like this is still a slow-decay story, and they'll just sell their products, ultimately the content itself, to Netflix, just as they've been doing.

59:58

Speaker A

All right, folks, the website is spyglass.org. MG, always great to speak with you. I'm so glad we got a chance to speak today, especially after an incredible weekend of news that I think we're all still trying to wrap our heads around, and I'm so glad we got a chance to digest it here together.

1:00:58

Speaker D

Indeed. Good as always, Alex.

1:01:14

Speaker A

All right, thank you so much. Thanks, everybody, for listening and watching. If you haven't, and if you could, rate us five stars on Spotify or Apple Podcasts. It will go a long way toward helping the podcast reach new audiences, which would help us recruit guests, and that would always be great. So hope you do that. Hope you have a great Monday and the rest of your week. We'll be back here on Wednesday with another new interview. I'm not quite sure who it will be, but we'll hopefully touch more on the Anthropic Pentagon saga. So thank you again for being here, and we'll see you next time on Big Technology Podcast.
