Big Technology Podcast

Anthropic vs. The Pentagon, Bloodbath at Block, The Citrini Selloff

65 min
Feb 27, 2026
Summary

The episode covers Anthropic's refusal to allow Pentagon use of its AI for autonomous weapons and mass surveillance, resulting in a standoff over hypothetical scenarios. The hosts also discuss OpenAI's $110 billion funding round, Block's massive layoffs attributed to AI efficiency gains, and a research paper predicting economic collapse from AI displacement.

Insights
  • AI companies are increasingly positioning themselves through public stances on military use, with Anthropic using Pentagon disagreement as effective marketing for their 'ethical AI' brand
  • The rapid switching between AI tools (like from Cursor to Claude for coding) demonstrates how fragile competitive moats are in the AI application layer
  • Large tech companies may use AI as justification for significant workforce reductions, potentially triggering industry-wide layoffs beyond actual productivity gains
  • AI funding rounds are becoming increasingly complex with conditional tranches and circular financing arrangements that blur actual investment amounts
  • The stock market's volatility in response to AI research papers reveals underlying uncertainty about AI's true economic impact and current valuations
Trends
  • AI companies drawing ethical lines around military applications for competitive positioning
  • Consolidation of AI coding tools toward foundation model providers
  • Conditional and circular funding structures in AI investment rounds
  • AI-justified workforce reductions spreading across the tech industry
  • Market volatility driven by AI economic impact speculation
  • Autonomous agents becoming viable for enterprise knowledge work
  • Real-time employee feedback systems enabled by AI summarization
  • Rapid user switching between AI tools indicating weak product moats
  • Growing software engineering job postings despite AI coding capabilities
  • AI research papers moving financial markets
Companies
Anthropic
Refused Pentagon requests for autonomous weapons and surveillance use, positioning as ethical AI company
OpenAI
Raised $110 billion funding round with complex conditional terms and circular financing arrangements
Block
Laid off 4,000 employees (half the company) citing AI efficiency gains under Jack Dorsey's leadership
Pentagon
Demanded Anthropic allow AI use for autonomous weapons and surveillance, threatening supply chain restrictions
Palantir
Used Anthropic's Claude AI in a military operation to capture former Venezuelan President Nicolas Maduro
Amazon
Committed up to $50 billion investment in OpenAI with conditions tied to IPO or AGI achievement
Nvidia
Participating in OpenAI's funding round as both investor and compute infrastructure provider
Cursor
AI coding tool losing users to Claude Code, demonstrating fragile competitive moats in AI applications
Google
Mentioned as having Gemini AI competing with ChatGPT and Claude in consumer and enterprise markets
Lockheed Martin
Defense contractor contacted by Pentagon to assess Claude AI usage amid supply chain risk concerns
People
Dario Amodei
Anthropic CEO who refused Pentagon demands and published statement defending company's ethical stance
Jack Dorsey
Block CEO who laid off half the company citing AI efficiency and predicted others would follow suit
Sam Altman
OpenAI CEO managing $110 billion funding round and stating willingness to work with Pentagon
Nicolas Maduro
Former Venezuelan President captured in US military operation that reportedly used Anthropic's Claude AI
Emil Michael
Pentagon's Undersecretary for Research and Engineering involved in Anthropic negotiations
Jensen Huang
Nvidia CEO mentioned regarding company's investment participation in OpenAI funding round
Quotes
"The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good."
Defense official (quoted in episode)
"I believe deeply in the existential importance of using AI to defend the United States and other democracies and to defeat our autocratic adversaries."
Dario Amodei
"If generative AI works, it should have been clear all along that a single GPU in North Dakota generating the output previously attributed to 10,000 white collar workers in midtown Manhattan is more economic pandemic than economic panacea."
Citrini Research
"You could call us and we'd work it out."
Dario Amodei (Pentagon's characterization)
Full Transcript
2 Speakers
Speaker A

Anthropic's showdown with the Pentagon reaches an endpoint. We dig into what it means. Block is laying off half the company, as Jack Dorsey tells everyone AI might be coming for their jobs too. OpenAI finally raises its $110 billion fundraising round, and we have yet another AI science fiction selloff. That's coming up on a Big Technology Podcast Friday edition, right after this. Did you know your credit card points and miles can lose value to inflation? Credit card companies often reduce the redemption value of your points and miles. Now imagine a credit card with rewards that can grow in value. With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly, with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com/card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

0:00

Speaker B

This episode is brought to you by Indeed. Stop waiting around for the perfect candidate. Instead, use Indeed Sponsored Jobs to find the right people with the right skills, fast. It's a simple way to make sure your listing is the first thing candidates see. According to Indeed data, sponsored jobs get four times more applicants than non-sponsored jobs. So go build your dream team today with Indeed. Get a $75 sponsored job credit at Indeed.com/podcast. Terms and conditions apply.

1:10

Speaker A

Welcome to Big Technology Podcast Friday edition, where we break down the news in our traditional cool-headed and nuanced format. We have a great show for you today. We're going to break down everything that's happening between Anthropic and the Pentagon and discuss what it means for the company, and maybe the future of war and defense. We'll also talk about the big layoffs at Block; half the company seems like it's on the way out the door. We'll talk about OpenAI finally raising the $110 billion round, and that round might grow even larger. And of course, the Citrini selloff. We're joined as always by Ranjan Roy of Margins, who's back from Europe and ready to podcast. Let's podcast, Ranjan.

1:37

Speaker B

An AI science fiction driven selloff is catnip for me, so I had to come back for that.

2:16

Speaker A

We love it and it's been such a big week of AI news that that's the fourth most important story.

2:22

Speaker B

Somehow the Citrini thing was this week? When did it get published again?

2:28

Speaker A

It was like, that was this week.

2:31

Speaker B

My God. My God.

2:32

Speaker A

Every week is a month, it feels like. All right, let's get into the big story. This is one I've been really looking forward to speaking with you about. We haven't talked about it on the show yet, but today is Friday, and that means it is the deadline between the Pentagon and Anthropic: the deadline for Anthropic to accede to the Pentagon's requests that Anthropic give it the option to use its technology for autonomous weapons and to conduct domestic surveillance. The deadline is Friday, but Anthropic already said no to that on Thursday. We're going to get into what the repercussions are, but I think it might be helpful to actually talk through what's happening between Anthropic and the Pentagon and give some context here.

2:35

Speaker B

Walk me through it. Walk us all through it.

3:22

Speaker A

So you may recall that the United States captured the leader of Venezuela, Nicolas Maduro, in a raid where the United States didn't lose any service members and actually seemed to pull it off in a remarkable way. Now, it turns out that Anthropic's technology might have been involved there. This is from the Wall Street Journal a little while ago: Anthropic's artificial intelligence tool Claude was used in the US military operation to capture former Venezuelan President Nicolas Maduro. The deployment of Claude occurred through Anthropic's partnership with data company Palantir, whose tools are commonly used by the Defense Department and federal law enforcement. Following the raid, an employee at Anthropic asked a counterpart at Palantir how Claude was used in the operation. So, you know, it did seem like this was just Anthropic kind of leaking this news that it was working with Palantir to help capture Maduro. It's great marketing if you want to show the capabilities of your tool. But in fact, Anthropic really didn't have much idea of what was going on within Palantir as far as its technology being used for the raid, and it even had to ask a Palantir employee about it. And that's where these conversations began, of the tech company going to the Defense Department, or the Department of War now, and asking, how's my technology being used? This is how it all began.

3:25

Speaker B

Yeah, I think especially in terms of how it was being used. Again, the employee said it was Palantir layered on top of Claude, and that basically Claude has been helpful for synthesizing satellite imagery and different aspects of the intel picture. The thing that jumps out to me here is, what kind of responsibility should Anthropic have here? And this might surprise you a bit, but I'm not going to say I'm full Department of War Hegseth on this one. But I mean, the capabilities are embedded in Anthropic's model, and what kind of control do they actually have over how it's getting used? It's computer vision in the end, in this case, and Palantir is kind of doing the analysis on top of it. So I don't know, I've been having a tough time trying to figure out where I land on this. Where are you landing on this?

4:49

Speaker A

Well, first of all, I'm telling the story here to basically set up this idea that I'm not really sure there's a there there between Anthropic and the Pentagon.

5:46

Speaker B

That's what I mean. Okay, Okay.

5:56

Speaker A

I think there might be a lot of posturing and positioning, and there might be an argument that these hypotheticals matter, and we'll get into that. But this initially started on something so minor. And by the way, this is from Dave Lawler, who's an Axios editor, who responded to my tweet asking what Anthropic did with Venezuela. We don't know, and they didn't know either. And what he said is, yeah, it might have: in the past, Claude has been helpful for synthesizing satellite imagery and different aspects of the intel picture. We don't know that the technology was being used either for mass domestic surveillance or autonomous weapons in Venezuela. This whole thing began simply with Anthropic inquiring how its technology was being used by the Pentagon. That's when Dario Amodei, the CEO of Anthropic, makes his way down to D.C. And again, I don't think this was a disagreement that happened based off of real-world pictures, because the Washington Post and Semafor have both reported on what happened with the discussion next. So this is from the Washington Post: A defense official said the Pentagon's technology chief whittled the debate down to a life-and-death nuclear scenario at a meeting last month. If an intercontinental ballistic missile was launched at the United States, could the military use Anthropic's Claude AI system to help shoot it down? It's the kind of situation where technological might and speed could be critical to detection and counterstrike. Anthropic chief Dario Amodei's answer rankled the Pentagon, according to the official, who characterized the CEO's reply as, you could call us and we'd work it out. So basically, the Pentagon's version of events is, you know, maybe these conversations began around this Palantir thing. 
They start having these conversations together about how Anthropic's technology can be used, and somebody from the Pentagon presents this nuclear scenario to Dario, basically saying, we might need your technology to be used quickly. And Dario gives the most Dario answer ever: yeah, call us and we'll let you know. Right? You could totally see him saying this. Now, this is from the Post: An Anthropic spokesperson denied Amodei gave that response, calling the account patently false and saying the company has agreed to allow Claude to be used for missile defense. So here's my read. I don't think the Pentagon went to Anthropic and said, we need your technology for autonomous weapon use and mass surveillance of Americans. I simply think that the disagreements about hypotheticals became so out of control that there was a culture clash. I mean, think about the culture clash here. It's Dario Amodei, CEO of Anthropic, and we know how he acts, and Emil Michael on the other end. By the way, both of them have been on the show, and I've enjoyed speaking with both of these people. Emil is the Undersecretary of War for Research and Engineering at the Department of War.

5:58

Speaker B

He.

9:01

Speaker A

He says that Dario wants nothing more than to personally control the US military and is okay putting our nation's safety at risk. Now I'm just gonna turn it to you. This is kind of how I look at it: it's simply a conflict of cultures, not a specific disagreement over technology that will be used in the moment.

9:01

Speaker B

Okay. Sorry, I didn't even realize. This is Emil Michael of Uber fame, right?

9:18

Speaker A

Correct.

9:23

Speaker B

Like the mid-2010s, kind of very aggressive, brash personality. Like the face of tech spreading at all costs, screw the taxi unions, all that. Okay. Okay. Yeah.

9:23

Speaker A

He happens to be a very interesting guy. I've enjoyed speaking with him, but. Sorry, go ahead.

9:39

Speaker B

No, no, no. That's what. So I do agree. And it's rare, when the topic is autonomous weapons killing civilians, that I would say there is not a there there, and it's just, as you said, a culture clash. I agree, that's really what it feels like. The other thing I keep thinking about as I'm reading through all of the stories coming out on this one: in the past, when you would hear about some kind of nebulous Defense Department technology, war games, whatever else, as an individual you would have no real concept of what that might look like. To me, one of the most fascinating parts is we all use Claude. We all understand how AI works. So actually thinking through it, I kept wondering, what's the query? What's the analysis? Is it like, Claude, how do I capture Maduro? Here's 10 documents, give me a strategy. I don't know. I keep trying to think through what it actually looks like.

9:43

Speaker A

I don't know.

10:51

Speaker B

Right, Yeah.

10:51

Speaker A

I mean, my belief here, and we don't know exactly, is that Palantir did the heavy lifting on Maduro, and then maybe someone was using natural language to synthesize some information there. I mean, the jokes have been amazing on X, right? It's like, you tell Claude Code, capture Maduro, make no mistakes, and it just goes out and does it. We're not at that point yet. And we even joked last week with Aaron Levie that the fact that people were like, yeah, Claude has been used for warfare and was responsible for the capture of Maduro, and everyone's like, yeah, of course, not asking any questions, has kind of been a testament to the company's capabilities. But I think its involvement in this specific operation has been blown completely out of proportion. Just speculation, reading between the lines here. But the other side of the argument is that these hypotheticals do matter, and you want a defense contractor, because Anthropic, working with the Department of War, is a defense contractor, that will basically be ready to do what you need them to do when you need them to do it. And this is from Sean Parnell, the Pentagon's chief spokesperson. He said the department had no interest in conducting mass domestic surveillance or deploying autonomous weapons, but wanted to use AI for all lawful purposes. This is a simple, common-sense request, he says, that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. Again, this just goes back to, you know, this is obviously not anything in theater right now. However, the question stands: do you even want to make the Pentagon think that you might say, in a moment of war, we're not ready to go that far? I don't know.

10:52

Speaker B

What's your perspective? I'm going to present to you a hypothetical here, Alex. If you are the CEO of a massive AI research lab that has some powerful foundation models, do you allow your technology to be used for autonomous warfare? Do you?

12:42

Speaker A

I don't think so. I don't think so. But I'll tell you what I will do. I'm gonna preface this by saying I think that Dario has real values and Anthropic has real values, and they've mostly stuck with them, and I give them credit for doing that. However, let's just say I have this moment where the Pentagon is saying, we want to use all your technology for all lawful purposes. And I say, all right, just don't use it for autonomous warfare or mass surveillance. And they're like, just sign the all-lawful-purposes decree here, we don't want any caveats, it'll just make it easier for us. I might be tempted to blow that out of proportion. I might be tempted to, from a marketing standpoint, perhaps release a blog post and say, no freaking way, I'll never work with the Pentagon on these things. Lo and behold, Thursday night: a statement from Dario Amodei on our discussions with the Department of War. Dario said, and I love the way that Dario writes, I think he's a great communicator: I believe deeply in the existential importance of using AI to defend the United States and other democracies and to defeat our autocratic adversaries. Anthropic has therefore worked proactively to deploy our models to the Department of War and intelligence community, where we were the first frontier AI company to deploy our models in the US government's classified networks, the first to deploy them at the national laboratories, and the first to provide custom models for national security customers. He says: in a narrow set of use cases, we believe AI can undermine rather than defend democratic values. One is mass domestic surveillance. The other is fully autonomous weapons. Now, again, to our knowledge, these two exceptions, Dario writes, have not been a barrier to accelerating the adoption and use of our models within our armed forces to date. 
Regardless, he says, we are not going to change our position. We cannot in good conscience accede to the Pentagon's requests. I don't want to reduce this to public positioning, but I'm going to, just for the sake of argument. It's almost as if, even if the Pentagon's request was reasonable, they didn't necessarily need Anthropic to agree to these demands.

13:01

Speaker B

Exactly.

15:18

Speaker A

And Anthropic just ran with it. And now they're going to position themselves as, once again, hammering home that branding: the ethical company, the company that works for you, the company that has values, the company that is not growth at all costs. And who knows, because there are some consequences that could happen, and we'll talk about them. But I don't think there could have been a better situation for Anthropic than the one they were just handed.

15:19

Speaker B

I think we've been hanging out too much, because my affliction of looking at everything through a marketing and communications lens seems to be rubbing off. Because I'll admit, as this is all happening, that's the first thought going through my head, and I'm like, oh my God, this is gold from Anthropic's standpoint. We're the good guys. Do you not support mass surveillance? Do you not support fully autonomous weapons potentially killing civilians? But okay, let's separate those two out. Mass surveillance: bad. Fully autonomous weapons: I don't know. If that's the direction warfare is going, I feel like that's just going to be part of whatever China or other countries are developing anyway. So, as awful as it may sound, it's going to kind of be standardized, unless there's some kind of global agreement to actually ban autonomous weapons. But again, as someone who works in agentic AI, the more fascinating part of this to me is that with autonomous agents in anything, the assumption is that they can be controlled. And this is where I think it's actually kind of weird for Anthropic to be pushing this hard, to at least hint or imply that there is this world where they're not being controlled. And even in my day-to-day enterprise AI workflows with autonomous agents, can I actually rely on them? This whole promise of autonomous work being done, and agents running around doing all different types of work, does tie to the autonomous weapons thing. There has to be at least this idea being pushed that they can be controlled, and it's weird to me that Dario is kind of saying actually they can't.

15:43

Speaker A

I'm sorry, but isn't there a difference between Claude Code running a command and, you know, bugging out on your website, where you have to go fix it when you're like, this isn't working, versus potentially conducting a military operation where people are going to get killed?

17:39

Speaker B

But it's the same underlying process. It's the same underlying technology. That's what I mean. Yes, the scale and the gravity of it all is kind of terrifying, but it's the same with anything autonomous: self-driving cars. At a certain point, do we all accept that autonomy is good and predictable and will work, or do we say there is this level of uncertainty around it, that it can go haywire and kill the wrong people? Or, I guess, actually, is the argument not that it will go haywire and kill a bunch of random people, but are they implying that the Department of War, it's still weird for me to say that, actually will use it for nefarious purposes, and that's the risk? What do you think's implied in there?

17:53

Speaker A

Again, my perspective here: I understand why Anthropic would not want to sign this away to the Pentagon for full use, because if a company comes in with values, it has its values. But I don't know if there's a concrete worry here. That's what I'm trying to say. I think it's mostly just a blanket no, this is against our values, and let's go. And even Emil Michael was on, I think, Fox Business or Fox News, saying, we're in the middle of this discussion and this blog post comes out. Again, I don't want to be too cynical about this, but I do think this is sort of a PR opportunity. But back to your point. Think about this: I would trust a Waymo. I would get in a Waymo. I would trust it to drive me. I know it's probably good on, like, 99.7% of rides or whatever, much better than humans. But I don't want Waymo to be the police. I'm not giving Waymo a gun and saying, if you see a crime, go arrest somebody. I don't trust it to that extent. That's the difference I'm trying to draw.

18:45

Speaker B

Okay. No, no. Okay, I'll give you that. Waymo as basically RoboCop coming to life; you're able to get in the car, but you're anti-RoboCop here. Yeah, okay, I can see that. But going back to the PR standpoint, I was trying to be level-headed and take this genuinely seriously. And I have to imagine we'll get into OpenAI, but Sam Altman even came out and said the company would potentially be working with the Pentagon.

19:53

Speaker A

I think OpenAI has the same restrictions, by the way. And Sam is like, well, we're just going to hope we can defuse this, we'd like to try to help de-escalate things. That's just OpenAI agreeing with my statement that this is not a real disagreement yet. And so therefore. Go ahead.

20:27

Speaker B

Where's Sundar going to fall in this? That's what I want to know.

20:46

Speaker A

He's going to sit back and he's going to be like, we're printing. We don't have to be involved in this.

20:49

Speaker B

This is why you've got to have a monopolistic ad business just printing cash and always making Gemini better. You don't have to worry about autonomous weapons.

20:53

Speaker A

I mean, you can't pay for marketing like this. And I'm sorry if this comes off too cynical, but this is in Axios, from a defense official: The only reason we're still talking to these people is we need them and we need them now. The problem for these guys is they are that good. You can't pay for that type of marketing.

21:03

Speaker B

That's why. But it is interesting how. Okay, yeah, again, not trying to be too cynical here, but, like, Claude catches Maduro? Great headline, great marketing.

21:21

Speaker A

Kind of just not what happened.

21:35

Speaker B

Exciting, which is. But hey, that's the narrative and the meme. But then, yeah, Dario realizing this is a great opportunity for us to be the ethical AI company. I mean, it's a pretty good positioning nowadays. If you're setting yourself up for that conflict, having the Hegseths of the world coming at you on Twitter can actually help your case. So yeah, I never would have thought, on the subject of autonomous warfare, that I would say it's a meh story, but I think I agree on this one. And they just raised their giant round. They don't need this marketing. They already have enough. But good for Dario.

21:36

Speaker A

If you think they don't need the marketing, I think you're underestimating the level of competition right now. Every bit of marketing helps.

22:27

Speaker B

Yeah.

22:33

Speaker A

I mean, think about it. It almost follows the same line as the Super Bowl ad, right? Like, Claude won't ever do ads. Claude won't, you know, kill you in your sleep.

22:34

Speaker B

That should have been the Super Bowl ad.

22:44

Speaker A

Just run this, the fan fiction of this episode. But ultimately, I don't want to say it's entirely cynical marketing. I think part of Dario really does believe that these are not the uses Claude should be put to. And on the Pentagon side, you can totally see their side as well, where they're like, we don't want to be in a mission-critical moment and have Dario say, actually, you're not ready to do this.

22:47

Speaker B

So where do you fall on model companies regulating, I guess, I don't know if that's the correct word, use cases? Again: companionship, AI erotica. We've debated this in the past, and OpenAI has one view of it versus others. Do you think, as this evolves, and it almost comes back to the great content moderation debates of Facebook and others, do they have the responsibility to do that moderation? Because I do kind of think they do, but this is just going to get messier and more complex.

23:14

Speaker A

Yeah, I think they do. I mean, if you're a private company, you have at least the right, if not the responsibility, to try to make sure your product is used in ways that you think are beneficial to society. I don't see what the problem is.

23:55

Speaker B

Optimistic.

24:10

Speaker A

Well, allow me to take this moment to maybe not be as cynical as I've been in our first few minutes of this show and say, yeah, I think that is important.

24:11

Speaker B

Tech companies have some responsibility to society writ large. That's the.

24:21

Speaker A

I mean, I know it's controversial to say.

24:26

Speaker B

That said, that's a hot take, but

24:28

Speaker A

I can see people just hitting play on something else right now. But that's where I'm going to stand. I'll die on that hill. But this is not without potential consequences for Anthropic. Let's talk about them. The Pentagon might now label Anthropic a supply chain risk, and Pentagon officials, this is from the Journal, have reached out to defense contractors including Lockheed Martin and Boeing in recent days to gauge how much they use Claude. I love that scene, by the way. Can you imagine the Pentagon on the line with Boeing, asking, how much Claude do you use? Because we might ban it for this hypothetical reason. Are we not going to be able to make planes anymore? It's just crazy that ChatGPT came out three years ago and we're at

24:32

Speaker B

this stage already. Critical infrastructure right now. Yeah, yeah, I think, I mean, that's the politics element, I guess. That part is almost more terrifying to me in the actual near term: that that level of tit-for-tat Twitter fighting can actually lead to some kind of supply chain risk designation actually derailing a private business. That I don't like.

25:12

Speaker A

They might also invoke the Defense Production Act, which would require Anthropic to supply its technology to the Pentagon the way the Pentagon wants, which would be unprecedented. Again, I mean, the tweets coming through the timeline this week, and I know Twitter is not real life, but a lot of people in the AI world, a lot of buyers, are paying attention to this. Here's one from another Twitter user: Best proof Anthropic has the best internal models? The Pentagon would rather invoke the Defense Production Act than use someone else's AI.

25:50

Speaker B

Do you think they get rate limited?

26:20

Speaker A

Maybe that's what actually started this.

26:23

Speaker B

Maybe that's what actually happened. They were, like, about to capture. Actually, this might be too dark, but I was going to say that's why we didn't invade Iran yet: they're getting rate limited on Claude.

26:26

Speaker A

I don't know, Ranjan. I'm not going to go there.

26:39

Speaker B

I'm going to take that one back.

26:41

Speaker A

I don't know. Maybe by the time we publish this podcast. Anyway, let's move on.

26:43

Speaker B

We'll let that go. Move on to lighter news, like funding rounds.

26:47

Speaker A

All right. OpenAI announces a $110 billion funding round with backing from Amazon, Nvidia, and SoftBank. So it's finally here, the round we've been talking about. Man, that was quite a transition. The round we've been talking about has arrived, and it is bigger than expected, Ranjan. It started out at 50 billion from Amazon, then it went to 100 billion, and now it's 110. And this is from CNBC: other investors are expected to join as the round progresses. So it's not even over. We just have these big commitments from three big companies. 50 billion from Amazon. That's wild. Speaking of Dario, I wonder how he's feeling now that one of his biggest partners in Amazon is making a deal like this with OpenAI. We won't spend too much time on it. But, Ranjan, your takeaway on the size of the round, who's in? And I love how we're going to gloss over the biggest funding round of all time.

26:50

Speaker B

Only in February 2026 could a $110 billion round in a private market actually be, like, let's not spend too much time on it. But I want to call out, and I'm very glad, that OpenAI is making this funding round the most OpenAI-ish yet. 110 billion is the headline, but it is impossible to tell what the actual round is. From The Information: Amazon's decision to invest up to $50 billion in OpenAI could hang on whether OpenAI goes public or reaches a loosely defined milestone known as artificial general intelligence. That was my favorite part, because we lost that benchmark with the Microsoft and OpenAI deal, but now it's back: this idea of declaring AGI actually potentially unlocking tens of billions of dollars. And again, we have our benchmark here on the podcast: Waymos operating in New York City officially will mean AGI is here. But I don't know, did you like the complexity of the funding? Do you genuinely call this a $110 billion round? There are so many stipulations here.

27:45

Speaker A

No, it's not. It's not. Certainly not that.

29:00

Speaker B

It's.

29:01

Speaker A

I think it's a very important point that you're calling out here. Remember when OpenAI and Nvidia said they were going to do $100 billion together, and it was another one of these: well, it's 10 billion now, and in time it turns out that $100 billion was actually 30 billion, which is what Nvidia will be investing. Although Jensen Huang said, we hope they invite us to come back. But certainly an intent to invest 100 is very different from an actual action to invest 30. To me, the interesting thing here is what happens as a result of these major deals. This is from CNBC: OpenAI said it's expanding its existing $38 billion agreement with Amazon Web Services by $100 billion over the next eight years. So it's going to get 50 from Amazon, but it's going to put back either 100 or 138.
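As a back-of-the-envelope check on the figures Speaker A cites here (these are the episode's reported numbers, not confirmed deal terms), the net flow between the two companies can be sketched as:

```python
# Rough net-flow arithmetic for the figures discussed above (all in $B).
# These are the numbers as reported in the episode, not confirmed deal terms.
amazon_investment = 50          # Amazon's (conditional) investment in OpenAI
existing_aws_commitment = 38    # OpenAI's existing AWS agreement
aws_expansion = 100             # reported expansion over eight years

openai_spend_on_aws = existing_aws_commitment + aws_expansion
net_toward_amazon = openai_spend_on_aws - amazon_investment

print(f"OpenAI -> AWS total commitment: ${openai_spend_on_aws}B")  # $138B
print(f"Net flow toward Amazon:         ${net_toward_amazon}B")    # $88B
```

This is the "circular financing" shape the hosts describe: even under the most generous reading, OpenAI commits more back to AWS than Amazon conditionally invests.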

29:02

Speaker B

That's why I'm so happy about this announcement. It's got everything: nebulous benchmarks and tranches, circular funding and financing. It's a classic Sam funding round. Yeah, as you said, Amazon potentially putting in 50, and OpenAI, potentially over eight years, taking that 38 billion commitment up to 138. It's perfect OpenAI funding.

29:58

Speaker A

That's right, yeah. Sam was on CNBC earlier today and basically said, look, this is only going to work if the revenue goes up. And on the circular funding question, I think he's right. It's only going to work if the revenue goes up, and that's basically it.

30:25

Speaker B

That's good business right there.

30:44

Speaker A

Good business. But also like, yeah, of course, if the exponential continues, then he'll continue to get the money and it all makes sense. And if it doesn't, he won't get the money.

30:45

Speaker B

Yeah, I definitely want to get into where you see OpenAI's business at this exact moment. But one thing that was also interesting to me: there wasn't a lot of talk around where this money gets invested. In the old days of a year or two ago, any of these big funding rounds would really center around getting to that next-generation model, or building data centers. Did you see anything like that? There wasn't a big flagship push around what this money is actually going to mean for both OpenAI and the ecosystem at large.

30:56

Speaker A

Yeah, it has to be infrastructure, right? And just the support for inference, especially when you're working with partners like Amazon and Nvidia. I think that's not an accident. And OpenAI basically told us what the game is, right? They're like, if we are able to build more infrastructure and serve more demand, we're going to make more money, and we'll keep building until that proves to be untrue. So to me, this is just one step along the way on that front.

31:42

Speaker B

Well, but where do you view OpenAI competitively right now? I'm curious. I'm just going to say, it is crazy to me: my ChatGPT usage has declined dramatically. For day-to-day, just basic stuff, I'm using Gemini a lot more. And we've talked about switching costs and moats. I know the idea of memory, which everyone has been talking about for a long time, is supposed to start building that moat, but it's still such a reminder to me of how brittle a lot of these foundations are, even ones that might seem solid at, what is it, 900 million users now?

32:06

Speaker A

Yeah, they just said today: 900 million.

32:46

Speaker B

Yeah, 900 million. Yeah.

32:49

Speaker A

They're definitely on track for a billion by mid to late March. Yeah, exactly, March or April.

32:50

Speaker B

And again, ChatGPT is, like Google, the trademark brand name of AI for the average person. It's like a verb. But still, I was actually looking this up the other day because I was curious. On this show a year ago, there were headlines that Anthropic was screwed. Usage was going down on the consumer side. And again, massive credit to them: they had such a clear bet, and we outlined this very early, that they were going all in on coding and the API. They were giving up the consumer product, basically, and it worked brilliantly for them. But 12 months ago, 14 months ago, the narrative was very strongly that Anthropic was in a bad position, kind of where Perplexity is now, OpenAI was just dominating, and Gemini was on the rise. Two years ago, Gemini and Google were dead. It keeps reminding me just how quickly things can shift in this market right now.

32:55

Speaker A

Yeah, it changes fast. I mean, obviously when people think about generative AI, they think about ChatGPT. That's what you hang your hat on right now if you're OpenAI. Some of the other bets, Sora, you know, haven't worked exactly according to plan. We still don't have the device. But I think you basically have this two-pronged strategy: you're growing ChatGPT from the ground up, it's the leading consumer product, and you use that leverage to move into enterprise. And they're making their move into coding, and it's very interesting what's happening in the coding market now, wouldn't you say? Because you have Claude Code, which I've been using like crazy. I'm hitting my limits every couple hours just in Claude Code. It's amazing. And there are some other players, like Cursor, that are starting to go up and down as the two big boys get involved. So what's happening with Cursor, Ranjan?

33:59

Speaker B

Okay, so I wanted to highlight this story. There was one tweet from Kyle Russell about how the company was removing 90 seats, and basically over Slack people were like, hey, can you unsub me from Cursor? Yeah, I'm not using it anymore either. And again, this momentum, the speed and inflection with which people can shift, this idea of a moat, it's just so fascinating to me. Because a year ago, Cursor was synonymous with autonomous coding, codegen, any kind of AI-driven coding, and how quickly people can switch shows the moat was never really there. So I think it raises two questions. One: does everything condense down to the foundation model labs, and is my argument that it's the product, not the model, completely wrong? If that happens, I will say so. But the other side of the Cursor story, and we don't know definitively where things are internally for them from a revenue perspective, is that I hope this starts to challenge every claim around ARR, annualized recurring revenue. That needs to go away. Because anyone who knows, knows it's taking one month of data and extrapolating it times 12 when you have a good month. I've seen people joking, but maybe it's the case, that you take one week or one day and multiply by 52 or 365 and call it ARR. I think the market actually trying to understand stickiness over time is going to become much, much more of a valued thing right now.
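The ARR extrapolation Speaker B criticizes can be sketched with hypothetical revenue figures; the point is how much a single good month inflates the annualized headline:

```python
# Hypothetical monthly revenue figures ($M); the last month is a one-off spike.
monthly_revenue = [1.0, 1.1, 1.3, 2.4]

# The "ARR" move Speaker B criticizes: take the best month and multiply by 12.
naive_arr = monthly_revenue[-1] * 12

# A steadier view: annualize the average of the months you actually have.
averaged_arr = sum(monthly_revenue) / len(monthly_revenue) * 12

print(f"Headline ARR: ${naive_arr:.1f}M")    # 28.8
print(f"Averaged ARR: ${averaged_arr:.1f}M") # 17.4
```

Neither number captures stickiness, which is Speaker B's deeper point: only retention over time does.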

34:54

Speaker A

Yeah, I think that's great. There's definitely been some inflation when it comes to ARR, right? Companies are putting out releases about ARR, and you have to shake your head a little bit: is that really the number or not? But going back to your previous point, I think that's the most important one. When it comes to AI applications, because of generative AI's general-purpose nature, you always have to worry about one of the big companies gobbling up what you're doing. And that's certainly what's happened with Cursor. Claude Code was initially seen as a frenemy, or maybe just a different version of Cursor for different use cases, since it's not an IDE. Now it's fully competitive, and people are just working within Claude Code and within Codex. So that's what's happening: the big models are just gobbling up the smaller competition.

36:45

Speaker B

So, question: 12 months from now, is Anthropic still the king of the hill, or have things shifted again dramatically? Because there are some directionally similar things here. OpenAI is growing and raising more money, but who owns the narrative, the conversation, the next wave of innovation? Do you think it's still Anthropic six months from now, 12 months from now, on coding?

37:45

Speaker A

Yes.

38:14

Speaker B

No. No.

38:14

Speaker A

Everything else? I don't know.

38:15

Speaker B

Overall, yeah.

38:15

Speaker A

Well, they're there. I would still argue that OpenAI is the leader, though clearly Anthropic is ascendant right now. But I think coding is going to be very interesting, because that is the use case right now that's clearly economically valuable and exploding. In fact, I think some of the numbers on Anthropic's paid subscribers are quite impressive, actually. I have them in my inbox. You want me to read them?

38:16

Speaker B

Read them to me.

38:41

Speaker A

All right, let's see. Free users on Claude are up more than 60% since January, the fastest growth in Claude's history. Daily signups have tripled since November. Every single day. Okay, I'm just making sure this is on the record. Every single day this week has consecutively broken the record for Claude's largest-ever day of signups. And paid subscribers have more than doubled since October. People are staying and upgrading because they value Claude's most advanced capabilities and consistently say it sharpens their own thinking. So it's more than a Super Bowl bump; they're saying the adoption started months before the ad campaign. So they're doing really well. I think the coding fight is going to narrow, but a year from now I think they're still going to be in the lead, and I still think that will be the biggest use case for these models as the vibe coding stuff continues. What do you think?

38:42

Speaker B

Okay, well, I'll differ, again speaking for the company I work for, Writer. Autonomous knowledge work is where we play. Claude Cowork is in there; Manus is kind of the only other real competitor. I'm still standing by my prediction that that's going to be the big trend of the year. It's self-interested, I'm talking my book, but really. And I've got to say, it's been interesting to me how Claude Code has been the entry point for most people, because our company only works with enterprises, it's not a consumer product, so not as many people are feeling it. But I think you get it now, right? Claude Code gives you that feeling of what I've been trying to describe since October: actual autonomous agentic work. Agents out there doing stuff for you, with many steps, that actually works. And you feel it now, right? I heard you talking about it.

39:40

Speaker A

I feel it, yes. But I also think there's a long way to go. Although it's done a great job building some internal tools for me, I have to say.

40:32

Speaker B

Okay, you're pro-agentic now. You're coming around.

40:39

Speaker A

I'm feeling the agentic. I'm feeling it, but I haven't fully drunk the Kool-Aid like you have. Okay, we have to move on. Go ahead.

40:43

Speaker B

I was going to say, what I've actually come away with is that the words agent and agentic were so beaten down and mischaracterized throughout 2025 that everyone now has a hard time saying them. Whereas in reality, what's happening now is actually agentic. This is what we were promised. But we heard the word for so long while it wasn't working, or none of it made sense, that people are uncomfortable saying it.

40:52

Speaker A

No, I think this is a good distinction of where we sit. You believe this agentic stuff will eventually be good enough to take the shots. And I'm like, do not take the shots. If we're going to have to shoot, let a human do it.

41:22

Speaker B

Okay. Maybe that is.

41:38

Speaker A

That's the breakdown. Yeah. I think you're far too trusting of it. But anyway, we'll go down that rabbit hole another day. Or you can respond if you want.

41:42

Speaker B

No, I think that's a reasonable characterization. I like our regular standing debate around, is it the product or the model? It's probably a little more tasteful than, should AI take the shot or should humans take the shot?

41:53

Speaker A

But these are both real questions.

42:10

Speaker B

These are both real questions. Yeah.

42:12

Speaker A

Yeah. All right. I'm going to take a break. We're going to come back after this and talk about Jack Dorsey laying off 4,000 at Block, and then, in the time we have left, the Citrini research paper that caused this selloff in the market. We'll be back right after this. Did you know your credit card points and miles can lose value to inflation? Credit card companies often reduce the redemption value of your points and miles. Now imagine a credit card with rewards that can grow in value. With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly, with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com/card today. Check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

42:14

Speaker B

The world moves fast, your workday even faster. Pitching products, drafting reports, analyzing data: Microsoft 365 Copilot is your AI assistant for work, built into Word, Excel, PowerPoint, and the other Microsoft 365 apps you use, helping you quickly write, analyze, create, and summarize so you can cut through clutter and clear a path to your best work. Learn more at Microsoft.com/M365Copilot. And we're back

43:20

Speaker A

here on Big Technology Podcast, Friday edition. All right, so the news, from SFGate, is that Jack Dorsey is laying off 4,000 at Block and saying others will do the same within the next year. So it's not just that the size of the layoff is massive, which it is. The real headline here is that Jack Dorsey has said AI has helped us become so efficient that we're able to lay off half the company and be as productive, and, by the way, this is coming for others as well. What do you think about that, Ranjan?

43:49

Speaker B

This one killed me, I've got to say. I'm of two minds here. One is, again, as we've been discussing today, everything is comms and marketing in my mind, and it really feels like that here. Block's revenue growth has been slowing. On profitability, 2025 wasn't a bad year, but they're a company that saw incredible growth during COVID and it's been slowing since. The stock's down 75%. The overall business is not great. So to say it's AI kind of bothers me. It feels like a cop-out versus: listen, like a lot of big tech, we were a little bloated, we overhired, we're just trying to right-size the business a little bit. To me, that's what this is. And I'm saying that as someone who genuinely believes workforces are going to get transformed and there's going to be real dislocation in the industry. I felt this one was just Jack saying "AI" when he has to lay off a bunch of people, right?

44:21

Speaker A

And I think we should give the context here. Block is profitable. It's a profitable company making these moves, and still it's up 14% today after the news. So here's what I'll say about it. This is not the first time Block has done layoffs this year. Block did layoffs earlier in February. This is from Wired: after hundreds of workers were laid off in early February from Jack Dorsey's Block, some of the people remaining at the company say the internal culture has devolved to a point where performance anxiety is running rampant, using generative AI is required, and overall morale is rapidly deteriorating. Listen to this: Block employees are currently expected to send an update email to Dorsey every week, and he then uses generative AI to summarize the thousands of messages. I don't know if this is the most effective use of the technology, and I kind of hate to make the argument here, but is Jack onto something, and are we going to see more of this? Because the idea that a CEO could get a weekly email from thousands of employees, throw them into a generative AI engine, get a feel for what's going on in the company, and that his reports can do the same with their legions of employees and become more effective through that: is that kind of where this technology is heading?
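The workflow described here, thousands of weekly updates compressed into one digest, is essentially a map-reduce summarization pattern. A toy sketch, where `summarize` is a stand-in for an LLM call (here it just takes each update's first sentence) and all names and updates are invented:

```python
# Toy map-reduce sketch of the workflow described above: many weekly updates
# in, one digest out. `summarize` is a placeholder for an LLM call.
def summarize(text: str) -> str:
    # Stand-in summarizer: keep only the first sentence of each update.
    return text.split(". ")[0].strip()

def weekly_digest(updates: dict[str, str], batch_size: int = 2) -> str:
    # Map step: compress each employee's update individually.
    lines = [f"{name}: {summarize(text)}" for name, text in updates.items()]
    # Reduce step: group the per-employee lines into batches, as a real
    # system would to fit each batch inside a model's context window.
    batches = [lines[i:i + batch_size] for i in range(0, len(lines), batch_size)]
    return "\n".join("; ".join(batch) for batch in batches)

updates = {
    "ana": "Shipped the payments refactor. Next week: latency work.",
    "bo": "Closed two enterprise deals. Pipeline looks thin for Q2.",
    "cy": "Morale on my team is low after the layoffs. Hiring is frozen.",
}
print(weekly_digest(updates))
```

Speaker B's objection below still applies to any version of this: if the inputs are self-promotional, the digest inherits that bias no matter how good the summarizer is.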

45:41

Speaker B

It was interesting when I read that. On one hand, to me that would actually be the wrong process, because if you're asking everyone to essentially sell themselves and the work they're doing on a weekly basis, and using that as your foundation for understanding the state of your company, it's going to be biased positive. That's not actually good, because everyone's going to say they did amazing things, everything is great, and then you summarize it and Jack's just sitting there thinking everything's great. But it's funny to me that you took it as a negative. You don't think it's cool at all? The idea that you can now manage in different ways, act at a scale you never would have thought possible before, really getting a view that's at least semi-true, versus having a bunch of people spend a month on a report for an all-hands meeting or a board meeting where they update you? That you can actually have more real-time feedback like that, you don't like?

47:13

Speaker A

I don't think I'm criticizing that. Oh, you're not? Let me tell you this. Instinctively, I'm on the side of workers here. I think it's kind of gross that a CEO, instead of talking with them, is having AI summarize their notes, and making them write these notes. Just think about how much you have to write, and that gets fed in. You're probably writing with AI, you're probably

48:21

Speaker B

almost certainly using AI. Your agent is talking to Jack's agent is what's happening here.

48:44

Speaker A

But ultimately, I think I have to get past that. I actually think that if I were running a company of that size, I would do this. I really think it's a great way to stay on top of a company. I don't think it makes the company 50% more efficient, and it's natural to get these quotes I read from SFGate from employees who don't like the mandating of AI and also aren't happy that half the company is leaving. But maybe the truth lies somewhere in the middle. And I think we should really focus on the warning, so to speak, that Jack gave to everybody else, saying, I just think we're early, I'm going to be honest about it, and I expect many others to do the same thing. Because I got this note from somebody who's worked with Jack in the past: this Block news is going to cascade hard. Jack just put the question to every CEO in tech, and maybe beyond, of whether they are carrying dead weight that could be shed. If a few more tech companies pull moves of this magnitude, and we know they will, then the odds of it crossing over increase tremendously. That sounds right to me, and probably we're going to have many tech company CEOs, and by the way, they know Jack can run a bloated company, just look at what happened with Twitter, saying, maybe we don't have to do 50%, but can we do 20? It's a little bit scary.

48:49

Speaker B

I think it's scary, but when hiring at these companies was up by 100% or 200% in a condensed amount of time, based off extrapolated revenue and growth numbers from COVID, we weren't all complaining. Cynical isn't quite the right word, but I know a lot of people at a lot of tech companies who get a lot of money for not a lot of work. That's become increasingly the case over the last five, seven years, and at the Googles and the Facebooks for a while before that, I would say. It's one sector of the economy that became the most valuable sector of the economy for a 10- or 15-year period, and it became bloated. To me, what AI is doing is this: the value we had assigned to work in software and technology over the last decade is not the same anymore. That's happened to many, many other industries over time, and it causes disruption. But to me, that's almost a natural business cycle. I'm not too doomer about it. Maybe that's short-sighted, but it's not that different from other shifts that have happened over time.

50:12

Speaker A

Yeah, I guess you could condense, like, two 75%-effort email jobs today into one email job if you have generative AI. But as we have this conversation, I don't think either of us should discount the fact that there are real people in these jobs, and this really sucks. Especially now, when we're in a no-hire, no-fire period, for every person who gets laid off at a place like Block, it's a disaster in each one of those cases. And I don't want to leave that out.

51:40

Speaker B

No, no, that's the problem with all this. I don't want to shortchange it. Getting laid off sucks, and it's just sad. But it's also, like: does Metallica and Benson Boone need to play Dreamforce? Is that the sign of a healthy industry, or an industry that might be getting a bit soft? I ask you: if you could keep just one of them, Metallica or Benson Boone?

52:14

Speaker A

But look, as long as we keep Metallica, right? We've got to keep Metallica.

52:45

Speaker B

I don't know. That always kind of saddens me, that they're playing a Dreamforce. Benson Boone? Yeah, put him up there. Not Metallica.

52:49

Speaker A

This is going to be embarrassing. I don't even know who Benson Boone is.

52:59

Speaker B

He's the guy who did the flip at the Grammys. No?

53:02

Speaker A

No idea who that is. All right, so maybe you cut him. Or maybe you keep him, and you keep Metallica, and you keep your employees. That would be my preference.

53:07

Speaker B

Maybe I'm going to create an agent for you to keep up better with pop culture, Alex.

53:18

Speaker A

Well, I would like that.

53:21

Speaker B

Yeah.

53:22

Speaker A

I would not. I would have to put a filter on that agent's emails. Just too much. But by the way, here's where this becomes a real problem: if every company does it. I actually don't think Jack is right, and we're going to talk about that in a moment. Even with these AI tools, the software engineering employment numbers are going up fairly quickly, which is fascinating. But if Jack is right, in the case that he is, that could be rough. Think about every company coming out and doing this. We've seen Amazon do these big layoffs, right? If every company comes out and does a 20% layoff, that's tough. That's tough if you're a tech worker.

53:23

Speaker B

But the amount of money everyone in tech has made relative to every other industry over the last 10 to 15 years: I think that's going to be one of the more interesting things about how this plays out politically. This is causing a disruption in a sector that got a lot bigger but is still a small percentage of overall employment in the economy. So do you think people will react as strongly, be up in arms about this? And again, as someone who works in tech, I'm saying it's harder for me to be that saddened by it, given how these companies have been able to operate for a long time.

54:06

Speaker A

I mean, if you're asking me whether there's going to be an outpouring of national sympathy for tech workers, I don't believe so. Remember, this is a country where many people celebrated the Palisades fire when they saw that people in a different socioeconomic status than them had lost their houses. So I don't really feel like we're a nation of empathy right now. At least we should be, but we're not. All right, speaking of cascading crises, let's end with this. I'm sure you saw this Citrini letter about the 2028 global intelligence crisis. I'll try to summarize it as best I can. Basically, this research firm, who may or may not have shorts, I don't know 100%, but it's been speculated that they do in some of the companies that have tanked because of this letter, looked at what happens if generative AI works. They write: it should have been clear all along that a single GPU in North Dakota generating the output previously attributed to 10,000 white collar workers in midtown Manhattan is more economic pandemic than economic panacea. So basically they say, look what's going to happen: there's going to be a human intelligence displacement spiral where people automate jobs away. Maybe this is where that Jack memo actually ends up in the bad scenario, right? Because then you have people with mortgages they can't pay, and stocks go down. And so many large parts of our economy are based on wanting to avoid annoyance: not canceling certain things, not disputing certain fees. The agent goes out and cancels those things and disputes those fees, and all of a sudden consumer spending is down, growth is down, and private equity that depends on all this starts to go up in flames. And even the apps: you can build your own delivery apps, for instance.
So those businesses go away, all these displaced white collar workers end up taking blue collar jobs, and there are just no jobs left in the economy. I think I boiled it down. That's kind of the argument. And I think you can tell by the tone of my voice, I'm not convinced they're right here. What was your reaction?

54:56

Speaker B

So my reaction to the actual content of the piece and my reaction to it causing a stock market selloff are two different things, I think.

57:08

Speaker A

I.

57:21

Speaker B

It was an interesting piece of writing, and it raises these kinds of questions by trying to assign value to ideas like: if we're locked into subscriptions, we forget about them. Imagine you have an agent that is actually able to track your Netflix, Disney+, Hulu, all of your utilization of those services, and then the ones you're not using, it goes and cancels for you. That sounds pretty good, right?

57:22

Speaker A

Oh, I would love that. Yeah, exactly. The argument is that that will cause cascading economic problems.

57:57

Speaker B

But this is where, if that is the foundation of the US economy, that's the more terrifying part to me, more than the actual unwinding of it. So that part, I don't know. It was interesting, the kinds of questions it raised. I saw arguments over it using DoorDash as an example, and, as not the biggest fan of DoorDash, as readers of Margins will know, I actually do think they're going to be the hardest to displace of any of these, given it's a marketplace with physical labor elements to it. So there are definitely weaknesses in the piece. But the idea of some kind of cascading downward spiral: it presented a pretty interesting, consistent narrative that told a good story, so I see why it had the impact it did. My hot take is that it raised an actually more interesting issue, which is the state of the current stock market. The valuations of a lot of companies have gone in the same direction for a very long time, and I think this is more an unmasking of general unease and worry about valuations than "AI is going to destroy society." It's an excuse to knee-jerk sell things that, on paper, have been marked up insanely over the last number of years, but that you don't feel are actually that valuable. That's how I'm reading it.

58:04

Speaker A

I like that take. I like that take a lot. I think we're agreeing with each other a little too much, because that feels spot on to me. I was asked about a version of this on CNBC this week, and I had to cite the "something big is happening" paper. I think this is kind of what it is: there is this belief that something big is happening, and the question is what the magnitude is, and there's this instinctive race to go and say it's going to take our jobs and destroy our economy. The one issue that I had with that paper, and this is sort of my core issue, is that it just wasn't imaginative at all. It didn't consider that somebody who's displaced has any dreams of their own that they might go build now that these tools exist. It sort of felt like the economy is stagnant. I really believe that if these tools work the way you think they're going to work, then they're just not going to cause vast economic displacement. Here's the line that I really hated in every way: AI was exceeding expectations, and the market was AI; the only problem was the economy was not. I just think that if AI exceeds all expectations, then the economy is going to become AI and enable people to grow much more than they have previously. And so the economy will be AI, and the economy will grow. That's just my perspective.

59:56

Speaker B

Yeah. I saw this stat about the number of professional photographers: there was worry that, first with digital cameras and then with phone cameras, it would destroy the entire industry. I guess this is the Jevons paradox in action, where the increase in access to taking photos actually created massive new demand in industries around photography. It was a nice little encapsulation of what can happen. Suddenly everyone needs professional-quality photos, where in the past you wouldn't have cared as much. And because you could take photos on your phone, it created social media (whatever that means for society is another question). But it was a really nice, simple picture from the last 20 years of our lifetimes of something that could have been pure doom actually turning into something positive in an unexpected way.

1:01:23

Speaker A

That's it. I mean, if you think that everything is static, that people don't want to do new things or grow, and that they're satisfied with whatever they're doing, then you believe the Citrini paper. If you believe that there's growth, that the economy changes, and that people find new things to do, then you don't. And in fact, Citadel wrote a rebuttal: the number of software developer jobs being posted on Indeed is far outpacing total job postings in terms of percentage growth. That says everything you need to know. If AI is able to code so much better than so many people now, why are software engineering postings outpacing the rest? It's that when you have these tools and you're able to be more productive, you're able to do the things you couldn't do previously. And so you want to do them; you don't shrink into a corn cob and say "I'm done" because of these things. That's why papers like the Citrini paper annoy me: they don't have any imaginative thought, they're not realistic about the way the world works, and they scare people, and the fear translates into clicks. God, I guess I'm being very harsh on them right now, but I'll stick with it. It just seems to me to be the worst way to do things.

1:02:32

Speaker B

And I think the other worst way to do things is the idea that the stock market is so brittle right now that an imaginative AI sci-fi paper can cause a selloff. Get a handle on yourselves, everybody. Come on, stock market. It's okay. Just take a breather here.

1:03:48

Speaker A

I mean, if anyone shouldn't be freaking out, it's the stock market. We know the market reacts so coolly to any bit of news. Relax, goddammit.

1:04:11

Speaker B

Just not a Substack. Not a Substack.

1:04:22

Speaker A

Not a freaking Substack. Obviously Substack was out there like, "We moved the market." Is this the way you want to move the market? Yeah, I don't think so. All right, let's pack it up and go home, and we'll cool off and come back next week, and hopefully the world will still be standing. Does that sound like a good plan?

1:04:24

Speaker B

I think so. I hope so. I'll see you next week.

1:04:40

Speaker A

All right, see you next week. Thank you everyone for listening, and we'll see you next time on Big Technology Podcast.

Did you know your credit card points and miles can lose value to inflation? Credit card companies often reduce the redemption value of your points and miles. Now imagine a credit card with rewards that can grow in value. With the Gemini credit card, you can earn Bitcoin or one of over 50 other cryptos instantly, with no annual fee. Every swipe at the store or gas pump earns you instant rewards deposited straight to your account. Visit gemini.com card today, and check out the link in the description for more information on rates and fees. Again, if you're looking to invest in Bitcoin but don't know where to start, the Gemini credit card makes it easy. Issued by WebBank. This is not investment advice, and trading crypto involves risk. Check Gemini's website for more details on rates and fees.

Michael Lewis here. My best-selling book The Big Short tells the story of the buildup and burst of the US housing market back in 2008. A decade ago, The Big Short was made into an Academy Award-winning movie, and now I'm bringing it to you for the first time as an audiobook, narrated by yours truly. The Big Short's story, what it means to bet against the market and who really pays for an unchecked financial system, is as relevant today as it's ever been. Get The Big Short now at Pushkin FM Audiobooks or wherever audiobooks are sold.

1:04:43