Anthropic vs DoW, Ben Thompson Joins, Ellison Says The Biggest Number | James Beshara, John B. Quinn, Michael Grinich, Adam Simon, Matthias Wagner, Joan Rodriguez, Zach Yadegari, Andy Markoff
The episode covers Anthropic's conflict with the Department of War over AI usage restrictions, leading to government termination of their contract and potential supply chain risk designation. The show also features discussions on tariff legal challenges, Netflix's strategic positioning in media consolidation, and multiple startup funding announcements.
- Private AI companies face inevitable government pressure when building potentially powerful technologies, as democratic governments won't cede control over national security tools to unelected corporate executives
- The Supreme Court's tariff ruling creates a massive refund liability for the government while forcing Congress to act on trade policy within 150 days
- Netflix's failed Warner Bros acquisition attempt may have been strategically beneficial, forcing competitors to overpay while Netflix received a $2.8B termination fee
- AI-powered infrastructure tools are enabling rapid enterprise adoption by automating complex integration processes that previously required extensive manual work
- The legal industry is being transformed by AI through document analysis and case preparation, but human oversight remains critical due to hallucination risks
"You may not be interested in politics, but politics has an interest in you. What is politics? War by other means."
"We are a private company. We can choose to sell or not sell whatever we want. There are other providers."
"Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives?"
"What makes you good is what you do. What makes you great is what you don't do."
"If AI is as powerful as people say it's going to be, then there are going to be real world reactions to that."
You're watching TVPN. Today is Monday, March 2, 2026. We are live from the TVPN Ultradome, the temple of technology, the fortress of
0:00
finance, the capital of capital.
0:08
Let me tell you about ramp.com. Time is money. Save both. Easy-to-use corporate cards, bill payments, accounting, and a whole lot more, all in one place. It was a massive weekend. So much news. We are very fortunate to be joined by Ben Thompson at noon. Let's pull up the Linear lineup and show you the run of show today. Linear, of course, is the system for modern software development. 70% of enterprise workspaces on Linear are using agents. We've got Ben Thompson, James Beshara, and John Quinn's coming back in person again. We're very excited to be joined by him. We'll be talking about tariffs, a monster lightning round with five different guests joining. We got some acquisition news, we got some funding news, we got some takes on tech and AI and media. We're going all over the place. It's going to be a fun, fun show. But we missed you. We missed you on Friday. We were traveling. We went to Montana.
0:10
Terrible day to be out.
0:55
Terrible day to be out because it
0:56
was. Every single time we've had an off day, it ended up being a massive news day. So, lesson learned: never take a day off.
0:57
Yes, never take a day off. Truly, what an absolutely crazy weekend. Of course there's the war with Iran. The big news in tech was the US halts the use of Anthropic AI after tension over guardrails. So this is in the Wall Street Journal. The federal government will stop working with artificial intelligence company Anthropic, President Trump said, marking a dramatic escalation of the government's clash with the company over how its technology can be used by the Pentagon. Quote: I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. We don't need it, we don't want it, and we will not do business with them again, Trump said Friday in a social media post. The Defense Department and other agencies using Anthropic's Claude models will have a six month phase out period, the president said, adding that there would be civil and criminal consequences if the company isn't helpful during the transition. Six months to switch from one LLM to another feels like a long time. But I guess a lot of this has to do with, like, FedRAMP and actually getting.
1:04
But this is a lot more than, you know, switching to a new model to run deep research reports. Yep. This involves classified systems.
2:12
Sure.
2:19
The context that people didn't have last week was that the United States was headed to war, right? And so even having that context, I feel like, is pretty important. It sort of explains the 5pm deadline urgency. Anthropic had taken issue with how their products were used in the Maduro raid, there's a new conflict that's unfolding, and so that makes the aggressive timeline make a lot more sense. It also makes the six month phase out make more sense, because national security is on the line. This morning, Scott Bessent said: at the direction of the President, the US Treasury is terminating all use of Anthropic products, including the use of Claude within our department. The American people deserve confidence that every tool in the government serves the public interest. And under President Trump, no private company will ever dictate the terms of our national security.
2:20
Yeah.
3:16
The U.S. federal housing entities, Fannie Mae and Freddie Mac, are also terminating the use of Anthropic products, which was announced this morning.
3:17
Yeah. Which I think goes in line with the original direction. Trump said, I am directing every federal agency in the United States government to immediately cease all use of Anthropic's technology. So you would expect to see these statements come out from sort of every different federal agency as they get their transition plan together and figure out, you know, what the requirements are for their particular agency. Because I imagine some agencies aren't operating in classified environments; it's going to be much easier for them to onboard to a Gemini or an OpenAI or a Grok very quickly. For some of them, it's going to be a longer plan, but they're all getting on board. And there's been a big debate over how Dario has handled this. Where is he in the right? Where is he in the wrong? Where has the government potentially overstepped? Have they been too aggressive, or are they doing everything appropriately? Everyone is weighing in, and we're going to take you on a whirlwind tour of everyone's opinions, share some extra context, and try to dig into what's actually at stake, what's actually going on. In many ways, Ben Thompson does a great job sort of painting the broadest picture: like, what if this is really nuclear-level technology? What should we expect in that scenario? And then there's the more minor side, which is, you know, you're talking about a $200 million contract for a company that does 10 billion in ARR. This is 2% of revenue; in many ways, it's a bump in the road. And so I think a lot of people will be squaring: how serious is this for Anthropic? What does this mean for the other foundation model companies? What does this mean for the future of the relationship between tech and Washington, D.C.? But there's a lot more context.
3:26
So
5:09
the way I processed this was interesting, because I was very. I wasn't fully offline, but I was not surrounded by tech people over the weekend for the most part. And so I was following it and sort of wrestling with some of the same questions that people were wrestling with online. The big one was just: how should a private company interface with the government? Like, I am an American, I've run businesses, I've never actually sold anything to the government, but hypothetically, I could imagine the government coming and wanting to buy, I don't know, ads on TVPN, or Lucy products, or any other consumer packaged goods product that I've made. And my assumption is that the private companies should have very little say in how the government uses those products. And I was trying to zoom out and think: AI is so complicated because it could be superintelligence, could be autocomplete, could be coding help, could be knowledge retrieval. There are a lot of different things that AI means. And in some scenarios it's, like, super critical, really complex. And in other ways it's just a product, it's just a service, like an Excel sheet, like a Microsoft Windows installation, like a car. And so I was thinking: if I was the CEO of Ford, and I make Mustangs and Ford Explorers and F-150s, and the government comes to me and asks to buy some cars, I should probably treat them like any other customer. I probably shouldn't say, no, no, no, I don't approve of this particular government, of what the government's doing, so I'm just not going to sell you any Mustangs to drive around on the military bases, because I don't like the military. But then if they ask me, hey, we love the Ford Mustang, we love the F-150, we love the Explorer, but we're going to war and we want you to put bulletproof glass on there, and armor. That seems like a different discussion. That seems like I might need to set up a different manufacturing line. I might need a different assembly line.
Like the car's going to be heavier and if I put bulletproof plating on all the cars, well, like a lot of families are going to be like, I don't want to arm.
5:13
It's going to hurt my business.
7:12
Yeah, it's going to hurt my business. Exactly. And so that negative externality probably needs to be internalized by the government who's asking for that particular contract. And there's actually a history of this. Like, the Humvee, of course. The Hummer brand is owned by General Motors, and that brand has separated, and now most military vehicles are made by defense contractors. But there is some bleed over, and there are times when private companies do dual sourcing or dual-use technologies. But all of that is just, like, a discussion. And that cost should be part of a new contract, effectively, in my case. And this was loosely what was happening. But yeah.
7:13
And Dario in the CBS interview, quote, we are a private company. We can choose to sell or not sell whatever we want. There are other providers.
7:54
Yes.
8:02
Which feels like.
8:02
Yes. Like, I'm dipping out of it. Now, it is weird because, at the same time, and we'll get to the actual CBS interview, but he said Anthropic has been one of the most proactive AI companies in working with the US government. We were the first to deploy models on classified clouds and the first to build custom models for national security. Which is odd, because I feel like this was predictable from a lot of the writing that has gone on in the AI community broadly, like, what happens at the edge. And so it was sort of predictable that you would get to this question.
8:04
Yeah, this was the moment he had been waiting for in many ways.
8:39
And so it's weird that you would be able to predict that this would happen, that there would be this question of who gets to decide how the technology's used, and you wouldn't just be like, well, I know how it's going to play out, so I'm not even going to go into the lion's den, because I don't want to be in that scenario. Instead, it was like, we're leaning in with the government. We're deploying classified clouds, training custom models, but we still want authority over the final, last sticking point on how these models are deployed and what they're used for. And that feels a little odd. In the Ford example, if I sell them a Ford F-150 and they say, hey, we're going to take it to Iraq and go do a military mission, I'm going to be like, look, it's not ready for that, it's not armored, you shouldn't do that. But if they do it, then it's kind of on them. I should be clear about the capabilities of the vehicle and how bad it would be in that situation. But it's on them to go retrofit it, figure out what's legal, what's most valuable to their strategy, to their mission, what's aligned. Maybe they'll use it just to drive around the base. Maybe they won't actually take it out on tours of duty, based on what they know about the capabilities of the vehicle. And so I thought it was totally reasonable for Dario to say that Anthropic models, in his view, are not capable enough to be deployed in certain Department of War contexts. Now, it's bad salesmanship. Most salespeople would just be like, yeah, everything's great, you can use it for anything. They overpromise and then underdeliver. He's doing the opposite. But it's certainly responsible, if that's his true belief, if he believes that these models are not good for a particular use case. Telling your customer, hey, it's just not ready for that, you're just going to have a bad time, it's not going to work. That's a fine thing to communicate as the CEO of a company who's selling a product.
But at the same time, I still think the government has the freedom to assess the efficacy of those models, which are changing in capability rapidly. So he's saying, right now it's not good for X, Y, or Z. Well, what about in two months? It might be better. And then I think the government should be able to determine when and where they're effective. Now, they can't break the law. And Congress, and the American people by extension, are free to create new laws to restrict or encourage the use of technology in all sorts of ways. And that's the way America works. That's the American project. But it's not unreasonable to share the capabilities of your product with the government, which I think is totally fine. So there were two main sticking points that they went back and forth on: no mass domestic surveillance, and no fully autonomous lethal weapons. And there's been a question as to why OpenAI was allowed to include that language in their contract and say, hey, we don't think our technology is ready for that either, let's do a deal that says that. And people are like, oh, what's different here? Why could OpenAI and not Anthropic?
8:43
Well, here's the thing, though. So we know that anthropic took issue with the way that Claude was used in Venezuela. And the Department of War would have known that, hey, we're going to war, right? You can imagine that Anthropic, a private company, does not know that. And so they have this deadline.
11:33
There's information.
11:49
Yeah, this information asymmetry. They have this deadline. The Department of War knows that they're going to war. They're like, we need reliable AI systems for this conflict. We now know, the president said this morning, that the war is going to stretch four to five weeks, right? I think on Friday we all assumed that it was going to be, you know, in and out super quickly. So the timeline is extending, and the Department of War is sitting there being like, we need to know that the provider of these AI systems is going to be reliable. Just a little bit ago, they took issue with it, right? Can we count on them? They start this kind of renegotiation process, right, to try to build up confidence that, hey, we can rely on these systems in an active conflict, in a conflict that feels already much more serious and will have much greater implications than the Venezuela conflict, right? And so Anthropic is looking at this in a different way, and in some ways it really felt like they were kind of not respecting the process, or even the deadline, right? So Emil Michael came out Friday night and said: it was 5:13, 13 minutes past the deadline, I'm trying to get in touch with Anthropic, I try to get on the phone with Dario, Dario says he's in a meeting. And I feel like in that situation, if I'm the Department of War and I'm about to lead the country into war, we can debate on whether or not the war is justified, should we go. But the Department of War is sitting there being like, you won't even jump on the phone. You're telling me there's a meeting that you're in that's more important than this. And that just screams to me: hey, we can't count on this provider. We need to take drastic action now. This whole supply chain risk designation, we'll get into that later. That's a whole other thing.
But I can see why the Department of War came out of last week feeling like, hey, we cannot rely on this provider. We need alternative solutions.
11:51
Yeah, yeah.
14:00
If I'm shipping cars and I'm like, oh, I actually disagree with the latest decision, I'm not going to put the cars on the transport. Like, that's an odd scenario to be in. There's also this question: a lot of people were really keen on boiling down the terms to these two, like, buzzwordy lines. And Palmer Luckey did a great job explaining how complex these terms are. What is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive? And that's where, like, the idea of deals that stick comes in. Basically, you can have the same exact contract, line items, or terms of a signed agreement with two different people, and it can be a wildly different experience. Most entrepreneurs have felt this, because they were like: yeah, I had a handshake deal with one VC, it was 20% and a board seat, and I had another deal with another VC, 20% and a board seat. And the one VC was, like, suing me and threatening me the entire time, and the other person was very flexible and clearly very aligned. And so building up a relationship that shows there's some trust and reliability, that when the hard decisions come, they will be made in a legal, logical, you know, consistent-with-American-values way, is, I think, what you need to put forward if you want to work with the government effectively. If you have a good working relationship with someone, it's much easier to give on specific terms that will need to be cooperatively interrogated over time. And so Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. And the joke was that the Department of War was probably just asking basic knowledge retrieval questions, like: who is Nicolas Maduro? But I don't know how much of a joke that is. And I also.
I don't know how bad of a thing that is. I actually think. Yeah, Tyler, I was just on that,
14:00
on the context of Venezuela, like, specifically, like, what is actually reported is that after an Anthropic employee inquired with Palantir about Claude's role in the raid. Yeah, Palantir senior executive notified the Pentagon.
16:00
Yeah.
16:10
So I think it is, like, kind
16:10
of blowing it out of proportion to say that, like, Anthropic is against using Claude in Venezuela. Right. It's an employee.
16:12
It's not an executive. There was an article about that too, though.
16:19
Maybe it's like Dario telling an employee to go check on that. But, like, we don't know. It'd be like a random employee. So I think it's probably unfair to say that Anthropic as a whole is like, we are firmly against Claude being used.
16:21
What happened during the Maduro raid, we don't even know. And of course it's classified, so I don't know if we will ever know. Because, like, should we know? If it's an important capability, you don't necessarily want that to be public knowledge that the adversary is then instantly aware of. And so I was thinking back to that viral interaction between Ted Cruz and Tucker Carlson, where Tucker asks Ted Cruz, what's the population of Iran? And Ted Cruz doesn't know. And it was framed as: well, how can he possibly have a reasonable take on Iran if he doesn't even know the population? And that's somewhat fair. You could go either way on that. But I just think LLMs are good for that type of thing. What is reasonable is to, you know, expect civil servants, elected officials, military officials to be knowledgeable about the countries that they are operating in. And LLMs can help with that. And so I feel like that's just a good thing. If you just zoom out and ask: do we want a more knowledgeable and educated government workforce across everything that they do? It seems like, absolutely, yes. And so I just think that that's something that is maybe lost as people go into more of the sci-fi, more of the frontier stuff, where there isn't a lot of evidence that's happening yet. And on the supply chain risk, Ben Thompson, who's coming on at noon, makes a really strong argument for why government pressure like this is actually reasonable in this situation. He takes it a lot further, plays it out, and lays out a scenario that seems somewhat inevitable. But what I'm still wrestling with is just how real the supply chain risk designation is. Many reports are treating the supply chain risk label as, like, an established fact.
16:34
Yeah. Which, all it is is a tweet from Hegseth.
18:23
It's a tweet from Hegseth right now. Dario went on CBS and said that he has not received a letter, that there's no definitive ruling yet. Kalshi has the odds that this actually happens at 42%, and by April 1st, so a full month for the DOD to actually roll this out. And then there's other nuance in what the law says. There was a perception that this was going to kill Anthropic, because if Nvidia has a government contract, then they can't do any deals with Anthropic whatsoever. And that's not true. Apparently the supply chain risk is specifically: if you are a company and you're working on a government contract, you would not be able to use anything that's labeled as a supply chain risk on that contract. But you could use that product in a different piece of your business. And so it's still dramatic. I think Dario said it was unprecedented. It's only been used for foreign companies. Kaspersky Labs was a Russian cybersecurity company that was deemed to be a supply chain threat. Huawei is a supply chain risk because of the 5G towers that could potentially have backdoors somehow.
18:26
DJI still isn't. It's crazy that DJI isn't.
19:33
And I think that a lot of people would be very upset if Anthropic got a supply chain risk designation before DJI based on just what we talked about last week, where DJI was found to have a whole bunch of backdoors on robot vacuum cleaners and whatnot. So lots of nuance there. But we'll see where the supply chain risk discussion actually goes. It feels like the pressure's on and there's probably more negotiations happening as we speak. And so we'll be following the story.
19:36
Yeah, Emil Michael was going through the timeline. He said: today at 9:04pm, no response yet to my calls or messages to Dario. Today at 8:25, Anthropic writes, we have not received direct communication from the Department of War. Of course, Emil Michael is the undersecretary of war. Today at 5:14, the Secretary of War tweets the supply chain risk designation. Today I called Dario's business partner at 5:02, asking to speak to Dario because he hasn't gotten back to me. She is typing while we speak and likely has lawyers in the room, with no notification to me. That's a guess. I called Dario at 5:01. No answer. I messaged, asking to talk as well. And anyways, he's just arguing that they're not negotiating in good faith.
20:05
Yeah. Let me continue. But first, let me tell you about Figma. Ship the best version, not the first one, with Figma. Introducing Claude Code to Figma: explore more options, push ideas further. And let me also tell you about Cognition. They're the makers of Devin, the AI software engineer. Crush your backlog with your personal AI engineering team. So, speaking of Dario on CBS, he did unpack some more of his logic, which clearly resonated with some people. There were a lot of supportive posts, there were a lot of, you know, anti posts. But it caused a discussion. I was left unsatisfied with his answer on one question. He was basically arguing that LLMs, as a class of technology, hallucinate and should not be used for autonomous weapons, which is clearly a commentary on using AI at the Department of War broadly. But I thought it would have been much stronger communication for him to say: hey, look, we're Anthropic. We've built a system that's specifically good at answering questions, being friendly and helpful, writing code. Our system is awesome at that. But we don't make a product that we'd recommend using for autonomous weapons. And it's tricky to try and twist arms here because he's in a leadership position, to act as, like, the steward. He is an expert in LLM capabilities, but he's not necessarily an expert in, you know, DoD capabilities. And so it was odd to hear him sort of painting with a broad brush. He clearly believes, which is fair, it's his belief, but he clearly believes that the Department of War should not be using AI broadly. And then he was trying to use his contract as a way to sort of enforce that, because he has that leadership position with the deepest integration into classified systems. So I thought that was just sort of a missed comms opportunity there.
And there's also been some mistaken commentary floating around that America does not have laws that prevent mass domestic surveillance, which I thought was really interesting to hear. We do. We have the Fourth Amendment, which reads, literally: the right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures shall not be violated. I think people maybe forgot about that. But there is obviously a lot of nuance. Like, if information is public, does collecting it count as surveillance? Does the IRS count as surveillance? Do automated traffic cameras count as surveillance? There are a lot of things where surveillance is broadly popular, and other things where it's massively unpopular. And of course, it gets into the actual definitions, 20 lines deep, to understand what happens in the courts. There was a case recently of the government using a drone to surveil protests, and it was held up in court as acceptable, but the court gave notice that going forward, this should not be used and that the laws need to change. The judge was like, this is technically legal, but it's not in the spirit, and so we need to revisit this as a country. And that's a lot of what's coming away from this. There's a view of Dario as sort of making this last stand, which, in the best case, actually kicks it back to the American people. Because the whole debate right now is: is Dario, like, the god-king, corporate emperor of this private company that he has control over, where you don't get to vote on what he does, versus democracy, America, government, right? And the good case is probably that, you know, he makes this stink and his deal sort of falls apart, but then America responds and the populace votes for what they think responsible use of artificial intelligence technology broadly is.
And that would be something that I would certainly stand by as a fan of American democracy. Let me tell you about Okta. Okta helps you assign every AI agent a trusted identity, so you get the power of AI without the risk. Secure any agent with Okta. And let me also tell you about Lambda. Lambda is the superintelligence cloud, building AI supercomputers for training and inference that scale from one GPU to hundreds of thousands. Let's go back to the timeline. We have Ben Thompson joining us in about 30 minutes. There are other reactions and other breakdowns. We can actually kick off with this breakdown of Ben Thompson's piece, because I think danirldanb summed it up pretty well. Do you wanna.
20:52
You can go for it.
25:31
I'll take a crack at it. Ben Thompson, as always, lays out the reality more clearly than I could have, despite my attempts. By Dario's own words, he's building something akin to nukes. He's simultaneously challenging the US government's authority to decide how to wield said power. As much as I like Claude, and as much as I dislike Hegseth's extralegal might-makes-right maneuvering, I will ask you again: what did you expect? Vibes? Essays? This is the reality that all too many of my EA followers have been proclaiming for years now. They're seemingly upset that this reality has come to bear. And there's this interesting note that has been going around that one of Dario's favorite books is The Making of the Atomic Bomb. It tells the story of the scientists that built the atom bomb, and then eventually that technology was nationalized. And he apparently gives this book out to Anthropic employees and has sort of seen it as, like, a roadmap for what might happen with AI. And I was struggling with it, because I was like, is it a cautionary tale? We haven't had nuclear war in 70 years. The outcome seemed pretty good. Maybe it's a controversial take, but I feel like we built the nuclear bomb, which, like, probably was not the best technology. Pretty dangerous, pretty risky. I don't like the idea of nuclear war, but the system that we developed to prevent nuclear war has been successful, knock on wood. It's been successful my entire life and my parents' lives. The bombs haven't fallen since the 40s. And so this idea of the government having authority over something that is as powerful as nukes, I feel like, why fix it if it ain't broke? I don't know.
25:31
Would you prefer a different scenario, where you have a bunch of private companies that have nukes and there's this constant ongoing.
27:09
It seems crazy to me, I don't know. Debate? Anyone want to defend McNukes?
27:16
Well, no, no. I think it's kind of this like
27:19
weird contrast because like, like basically until like last week, yeah, Dario has been
27:21
like the AI CEO that's been like,
27:25
we need government regulation. He said this again and again, on whatever. But then it's like, okay, how do you square that with him saying, we're going to take this stand against the DoD? Like, it seems kind of like it
27:27
is a little odd. Totally.
27:38
It's in contrast somehow, right?
27:39
Yeah, yeah. It's like, I don't know, there's just a much better way to handle it, which is, you know, put up billboards, I don't know, like, fund a PAC, like, do more stuff to actually make the law happen.
27:41
Yeah. And the way that I was personally processing it: I saw that the CBS interview had happened. This was Friday night, right? I went to the Paramount app to try to find the interview. Couldn't find it.
27:54
I went to the RSS feed and I couldn't find it either. It's on YouTube, and it has 1.3 million views.
28:07
Yeah. So it went out over the weekend, and then, almost in the same session, I'm seeing that we are now at war as a country. And so all the kind of blowback against OpenAI, I was processing that of, like: this technology is critical, the government clearly needs it, and now we want the labs leaning into working with the Department of War at this critical moment in time. Continue on this post.
28:10
Yeah, one last thing on the nuclear weapons thing. It is very interesting to see the actual structure of the nuclear weapons industry, because I think people don't realize where that industry wound up. Yes, it got nationalized, but there's actually a ton of private companies that work on nuclear weapons, which is crazy to say. But basically, the IP is owned by the Department of Energy. The warheads are manufactured at facilities that are owned by the Department of Energy, by the government, but they hire contractors from private companies to actually operate those facilities, and then they answer to the government directly. These are companies like Bechtel, BWX Technologies, Honeywell, and Battelle. And then in terms of actually building the missiles, those are built by Lockheed Martin, Northrop Grumman, Boeing, and General Dynamics. They build the missiles that don't have the warheads on them, and then they sell them to the U.S. government. And so it wound up in this, you know, hybrid public-private partnership. And I don't know, maybe this is, like, left-curving it, but it feels like it's good. It feels like it worked out. It feels like the nuclear weapons thing is the correct formulation. And I don't know that I would be like, yes, Boeing needs nukes, let's give Boeing nukes, that's great. If I have a problem with how nukes are rolled out, I'll buy shares in Boeing and sue them and join the board and try and get the CEO fired if he fires off nukes. Like, that feels weird.
28:43
Continue. Continue with this. Continue with this.
30:12
Okay, yeah, we'll close even now. Even now, I hear many of you say something akin to, if this is what it comes to, I'd prefer King Dario to King Hegseth. Listen to yourselves. This is a declaration of war. Given this, of course Hegseth is taking the action he is now. You thought I was joking when I referred to this situation as a Thucydides trap. Anthropic is a rising power by your own belief system. While I may share your preference in the abstract, I disdain your faux surprise that this is the resulting trajectory. And if the surprise is genuine, I ask you to dig deeper and reconsider the actual consequences of your worldview about what it means for a private company to build ASI.
30:15
Heading over to Palmer. He says this gets to the core of the issue more than any debate about specific terms. Emil is sharing: prior to their new constitution, Anthropic had an old one they desperately tried to delete from the Internet. Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort. Palmer says this gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders or corporate executives? Seemingly innocuous terms from the latter, like you cannot target innocent civilians, are actually moral minefields that leverage differences of cultural tradition into massive control. Who is a civilian and not? What makes them innocent or not? What does it mean for them to be a target versus collateral damage? Existing policy and law has very clear answers for these questions. But unelected corporations managing profits and PR will often have a very different answer. Imagine if a missile company tried to enforce the above policy that their product cannot be used to target innocent civilians, that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value judgment problems I list above, you also have to account for questions like: what level of information, classified and otherwise, does a corporation receive that would allow them to make these determinations? How much leverage would they have to demand more? What if an elected president merely threatens a dictator with using our weapons in a certain way, a la madman theory? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of these determinations vary if the current corporate executive happens to like the dictator or dislike the president?
At what level of confidence does the cutoff trigger, both in writing and in reality? The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say, but they will have carve-outs to operate with autonomous systems for defensive use. But you immediately get to the same issue and more. What is autonomous? What is defensive? What about defending an asset during an offensive action? Or parking a carrier group off the coast of a nation that considers us to be offensive? At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corporates and their shadow advisors. I still believe, and that is why "bro, just agree the AI won't be involved in autonomous weapons or mass surveillance. Why can't you agree? It is so simple. Please, bro" is an untenable position that the United States cannot possibly accept. And Emil Michael had said that Anthropic wanted to block searching over public databases as well. Like you might want to search over LinkedIn to look at recruiting, right? So it's like these sort of blanket bans are going to make the product, like, functionally unusable.
30:52
Yeah, it's not really like a blanket ban. It's more just like the discretion lives with the private company. And so you always have that ability to change the terms of the use, which is. It's just tricky. It's just tricky. Well, people are, at least some people are having fun with it. Roman helmet guy says, hi, I'm a private citizen who developed a superweapon, but potentially a thousand times more powerful than nukes. And now I'm selling it to the government, but I get to choose who they fire it at and how everyone. And how everyone, please respect my decision. People are all over the place with this.
33:51
Well, there was also. David Sacks had shared a clip alongside Beth. We can pull up Marc Andreessen talking about his experience with the Biden administration.
34:27
People are going really, really hard.
34:43
Can we pull this up?
34:46
Iran is bombing AWS data centers. Lots of, lots of stuff going on.
34:47
I just dropped you guys a link.
34:57
Keith Rabois said, imagine Apple sold computers or iPads to the DOD and tried to tell the Pentagon what missions could be planned on their computers. A lot of people are upset about this.
34:59
Meetings in D.C. in May where we, we talked to them about this and the meetings were absolutely horrifying. And we came out basically deciding we had to endorse Trump and then add
35:10
so little color to absolutely horrifying. What, what did you hear in those meetings?
35:21
They said, look, AI is one of these technologies that the government is going to completely control. This is not going to be a startup thing. They actually said flat out to us, don't do AI startups. Like, don't fund AI startups. That's not something that we're going to allow to happen. They're not going to be allowed to exist. There's no point. They basically said, AI is going to be a game of two or three big companies working closely with the government, and we're going to basically wrap them in a, you know, I'm paraphrasing, but we're going to basically wrap them in a government cocoon. We're going to protect them from competition, we're going to control them and we're going to dictate what they do. And then I said, well, I don't understand how you're going to lock this down so much, because the math for AI is out there and it's being taught everywhere. And they literally said, well, during the Cold War, we classified entire areas of physics and took them out of the research community, and entire branches of physics basically went dark and didn't proceed. And that if we decide we need to, we're going to do the same thing to the math underneath AI. Wow. And I said, I've just learned two very important things, because I wasn't aware of the former, and I wasn't aware that you were even conceiving of doing it to the latter. And so they basically just said, yeah, look, we're going to take total control of the entire thing, and just don't.
35:25
What was their.
36:40
And Marc, what was the steelman?
36:40
Steelman it for the listener, like what was their argument?
36:42
Why were they.
36:44
Well, so this gets into this whole, like, all these debates around AI safety, AI policy. So there's sort of several dimensions on it, and I'll do my best to steelman it. So one is just, to the extent that this stuff is relevant to the military, which it is, if you draw an analogy between AI and autonomous weapons being, like, the new thing that's going to determine who wins and loses wars, then you draw an analogy to the Cold War, where that was nuclear energy, that was nuclear power and that was the atomic bomb. And you know, the steelman would be the federal government didn't let startups go out and build atomic bombs, right? You had, you know, the Manhattan Project, and everything was classified, and, at least according to them, they classified down to the level of actual mathematics, and they tightly controlled everything. And look, that determined a lot of the shape of the world, right? So that's part one. And then look, I think part two is there's the social control aspect to it, which is where the censorship stuff comes right back, which is the exact same dynamic we've had with social media censorship and how it's basically been weaponized, and how the government became entwined with social media censorship, which is one of the real scandals of the last decade and a real problem, like a real constitutional problem, that is happening at hyperspeed in AI. And you know, these are the same people who have been using social media censorship against their political enemies. These are the same people who have been doing debanking against their political enemies. And I think they want to use AI the same way.
And then look, I think the third is, I think this generation of Democrats, the ones in the White House under Biden, they became very anti capitalist and they wanted to go back to much more of a centralized, controlled, planned economy. And you saw that in many aspects of their policy. But I think that quite frankly, they think that the idea that the private sector plays an important role is not high up on their priority list. And they think generally companies are bad and capitalism is bad and entrepreneurs are bad. And they've said that a thousand different ways. And, you know, they demonize entrepreneurs as much as they can.
36:44
It's interesting. Canadian publication the Globe and Mail came out yesterday and says Canada needs nationalized public AI. And Toby, the greatest Canadian entrepreneur in history, says deranged drivel in response. But yeah, Elon also piled on to Sacks's take, which centered around a lot of those staffers allegedly going over to Anthropic.
38:42
It's interesting, we were talking about these alliances that happen. There's the anti-Netflix alliance, the anti-YouTube alliance. And there's a little bit of an odd alliance happening against Anthropic right now. Let's move on over to Netflix and Paramount because there's news in the bidding war. First, I'll tell you about Graphite, code review for the age of AI. Graphite helps teams on GitHub ship higher quality software faster. And I will also tell you about Railway. Railway is the all in one intelligent cloud provider. Use your favorite agent to deploy web apps, servers, databases, and more. Railway automatically takes care of scaling, monitoring and security.
39:13
We will come back to this story with none other than Ben Thompson in
39:52
20 minutes. In the Wall Street Journal, in the Exchange section this weekend, they have a full-bleed article: How David Ellison Finally Got What He Wanted. And I love the subhead. No, no, no, no, no, no, no, no. Okay, yes. He got 10 nos and then finally got it done.
39:59
Never give up.
40:21
Never, never give up. For six months, the son of one of the world's richest men kept hearing the same unfamiliar word. No. Even before he closed a deal to combine his company with a much bigger one, David Ellison was already plotting to do it again. Once his Skydance Media took control of Paramount, he turned his attention to a Hollywood icon, launching an audacious takeover bid for Warner Brothers Discovery that would give the Ellison family full control of a sprawling media empire. So he came in with an offer of $19 per share. Finally got it done at 31 a share. The final Paramount winning offer, $81 billion.
40:22
And again, as we are covering this live, every time that Paramount made an offer, they were very clear that it wasn't their best and final. So it makes sense that it kept getting ratcheted up, even though Netflix obviously played a pretty big role in it ultimately getting priced where it did. Sleep well says, let me get this straight. Paramount approaches Warner Brothers for acquisition. Netflix puts a higher offer for Warner Brothers. Paramount puts an even higher offer at 7x leverage. Netflix declines to match offer. Now Paramount and Warner Brothers will have to license all their content to Netflix to pay off all that debt. 3D chess. A lot of people were throwing around the Succession moment. Congratulations on saying the biggest number.
41:02
So Paramount will be footing the $2.8 billion breakup fee paid from Warner to
41:51
Netflix, which was paid Friday.
41:57
Oh, it was paid already. Yeah, yeah. Netflix stock is up. Paramount stock's also up. David Zaslav has to be one of the greatest dealmakers in history now.
42:00
Got the absolute maximum price. Pfeiffer says, so somehow Netflix was able to force one of its rivals to overpay for another one of its rivals, putting them into a messy long process of unification. And got paid 2.8 billion for it.
42:11
Yeah. So I feel like the idea of Warner Brothers licensing content to Netflix to pay off the debt is one possibility, but there are other streamers that they could license to. And so I'm not entirely sure. Like, you could just put The Dark Knight on Apple TV. You could license it to Apple, you could license it to Prime Video. There's a whole bunch of different buyers of that content. So it's not like, oh, now because of the economics, Netflix gets the whole library for sure at a good price, licensed forever. And also Netflix had other, like, there were more synergies than just, okay, we're going to put Batman on Netflix. There were other things that they were going to be able to do, new series, new spin-offs. Like, they would have more creative control, more creative direction. So I'm not sure if that's 100% right. But it does seem like a good outcome for Netflix, and one that I did not predict. I really thought Netflix was going to get this done, but it didn't happen. So what does Peter Kafka have to say?
42:25
Peter Kafka at Business Insider had some reporting. They said Zaslav apparently said the deal may not close; if it doesn't close, we get 7 billion and we get back to work. Also said if Warner Brothers is going to survive, they needed to be bigger and we needed to be global.
43:26
Yeah, yeah. I mean, there's a lot of opportunity here. Netflix confirms that they received the $2.8 billion termination fee. Somehow Netflix was able to force one of its rivals to overpay for another one of its rivals, putting them into a messy long process of unification. And got paid 2.8 billion for it. Okay, so we have a new leak. A new leak. The OpenAI device has been spotted in the wild on none other than Joe Gebbia's head, which is amazing. If we can pull this image up. Yeah, either the video or zoom in on the image. It looks like what we saw in that leaked preview of that Super Bowl commercial. There's a device on the table that looks sort of like a hockey puck. I am so excited for what this is. I love new hardware, even when it's still languishing in the early adopter world. I'm a big Apple Vision Pro guy, as all of you know, and I will be very, very eager to daily drive a new hardware product for a little bit. Let me tell you about TurboPuffer, serverless vector and full text search built from first principles on object storage. Fast, 10x cheaper and extremely scalable. And let me also tell you about the New York Stock Exchange. Want to change the world? Raise capital at the New York Stock Exchange. Still gearing up for a banger year at the New York Stock Exchange.
43:43
You guys think this was leaked on purpose? Because if you watch the video, the second video, it looks like the framing of it looks very much like found footage. Like he.
45:07
Yeah.
45:16
At the end of the video, he pans down and it's like, oh, I
45:16
gotta hide the camera.
45:19
Yeah.
45:20
Like, it looks kind of planted.
45:20
It's possible it would be a cool rollout to like tease little leaks here and there, build a little bit of attention without doing like a Look at me campaign we're doing. You know, you throw leaked in front of something and it just gets more attention. So it's totally possible. This is some more 4D chess. Zach is not saying, oh, look at the.
45:22
There's a new device right there.
45:41
He's like, not, you know, saying it directly.
45:42
Okay, I like that.
45:46
Yeah, I can see it.
45:47
Milkman says, just shipped a feature to a client 30 minutes after he asked for it. Big mistake.
45:47
Just shipping too fast. That's the new timeline for every project.
45:53
What is going on over at Semianalysis?
45:57
Avi said, asked my wife what she was reading to fall asleep. She said, SemiAnalysis. Based or not gonna make it? Dylan says, that's our wife now.
46:00
Dylan was on an absolute tear this weekend, posting unhealthy amounts.
46:11
Paula says, when your friends go to the Brazilian steakhouse without you, that's FOMO de Chão.
46:15
It's true. I had some FOMO de Chão, and you banned me from going there, unfortunately. I was so excited when we were in San Francisco to visit the Bain Capital-backed steakhouse, but we had to go to some other place.
46:20
It was good. Getting into the Block news, which happened on Thursday and we didn't get to cover.
46:32
That happened like six months ago, right?
46:40
Yeah, that's six months.
46:41
Okay, six months ago seems about right.
46:42
AGI age.
46:44
Yep.
46:45
Daniel says what we've really learned from the last five years is that Jack Dorsey runs extremely bloated companies.
46:46
There was some other news on this with Block. Someone posted the. Oh, it was deleted. Interesting. Well, we have it saved. We won't fully dox the account, but I don't know if this is real because it's been deleted, so it might be fake. I don't know. But: most of you have heard about Block's 40% layoffs by now, but the numbers are even worse. Engineering was hit harder. We've lost close to 70% of our engineers. The company you once knew as a prolific open source software contributor no longer exists. And so I was wondering, they're laying off 40%, but how will those cuts be distributed? Because the AI job displacement narrative, that could be back office people that are processing manual workflows, or it could be software engineers, where now there's a smaller team that's getting more leverage out of AI tools and so you write more off. There's also just the world where you're a mature software company and you have lock-in and you're like, yeah, we actually don't need to ship that many more features. We have sowed for so long, it is time to reap. But I am still bloat-pilled. I still believe that this is somewhat
46:52
of a unique driven.
48:03
This is somewhat of a unique situation.
48:05
But it didn't stop the market from absolutely puking on Friday. Amex at one point was down something like 7%. MWT says, I'm fully on board with spiraling into a depressive episode over the rapidly approaching neo-feudalist breakdown of society. But I worked at Square in 2017 and my job had no tasks. I sat on the roof eating free snacks all day with a MacBook. Ben Carlson also calling this out. He says, maybe Block laying off a ton of employees is a sign that AI is going to destroy everything. Or maybe the stock is down 80% from the highs and they overhired and AI is a convenient excuse. So, yeah, I mean, we've called this out so many times over the last year as companies did rounds of layoffs and said it was because of AI-related efficiency. But again, it is oftentimes the best possible reason to give. Better than saying, we don't know what we're doing and we've been running with 4,000 too many people for a while now.
48:07
Yeah. At the same time, has the market continued to like it? Because the stock popped a bunch and it felt like that might cause a continuation?
49:16
Yeah, I mean it's stabilized. It's up 28% over the past five days.
49:26
And so it does feel like this could have some sort of contagion effect. A lot of other CEOs looking at this and saying, okay, well I'm at least a theoretical victim of the SaaS apocalypse. I need to do something, I'll do it. So we could see more layoffs from tech firms. It doesn't seem unreasonable, but at the
49:31
same time, the irony is that it's only Dorsey companies that have run these sorts of mass layoffs, right?
49:52
Yes, yes, yes.
49:59
I ran the numbers, and this was the largest RIF in S&P 500 history. Somebody in my comments was sharing Lehman Brothers, and Lehman Brothers was actually interesting because they went bankrupt. They were delisted the same day, and then the RIFs actually happened over time. But a lot of people were shifted around and transferred over to different jobs, and ultimately the company just ceased to be in the S&P 500. Buco has.
50:00
Oh, so it wasn't in the S&P 500 when the layoffs happened.
50:30
Delisted the same day.
50:32
That's a good point. Victory for Jordy.
50:33
Same day.
50:36
Never questioned him.
50:37
Buco says the cuts are mostly about XYZ, which is of course the ticker, being poorly run. Not really about AI. But most other small to medium cap tech is also poorly run. Expect many more cuts. Below, I tweeted that they only needed 60% of their company. That wasn't a random number. Pull up any fintech SaaS chart and you can see that employee count exploded. Demand exploded in 2020. But now these companies are way too bloated. I did not expect them to cut 40% at once. I think it's basically impossible to identify the right 40% in one go, so huge operational risk. But maybe better for morale than multiple cuts. Who knows? Unprecedented. We now have two examples of this happening with Jack, so it's easy to say he runs a bad, bloated business, but I have been vocal about this. Toast and Clover should not be anywhere near the scale they're at. Tidal, Afterpay. Come on. Pretty sure he threw a $70 million party for the team last year. I think it was 68 million for some off-site that they did. I also think it's a mistake to define this purely as a Jack issue. As I said, pull up the employee charts and the revenue charts. I'd say to pull up the earnings charts, but for many they are negative, which we all know. These companies are way too bloated and they are having their clocks cleaned by smaller, more nimble startups. They have to get lean to survive. I think the realistic average number is 20 to 25% for many of these companies, but there are plenty that could cut 40% too. I think this basically has nothing to do with AI, but there are some roles they can eliminate and some where they can increase scope. Let's call it 5%. So again, if that now-deleted post is real and 70% of the engineering team, at least on that person's team, were cut. But you don't know. That could have just been the open source focused team, right?
And that's just like a, hey, we don't have time to contribute to open source if our stock's down 80%. Yeah, yeah. Over to Own.
50:38
I do think that most CEOs will maybe look at the Block news and say, okay, I need to right-size the organization, I need to do some layoffs. But not all of them will be convinced that a 40% cut is the correct move. They might say, actually, we think that 20% here and then 5% there and then 10% there is just better for morale, because it's more clear who's still on the team.
52:34
Own says, I think using AI as cover for right-sizing your bloated org is pretty unhelpful, to be honest. This false data point will be cited by every anti-AI campaigner within the next 24 hours. This is something I've said. I've seen a number of viral Instagram reels from people saying that the AI-induced job loss is already happening at massive scale, and they're pulling up quotes from CEOs that conducted layoffs in 2025 as evidence, simply because the CEO said that they were getting efficiency out of AI.
53:00
Well, let me tell you about ElevenLabs. Build intelligent real-time conversational agents. Reimagine human-technology interaction with ElevenLabs. And let me also tell you about Console. Console builds AI agents that automate 70% of IT, HR, and finance support, giving employees instant resolution for access requests and password resets. TBPN simulator is here.
53:36
Let's do it.
53:56
TBPN simulator is here. You've been asking for it. There's a data center simulator, there's an insider trading simulator. There's a capybara simulator where you just do nothing and you just sit in the forest. But now you have TBPN simulator and we can play it here on the show. You start out outside of the TBPN Ultradome, and then you control a character who can walk inside of our studio. You see our bathrooms on the left, our couches on the right, our American flag up top. And once you get prepared to go into the actual studio, this is a real recreation of our studio. Here we go. TBPN is live now and you can experience the joy of being an in
53:56
person guest on TVPN on the show. In person.
54:33
This is a good way to prep.
54:37
It's a good way to prep.
54:38
It's a great way to prep.
54:39
You should put in 10, 20 hours in here for sure. Understand the layout?
54:40
Yes. It also.
54:45
I love the accuracy of it.
54:46
It's very accurate. And it's also accurate to the more recent. We recently changed the desk setup for where people sit and this reflects the new setup. So thank you to Ben on our team who put this together. It is fantastic and remarkably accurate.
54:48
Just a few hours, effectively one shot.
55:02
Incredible.
55:05
Incredible. I've never.
55:06
Yeah, I love the details. The tracks on the ground, everything. Just fantastic work. TBPN simulator will be available everywhere video games are made possible, AKA the Internet. What happened at Little Caesars? Little Caesars Arena had a malfunction tonight where their air horn was blaring for over five minutes straight.
55:08
Dirty.
55:27
Let's pull it up. Let's pull up this video.
55:28
I have not seen this video, but I.
55:30
This doesn't sound like a malfunction to me. This sounds like exactly what they should be doing.
55:33
Let's hear it.
55:37
Find out what's going on with this horn, George.
55:37
So guys, I'm here at the scorers' table and there was a complete malfunction here, electrical-wise.
55:40
Here at the school.
55:46
We see this gentleman here working frantically to try. Live production's hard, folks. Live production's tricky.
55:47
Ben, they're saying your. Your simulator is fake because it has no. No goal post.
55:55
Oh, no goal post. Gotta add that.
56:00
And no horse.
56:06
No horse. Okay, back to square one.
56:07
He said it's on the way.
56:11
It's on the way. Okay, that's V2. That will be a maybe. That should be DLC. You have to pay 50 bucks for that. If you get the All Access season pass, we'll include that. But that's definitely some DLC, getting the horse. Leonardo DiCaprio has been quietly funding the Los Angeles Public Library's Los Feliz branch, a facility located on the site of the actor's childhood home. That's sweet of him. Inside, the computer room, which is named the Leonardo DiCaprio Computer Center, features several signed posters of the actor from films he starred in, including Titanic and The Great Gatsby. The tribute-filled space has become a distinctive feature of the library, offering both technology access and a glimpse into the actor's career. And Brooks Otterlake says, I like that this is the entire article, and it seems to fully negate the quietly part of the headline. Quietly funding the branch. This is awesome. And I feel like if I'm a kid and I go to the library and I see Leonardo DiCaprio, that's going to inspire me. So I'm a fan of that.
56:12
This story went out last week. Do you think the posters are still there?
57:16
Probably. I'd say just double down.
57:21
I'd be worried about those posters.
57:23
Anyway. Let me tell you about Vanta. Automate compliance and security. Vanta is the leading AI trust management platform. And let me also tell you about Cisco, critical infrastructure for the AI era. Unlocks seamless real-time experiences and new value. And without further ado, we have Ben Thompson in the Restream waiting room from Stratechery. Welcome to the show, Ben. How are you doing?
57:24
I'm good. Hopefully I have the right microphone turned on this time.
57:45
You do, and it sounds fantastic. Thank you so much for joining on short notice. Thank you for writing Anthropic and Alignment. It is a fantastic piece that I think covers all of my questions, but I want to start with just, how did you process the weekend? How did you get to this particular place? And then, what is your key thesis with Anthropic and Alignment?
57:47
I mean, this is one of those ones. I don't know if it's good or bad that it came out sort of at the end of the week, so I had a lot of time to think about it. Ultimately I think it was good, because I'm not sure anyone had as explicitly made the point I did. And maybe it was bad because I feel there's a lot of, like, caveats that maybe in retrospect I should have put in the article that would have addressed a lot of the points that people are upset about. Yeah, basically, zooming out: this was not a normative article where I'm saying what's happening is good or bad. And that's really the one caveat I really wish I would have put on there. I mean, I'm out there being accused by, like, a Nilay Patel of the full-throated endorsement of fascism or something like that. And it's like, relax, okay? Can I get some credit for the last X number of years?
58:10
Basically,
59:00
there is a deep-rooted concern that I've had for a long time about, and I'm now hesitant to even use sort of EA as a term because it's kind of now politicized thanks to the events of the last week, but a failure to grapple with a world of guns is basically the long and short of it. And I actually think Eliezer has been the one guy who's been honest about this, where he wrote that Time article about potentially bombing data centers someday. And that's actually a point worth bringing up, which is, all this stuff is right now in the digital realm; with robotics and potential other applications, and it's obviously being used for military operations, it's crossing over into the physical realm. But if AI is as powerful as people say it's going to be, then there are going to be real world reactions to that. And if we're going to analogize it to nuclear weapons, as Dario Amodei has done repeatedly, you have to think through what would happen in a world where a private company developed nuclear weapons. What would the government's response be? And that's not to say that the government response in that case is good or bad, or does it follow sort of constitutional principles or whatever it might be. Obviously I want it to. On the surveillance point, I've been concerned about the application of computers to our surveillance laws for years. So many things in our society assumed a certain level of friction in doing things that computers already obviated, and AI is going to just do that on steroids. I do think we need new laws. I think all this stuff is correct. And I think the idea of AI being applied to these commercially purchased data sets, for example, is a huge problem that I don't want to happen. The concern I have is that if this technology is as powerful as it is on pace to be, unilaterally imposing restrictions, even if those restrictions are good, isn't just an issue as far as who rules us.
The democracy issue, that sort of Palmer Luckey, I think, very eloquently raised. It's inviting very bad outcomes for those asserting that in general. And I feel there's been a lack of awareness of this. That's why I brought up the Taiwan-China thing. This has been a frustration I've had with Anthropic generally. Amodei has been very outspoken in terms of opposing selling chips to China for, in a narrow aspect, very, very good reasons. My pushback has always been, what happens if we get super powerful AI and China doesn't? What are they going to do? Surely the optimal thing would be to just bomb TSMC out of existence, because suddenly that becomes optimal even with all the cost that that entails. And then what are we going to do? Like, we're entering this. Like, I don't like getting into political posts. It's not fun at all. I'm not having fun with this. It's not enjoyable, I can promise you this. And some people are like, well, you should have just made the post private. I'm like, no, I actually really want Anthropic and people associated with this to read this, because people have theorized for a while about what's going to happen as AI becomes more powerful, and now it's starting to happen for real. And I guess over the weekend, part of it was just I felt compelled to say this and girding myself to do so. And even then I still wasn't. I hadn't waded into this for a while, and it's no fun, but it
59:03
is what it is.
1:03:05
Can you unpack a little bit more of that tweet you posted where you did a find on the Dario article for Taiwan and saw that it wasn't mentioned?
1:03:06
Oh, I mean, I've just got. I've sort of griped about this in general. I think that.
1:03:16
So do you just think he should be talking about the Taiwan issue more deliberately? He should be messaging that? Like, why is it significant that he doesn't mention Taiwan?
1:03:20
Well, I think the position about not selling chips to China is a totally legitimate one. I understand the argument. I could make that argument if I needed to. I have advocated the opposite: that, number one, not only should we be selling chips to China a generation or two behind — which has always been sort of our standard practice with chips — we should also be allowing Chinese companies to fab with TSMC. That is a restriction that has come down now. These Huawei chips are somehow manufactured by TSMC — let's not look too closely at it — but we should explicitly be allowing it.
1:03:32
Okay.
1:04:10
And the reason for that is I think it is a safer equilibrium to have China dependent on Taiwan than to try to cut them off from Taiwan while we are dependent on Taiwan. Taiwan is 70 miles off the coast of China. It's not an ideal position in the world for us to have a dependency on it and China to not have a dependency on it.
1:04:11
Yeah.
1:04:35
So this is the problem: everything going forward has massive tradeoffs.
1:04:35
Yeah.
1:04:42
The implication of letting China fab with TSMC, or of letting them buy Nvidia chips, is that they gain these incredibly powerful AI capabilities that are driving this entire debate. That is, in a vacuum, not a good thing. But nothing's in a vacuum. Everything is a trade off. And in that specific area, I think it's just, repeatedly, again and again, being absolutist about the chip issue, when I am frustrated to not see any public comment about the — that's not quite fair. He has made comments about, oh yeah, that would slow down sort of the adoption of AI in the long run if Taiwan got bombed. And in my mind that's an insufficient consideration of the possibility of Taiwan getting bombed. Now, again, I'm biased in that regard. I lived there for nearly two decades. But the reason I brought it up in this context is, if AI is what it is, the people with guns are going to want to have a say — whether that be domestically, whether that be internationally. That might be in the context of the US government just taking it, trying to kill your company because they feel you're not cooperating. Or it might be in the context of China deciding it has to act because the US is becoming too powerful. And it's not a fun debate. I do think the nuclear angle is a good one. It has echoes of the proliferation question, of mutual assured destruction, all those sorts of things. And that's just going to be the reality of the debate going forward. And again, it's not very fun. But I think it's also irresponsible to sort of run away from it.
1:04:43
How much attention or what kind of factor do you think the information asymmetry between the Department of War and Anthropic played last week? It felt like, in hindsight, the Department of War knows they're headed into a major, what is now looking like a drawn out, conflict; Anthropic is sitting there thinking, hey, we have this arbitrary deadline, why do we need to renegotiate this now? And then, going off of Emil Michael's timeline, it sounds like they were still in the final hour trying to make a deal happen. And according to Emil, Dario was in a meeting and was busy and wasn't really respecting the deadline, which maybe he felt was kind of artificial. But in hindsight, it now looks like it was significant, because the Department of War was, you know, taking the country into a conflict and wanted to know, hey, can we lean on one of our AI partners?
1:06:26
I don't know. I mean, I think that seems pretty arbitrary to have cut. I mean, I'm hesitant to speculate. I don't know what was going on. I don't know the angles. And that's why I didn't sort of delve too deeply into it. And I also think some of the specifics, like this supply chain risk designation, are probably overbroad — and almost certainly the way it was stated in the tweet is definitely overbroad, if you actually go and read the statute. And again, this is where I wish I had sort of put more caveats to say, look, I'm not actually talking about all that stuff. I don't really care — I do care, but that's not the point of this article. The point of this article is there's all this talk about alignment. That's why I put that in the headline. And on one hand, alignment is aligning AI with humanity generally, but for the foreseeable future — and you could have a philosophical argument about the long term viability of nation states in the age of the Internet, much less the age of AI, and that certainly is, you know, a more pressing conversation than probably ever before — Anthropic exists in the context of the United States. And that's why I put that quote: you may not be interested in politics, but politics has an interest in you. What is politics? War by other means. You might not be interested in that; it is going to have an interest in you. And there is, like I said, a certain long standing frustration of not fully grappling with that fact, having dorm room theoretical arguments about AGI. You go back to that post over Christmas about, like, AGI in 100 years and no one having any jobs or being worthless or pointless or whatever, which included some implicit assumptions around property rights existing in 150 years as they exist today. News flash: if that happens, property rights as they exist today are going away. This is a whole philosophical argument.
That's why I started with the international law concept. All these rights, all these laws are subject to the agreement of those governed by them to follow them. And the final say is those who successfully inflict violence. And again, this isn't fun to think about. It's not pleasant. You would like to assume we operate in a world of laws, that everyone follows them and goes by them. But to the extent AI is as impactful and powerful as it is, the more these fundamental questions that we thought had been settled for hundreds of years, if not thousands of years, are going to be raised. And this is just the first of several episodes where I think that's going to happen.
1:07:28
I grew up in sort of the post Cold War era — no duck-and-cover — and didn't have a lot of fear of nuclear Armageddon. But Dario Amodei is a fan of this book, The Making of the Atomic Bomb. And it seemed like he sort of predicted that if AI becomes super powerful, the US might take a similar approach to the one they took with regulation of nuclear weapons. And as I was thinking about that, I feel sort of good about the way nuclear weapons are regulated. Like, I feel like we got the good ending, and we haven't had nuclear weapons dropped in 70 years. And it seems like things are going as well as they can there, considering that there's this tremendous, dangerous technology that exists, but it hasn't been deployed and it hasn't actually, you know, bombed anyone. But how do you think he's processing that book? How do you think we should be processing that idea of the government running the same playbook that they did with nuclear weapons?
1:10:19
It's pretty interesting. I mean, on one hand, just from sort of a physical perspective, dealing with weights and software is very different than dealing with fissionable material. Or I guess the super bombs are like, they're actual like fusion devices.
1:11:27
Right.
1:11:41
And that is trackable, it's interceptible. You know, when Iran, to take a pertinent example, is trying to build enrichment facilities — all of which makes the problem easier to solve.
1:11:42
Yeah.
1:11:56
So that's difference number one. Difference number two — and I really wish I had included this, but I cut it so the article would be tighter — there is a very interesting point in technological history, which was the early days of Intel. Bob Noyce made the decision that we will sell to the government, but we're not going to design chips for the government. The distinction there was: you had guaranteed orders, which was great, but the government would take your IP, and — in his mind the more important thing — there was limited volume. He foresaw, correctly, that this was going to be an increasingly capital intensive process — designing chips, fabbing them if you have the equipment, all of which is in the billions of dollars today and back then was in the tens of millions and hundreds of millions — so you need to find the largest possible market, which was the consumer slash business market. You design for that, and that will accelerate your improvement and your capabilities so much that you will end up having better devices than the government could have ever requested or made for itself.
1:11:57
Yeah.
1:13:03
That dynamic is at play on steroids with AI.
1:13:04
Yeah.
1:13:07
People — like, I was talking to someone who asked, why doesn't the government just get someone to make their own model? Because government contracts are, like, single digit billions. The amount that's going into capex, the cost of these models — we're talking hundreds of millions of dollars for the models, and hundreds of billions of dollars, approaching a trillion dollars a year, in capex. That is only sustainable and viable if you're selling to everyone. But that introduces entirely new dynamics. The government built nuclear: it started there, and it started with a lot of assumptions, because it was a government program. We are — necessarily, for economic reasons, because of all the upfront costs entailed — starting with private companies, of which the government is one of many customers. And that introduces the assumption that, well, it's a private company with private property rights and all those sorts of things — all of which I want to be true. Again, I don't like how this is going down at all. The point here is to say there's a good reason why it's not going down that way. And there needs to be cognizance that, even though it is a private company that is building the model — general purpose, and for very good reasons wanting to put restrictions on it; again, I think the surveillance one is a very powerful argument that I agree with — the problem is that you just need to be aware that, yes, the government is a small customer. The government is also the entity — again, not to be too blunt — with guns. Like, you know, why do I pay taxes? Because the law says to pay taxes? No, at the end of the day I pay taxes because, if you really want to distill it down, if I don't, someone with guns will come to my house and throw me in jail. Right? Like, we don't think about that. But at the end of the day, where do these assumptions and laws and rights flow from?
And as long as that is still the case, it needs to be a decision-making factor for these companies.
1:13:08
How do you think this plays out for Anthropic? It's such a small contract, but it's so important in the zeitgeist. There are a lot of people rallying around Anthropic because of this, and a lot of people pulling away from Anthropic because of this. It feels like there is a business to be built that doesn't work with the government but delivers coding models and knowledge retrieval systems and a whole bunch of really valuable products and technology, and it winds up being fine. But at the same time, you don't want this hairy, adversarial relationship with the government to go on for a long time.
1:15:15
I would like them to sell to the government and I would like Congress to pass a law addressing these digital surveillance issues.
1:15:51
Yeah.
1:15:58
And a lot of people are like, that's unrealistic — which I'm amenable to. But at the end of the day, if you don't have "it's legal or not legal" as your guiding standard, the only alternative is someone has to decide. And the implication of that not being a sufficient justification is that a private executive is deciding. And if AI is what it is, I think that's going to be — I used this word — intolerable. I didn't mean intolerable to me; I meant intolerable to those with power: to have a private executive making those decisions or not. And if we're going to have this very sort of brute analysis that laws flow from power — AI is a source of power.
1:15:59
Yeah.
1:16:52
So it's not just that. And I think this is where the supply chain designation comes in — again, I'm not endorsing it, but I think that's where the motivation is coming from. The goal isn't, fine, we just won't use Anthropic. I do think the goal is to hurt Anthropic.
1:16:53
Yeah.
1:17:08
And if you're not going to be subservient to us, you're not going to be allowed to build a power base, period. And again, I'm not endorsing all this. It's just a matter of — it's not a surprise this is happening. And this is a real risk factor that has to be considered in all these decisions.
1:17:08
Putting on my Dario hat, I'm thinking about a different way to achieve the goals with maybe less acrimony. And I threw out this idea that maybe the better solution is, like, work with the government, but then lobby for a surveillance act. And actually, yeah, I mean, I wish
1:17:33
the White House would come out and say, yeah, there's a digital surveillance problem, let's work on a bill. Like, I don't — yeah, probably another regret I have is sort of putting this all on Anthropic. That was sort of the angle I was concerned about. And that left me, I think, fairly open to the critique that this is just defending the White House's approach. And that was — again, I was trying to be at a higher level, saying, look, this is what's going to happen. But, yeah.
1:18:20
I'm just thinking from the perspective of finding a middle ground here.
1:18:23
I'm just thinking from the perspective of: if the White House is this immutable thing, but you are, you know, involved in Anthropic, one piece of advice would be, hey, okay, instead of going and having this confrontation with the government directly, go and start a political action committee that lobbies for change in the way that you want through the democratic process.
1:18:24
Yes, that is the ideal process. I understand why people are frustrated and skeptical about this.
1:18:49
Okay.
1:18:55
I used to have this debate a lot in the context of antitrust and aggregators. And one of my sort of theses about aggregators and antitrust is that the antitrust laws are fundamentally unsuited to dealing with aggregators, because antitrust law has historically been about control of supply, and the power of aggregators flows from control of demand. And so you end up with all these solutions that I call pushing on a string: you're just trying to get people to change how they behave, and that doesn't work very well. Like, Google has always been right: competition has always been just a click away. The problem is people aren't clicking. So solutions focused on the supply angle don't work in a world where the supply is there, just no one's choosing it. And therefore my prescription is you actually need to pass new laws, not try to retrofit these old laws to this new use case where they don't work. And the reaction is always: that's impossible, we can't pass new laws. And, okay, but realize the implications of what you're saying. I mean, I saw a tweet — again, I didn't "like" it, so I lost it forever, one of the most infuriating things in the world — but someone was like, I would definitely rather have Dario Amodei make these decisions than — and, to this tweeter's credit, he wasn't limiting it to Trump, because to me this isn't a Trump issue. This is an any-politician issue.
1:18:55
Yeah.
1:20:16
He said, I would rather have Amodei making these decisions than whoever comes out of our screwed up democratic process.
1:20:16
Yeah.
1:20:21
And points for the honesty, because that's the actual choice that is being put forward. And you could say, Congress isn't going to do anything, therefore Amodei should just decide — but appreciate that that is giving up on the democratic process and saying we should have unelected, unaccountable individuals making weighty decisions. And again, I understand the sentiment. It's hard to imagine Congress passing laws about anything. But just realize that implication is quite fraught.
1:20:22
Yeah, it's a huge change. I mean, I just spawned in believing in democracy, then came to understand it, and studying economics just reinforced my belief in the American project throughout my entire career. And now people really are discussing an entirely different world of governance, which has not been something people talked about publicly for a very long time, but it is here for sure.
1:20:58
Right. And they always come in on these Trojan horses that are eminently defensible. Again, I'm with Anthropic on the digital surveillance point. I've been concerned about it for years, been writing about it for ages. And there is an analogy to the monopoly point: you have all these laws that assume someone has to actually physically go somewhere and tap into a phone line. But if you can do it with computers at scale, suddenly all these assumptions that limited what the government could do magically disappear — not because the law changed, but because we got computers that could do the job of an individual at infinite scale. And AI, again, is going to do that on steroids. The idea that the NSA — by the way, this is my sort of, I had to admit this in the article: I was so confused why the Pentagon was so obsessed with domestic surveillance.
1:21:23
Yeah.
1:22:13
I didn't realize the NSA was part
1:22:14
of the Pentagon. John and I had the same moment.
1:22:16
Yeah, yeah, yeah.
1:22:19
You just sort of thought about it as an independent agency, like the CIA. But that made a lot of this story make more sense, right? No, exactly.
1:22:20
Yeah. I feel like a lot of tech people are, like, reading the Fourth Amendment today and understanding some of these pretty basic processes.
1:22:27
Well, yeah, but the loopholes are massive. Like, I'm not denying it. And it's similar to the chip thing with China: my prescription — for Anthropic to give in — is to allow these massive loopholes to be exploited, and for the NSA, allegedly in the service of investigating foreign adversaries, to basically surveil the domestic population through that process, which I think is bad. And the reality is, the nature of trade offs is you're choosing between multiple bad options. And at some point it's like, which team are you signing up for? They both suck.
1:22:34
Choose one.
1:23:24
What do you think of the messaging around the models themselves not being capable enough to be used in the context that the Department of War asked for? Because that felt like Dario was sort of speaking for all frontier labs. He said that these technologies broadly are not suitable for these missions just yet. I'm not sure that he has all of the information on the other side to know about the efficacy. He certainly understands his models and what's capable at the frontier.
1:23:25
I mean, I think that, yeah, I would, I mean, I would assume they're definitely not capable. I think that point is more of a precedent setting one. I think Anthropic's position is significantly weaker on that point. Like, at the end of the day, we either trust the military or not to make these sorts of decisions. That's why we have a military.
1:23:56
Yeah.
1:24:16
And so I just have a harder time with that, and I think the digital surveillance point is so compelling for them — though it may be my personal biases.
1:24:17
Totally.
1:24:26
I think it's a huge problem. Yeah. These various anecdotes — again, I hate the reporting on these, because you can tell which side each of the leaks is coming from. But, you know, this idea of putting forward these hypothetical examples, like, oh, you could call us and we'll figure it out — it's like, no, come on, be serious about this. So, yeah, I think that's a weak argument for them. That's why I almost focus more on the digital surveillance one, just because I think it is a very compelling argument in favor of the Anthropic position.
1:24:27
Jordy, anything else?
1:24:59
Oh, there's a lot more. What are you going to be tracking going forward? Obviously the story.
1:25:01
Mail.
1:25:07
Yeah. Good luck. Stay strong.
1:25:08
No, I mean, the OpenAI angle is obviously interesting. I didn't really get into OpenAI. It's hard to parse exactly what's going on. It seems to me they have agreed with the Pentagon that the Pentagon will be limited to lawful capabilities and will make its own judgments about weapon usage. And as I understand it, OpenAI is like, we will on our side be free to stop the model from doing digital surveillance — which sounds like you're in sort of a jailbreak competition. It's like, we're going to agree to have a jailbreak competition with the U.S. government — which, again, is an example of how fraught this is, and that's probably the good place to come down on. Now, there's obviously these dynamics of competing for the same talent base, being in San Francisco. This is part of — Anthropic has a local advantage in that most people in the industry, I think, are with them, and they have a national problem in that I think a lot of folks outside of tech don't understand why tech companies would ever resist helping the US government. And so it's kind of an interesting dynamic where I think OpenAI is in step with the broader public and very much out of step with sort of their talent base in San Francisco. And so that's going to be very interesting to see how that plays out.
1:25:12
Yeah, it's remarkable that Google has stayed out of the fray given all the Project Maven background and stuff. Like, they must be so happy. They're just like.
1:26:47
Well, that's the other interesting thing. This actually goes back to Google, I believe, where Google had the project. I think this is right.
1:26:55
Yeah, but.
1:27:05
But I think Google had Project Maven, which their employees objected to, and therefore that went to AWS.
1:27:05
Yep.
1:27:13
And then some combination of. I think the Pentagon is using Anthropic because
1:27:14
a higher FedRAMP designation.
1:27:21
That's right. So that's why Anthropic was already allowed for classified content and OpenAI wasn't. Again, I don't know the.
1:27:24
I've studied Maven, and it does feel like a wild story. I mean, it was similar — AI for the military, the same killer robot fears. The actual — I mean, Google was a subcontractor on that project. And what they were actually exposing to the government was TensorFlow APIs that would run on Google hardware. So they weren't actually writing any AI software, but the government wanted to effectively classify images from drones in the Middle East: see, that's a car, that's a house. And previously they had Air Force airmen just sitting there clicking, and they were like, okay, we're going to automate that. But it was still, like, scary — "don't be evil" working with the government, the military. And then there was a backlash, they pulled out, and then eventually they went back in under a new head of Google Cloud.
1:27:31
Yeah, I mean, it's hard to — and I speak for myself personally, I obviously have the biased angle because of Taiwan. I have the biased angle where I think there is, just in general, this very naive view of the world that doesn't understand why militaries are important and necessary. And I think Silicon Valley got itself in a lot of trouble by giving in to this naive mindset that we have no duty to support the military. And there's this tension that's been brewing for years, which is: are you an American company, subject to American law and, even beyond law, just morally compelled to support the US military, or not? And there's an equally American idea of moral conscience — I'm able to say no. That's why we have the First Amendment, right? This goes into: can the government compel a company to do something? It goes back to some of the questions that happened, you know, with the first Trump administration. And, you know, I've been on both sides of this.
1:28:21
And in the CBS interview, he said: we are a private company, we can choose to sell or not sell whatever we want, there are other providers. He's already sort of making this case.
1:29:27
Yeah.
1:29:38
Which again, is a case that I support. But the point here is there's always the question with, like a bubble or whatever. Is it different this time?
1:29:39
Sure.
1:29:50
And I guess that's sort of the question I'm raising: is AI actually comparable to every other technology that's come along? Or, if it has the potential to be a source of power going forward, it's going to be dealt with as such.
1:29:51
Yeah, that makes sense. Last question. We'll let you go. How happy should Ted Sarandos be right now?
1:30:06
I mean, I think he had the killer quote the last couple of days, where someone was asking, if this is such a jewel and it's so rare, isn't it a problem that you're missing out on it? And he's like, well, have you seen the history of Time Warner? Which I think sounds about right. I'm not sure how the entity is going to do with all the debt that Paramount–Warner Brothers is taking on. I think there's a bit where Netflix has always, in the very long run, been positioned to be the final buyer. Like, who else are content companies going to sell to? I feel like they've been spooked by YouTube a little bit, and they felt a need to push forward on that — bring the future forward. That was not allowed to happen. But that means their original plan, I think, is still in place. So probably pretty happy, all things considered. I'm going to say it's great.
1:30:15
Well, I'm excited to get back to Netflix coverage and more anodyne topics.
1:31:09
Remember, it was on Cheeky Pint you were talking about getting sucked into the Idol, and here we are.
1:31:15
So I put that quote at the beginning of my article: you may not be interested in politics, but politics has an interest in you. That was about Anthropic, and it was also about me.
1:31:21
What did you do? Welcome. Welcome to 2026. Well, we thank you for taking the time to come chat with us. Great to see you and fantastic article. We appreciate you, Ben. Talk to you soon.
1:31:30
Thank you. Have a great day.
1:31:39
Let me tell you about Phantom Cash. Fund your wallet without exchanges or middlemen, and spend with the Phantom Card. And let me also tell you about CrowdStrike. Your business's AI? Their business is securing it. CrowdStrike secures AI and stops breaches. And our next guest is here live in the TVPN Ultradome. We have James Beshara from Magic Mind coming on down for a very refreshing, very different pace of interview, hopefully.
1:31:41
Great to meet you, John.
1:32:08
Great to meet you.
1:32:10
They say never meet your heroes, but I'm fine.
1:32:11
We actually met. I believe we met briefly in 2013.
1:32:13
No way.
1:32:18
Because we were using Crowdtilt at Soylent.
1:32:19
Oh, well, then, yes, of course. Oh, actually, that was a hell of a meeting. We gave y'all a huge, like, $800,000 check.
1:32:22
They gave us a huge check. A physical check this big.
1:32:29
Yeah, we printed it literally like an hour before. I was like, do whatever we need to do to get one of those big, you know, TV checks. Because y'all had one of the biggest crowdfunding campaigns of all time at that point.
1:32:32
Yeah, yeah, that's right.
1:32:45
I was just chatting with Ajay from our team about Rob.
1:32:47
He was the one that was the leader.
1:32:50
Yeah, yeah.
1:32:52
And, yeah, that was such a wild thing, because we had applied to get on Kickstarter, and at the time they said, like, no food products — nothing but, like, board games, I guess, or whatever they were doing at the time. And you guys were like, we'll help you out, another YC company. You guys built it in like a weekend and it worked flawlessly. Also, I found out you guys were basically running digital ads for us and acting as our Facebook promotion engine, and it all cycled back — you were actively marketing for us, which was so helpful. We just didn't have any marketing skills; we were so busy with other things. If you guys hadn't done that, we probably would have ended that campaign way lower.
1:32:53
We took — I still do take — customer obsession to the extremes. And yeah, we saw it, and it was like, hey, this helps both sides, we should lean into it. But yeah. And for listeners, for viewers: Soylent was — I mean, that was so game changing. The whole Internet was talking about y'all for weeks.
1:33:36
Super viral.
1:33:55
And it was for Y Combinator. And by the way, both of y'all — I don't know, if someone's just tuning in, you know, from three months ago, they're like, hey, it's just the best hair on the Internet, and that's why I tune in. But these guys are journeymen in tech, and it's so cool to listen to y'all, because it's not a journalist that doesn't know how to actually build, or what goes into creating a company, a startup, or sees a trend from a journalistic point of view.
1:33:56
But you guys, man, you've been in
1:34:28
the arena — the founder perspective. Which is why... what are you bringing?
1:34:30
Let's see.
1:34:35
A s*** ton of Magic Mind for y'all.
1:34:37
Let's do a round. And as soon as I saw — yeah, we should do some shots — as soon as I saw, I think it was on a Sam Altman interview
1:34:38
Yeah.
1:34:47
That Jordy was doing and he was chugging a magic mind.
1:34:47
Oh, yeah.
1:34:50
I'm going for a Max.
1:34:52
Oh, yeah.
1:34:53
Let's.
1:34:54
Let's magic mind Max it today.
1:34:55
I love it.
1:34:58
So, yeah, yeah, explain. Explain what this is. What are we consuming?
1:34:59
Yeah, I'll get the 10 second commercial out of the way. This is peptides and... yeah, exactly: peptides, amphetamines, amphetamines and steroids, all in one. But you shake it, slam it back. Jordy's the king of the slams.
1:35:02
Delicious mental performance. Oh, it has some caffeine in there.
1:35:15
And it does. Max is unique in that it is the world's first time-release energy shot. You couldn't do time release in liquid form until about two years ago, and we were the first to put it in a shot.
1:35:18
What did you learn from building software that you applied to CPG?
1:35:33
Oh, my God.
1:35:36
I think, like, right away, my experience as a customer has been this iterative approach — basically versioning out the product. The product when you released it versus two years later was like night and day, in my experience; just a much better product. Which is something that, unfortunately, a lot of CPG brands don't do: they kind of make their product, they ship it, they get it into as many stores as they can, they sell as much as they can, and maybe they don't even iterate that much on the product — which is just extremely high risk from my view. But that's sort of the standard.
1:35:37
It is. It's a really high-stakes experiment when you start shipping it to stores and you get into a thousand doors and then you realize, hey, this is kind of a B product, we could make it better. And yeah, we've chatted about this over the years, basically since the beginning of Magic Mind: taking, excuse me, everything that I could from Silicon Valley, and my background was in building software, into building a drink company. Which meant the first three and a half, four years was just improving, probably 150 iterations, making the product better, better, better before we went into stores. Because when you are D2C, and every time I say this out loud I can't believe it's the case, but as the first D2C energy shot, I couldn't believe no one had done it before, one of the huge affordances is you can have version 1, version 1.1, version 1.5. And each time that we made the product better, we saw the retention curve go up, up, up. And then we're like, all right, this thing's ready for retail. And now, as of last month, it's the number one health shot in the country in the natural channel.
1:36:09
That's wild. Yeah. I remember when I first tried it, I was like, okay, I like what this gives to me, but it hurt my stomach. And then you were like, try it again. I tried it again, fixed it. And so I think that, oh, God
1:37:14
bless the early testers. It was rough early on, and it was like, hey, I'll eat nails if it'll help my productivity, focus and flow. But it turns out most people wouldn't. And, yeah, it was actually Biz Stone. Huge shout out to Biz Stone, co-founder of Twitter. He's the one that put us in touch with... Yeah, really? No way. What college did y'all go to?
1:37:25
Northeastern.
1:37:46
Oh, no way. Yeah. So brilliant mind, investor. He's the one who put us in touch with this oncology manufacturing company. This is the most over-researched beverage on the planet. So we worked with them and utilized this technology to make everything so small that you can make it taste better than any of these ingredients as I'd ever tasted before. So my favorite ingredient is called Bacopa. Bacopa monnieri decreases impulsivity by up to 50%, which is amazing for focus and flow.
1:37:47
If you want to be risk on.
1:38:15
That's right. Yeah exactly.
1:38:16
If you, if you want it on the track. I mean performance takes many different types of.
1:38:17
Your advertisers don't want it for all of the logos that are happening, happening Right, right all over here.
1:38:21
I, I, I was skiing, I saw a guy do a backflip on skis with no helmet.
1:38:26
That's impulsivity in a beautiful way.
1:38:30
It was not on magic.
1:38:33
He needs a Magic Mind. Put on the helmet. Because that is actually extremely dangerous. Do not recommend doing a backflip on skis with no helmet.
1:38:34
That's an, that's a, a non magical
1:38:40
behind me for many people, many people
1:38:43
when you don't want to jump into social media or the 50 texts on
1:38:44
your phone, and actually focus. Exactly. Yeah.
1:38:48
You want to be less impulsive. But it tastes terrible. Until we found this technology.
1:38:50
So you took the iterative approach. You actually brought this kind of hardcore R&D to a category where many people would have just, again, shipped a product and been like, it's good for me, let's run it.
1:38:54
Yeah.
1:39:06
What did you not take? What did you not take? Because I feel like that's great question. So much of the way that you run the company is incredibly unique and kind of at odds with maybe how you built your last company.
1:39:07
There are a handful of things, and this is one of my favorite things to talk about, just how we build the company at Magic Mind. You know, there's that line in literature: once you know the rules, you can break the rules. And I think my 20s were three different startups, and a lot of it was just failure after failure after failure, but learning how other companies did it, how the playbook would be done. And then, getting to a place of recognizing that, and honestly just being able to fund it with my own capital, I was like, I'm gonna do this my own way. So one of the things is we don't have any junior employees. About 10 senior employees, everybody's senior. We have plenty of great contractors, plenty of great agencies that we work with. But because of that, one of the things that we do is no meetings. It is all asynchronous. Once a month we have a team meeting, and if you need a meeting, go for it, but the default is asynchronous. We love Looms, emails, texts instead of Slack. I hate Slack. Voice notes. But this senior aspect of the team, no junior team members, means that the investment is very expensive per employee. But man, it's like a hot knife through butter. Whenever there's a challenge, you have senior people through and through on every aspect of the business. And I think with my earlier startups, and with most startups, a lot of times you're hiring people that are doing something for the first time and saying, hey, I know you've been a great engineer, we need you to PM as well for this thing, because there's three or four of us. With Magic Mind it was like, no, I want someone that's done this two, three times before and is a veteran that knows how to do this.
1:39:20
It's super key. Especially in retail roles.
1:41:01
Yeah, exactly.
1:41:04
Because, like, it's so hard to get on the phone with a buyer and actually build the trust if you haven't already sold them the last thing. Oh, you were at the last company; you were the one that brought vitaminwater into Costco. So the Costco buyer loves you because you brought him a banger product. They're going to try the next thing. It's much harder to just take a cold call and break through.
1:41:05
And you know, I'm sure you know this with Soylent. In fact, what are some of the things that you remember from the retail, the first stages of the retail world, you would never.
1:41:23
It's been a lot better with Lucy. But yeah, very, very driven by trade shows, relationships, just being in some massive conference room, getting to know people over years and years and years, building trust. And then the only thing that can really accelerate it is that example of like bringing in someone who has a relationship on the other side where they delivered the goods and then they are putting their reputation at stake saying, the last time I was with a smaller company, you took a risk on me. You were successful. We both made money because this product, you bought a bunch of it, you took your limited shelf space and you gave it to my previous company and it moved and you made money and you looked good. You got that promotion. Now you're vp. Take that risk again with this next thing that's always been the best.
1:41:32
Take the risk, and then also do it in the right sequential way. I think one of the biggest cardinal sins in retail, and I didn't realize this coming from software, is that you actually don't want to move fast and break things. Let's say you get what seems like a dream partner in Walmart, and the product doesn't move. I didn't realize this until getting into retail, and my co-founder William is a genius on this stuff, on just avoiding it. We say this internally a lot: what makes you good is what you do. What makes you great is what you don't do. The trap that is a doom loop in retail is you expand too fast. Let's say you get Walmart. You're like, hell yeah, we got Walmart, this is the 800-pound gorilla. And then it doesn't move, and then you have to start putting in.
1:42:19
It's not like Facebook, where you launch an ad, and if it doesn't work, you just wind it down.
1:43:03
No, now it's like, Walmart's like, hey, by the way, if you want to stay in here, and you've got one bite of the apple, over the next 90 days you've got to spend 400K in promotional spend. You're like, 400? We didn't expect that. We had 270 grand in the bank. You all aren't going to pay us for six months because of the payment terms you negotiated. And they're like, well, then we're going to have to take you off the shelves. So then you go and raise some last-ditch-effort round that goes towards it. You don't get quite to the 400, you only get like 300. And then it starts this slow doom loop, and then you can't take it off the shelves, because the worst graph in the world isn't flat, it's one that goes up and then down. Your revenue is flatlining or going down and you can't raise any investment. And I would say, conservatively, three out of four CPG brands get into that trap because they expand too fast.
1:43:08
Everyone starts DTC. The retail transition is the make-or-break moment for basically every new brand, in my opinion. And the hardest part for me is always
1:44:00
rethinking the CAC to LTV math. It's pretty easy in e-commerce to think about payback period. Maybe you're not doing ROAS, maybe you're doing LTV to CAC, but you're still looking at like a one-year payback. And when you get into retail, you're looking at much bigger tickets. It's not 100 bucks on an ad, it's $100,000 or a million dollars of commitment with some chain. And then you might not see ROI on that because your margins are lower. You might not see ROI on that
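The payback math being described can be sketched quickly. All of the numbers below are illustrative assumptions, not figures from the conversation; only the $100-ad versus $100K-to-$1M-commitment contrast comes from the discussion itself.

```python
# Illustrative sketch of the D2C vs. retail payback math discussed above.
# The dollar amounts are hypothetical examples, not figures from the show.

def payback_months(upfront_cost, monthly_gross_profit):
    """Months until cumulative gross profit covers the upfront spend."""
    if monthly_gross_profit <= 0:
        return float("inf")  # spend is never recovered
    months = 0
    recovered = 0.0
    while recovered < upfront_cost:
        recovered += monthly_gross_profit
        months += 1
    return months

# D2C: a $100 ad acquires one customer worth ~$15/month in gross profit.
d2c = payback_months(upfront_cost=100, monthly_gross_profit=15)

# Retail: a $400,000 promotional commitment to a chain, thinner margins,
# say $40,000/month of incremental gross profit from the placement.
retail = payback_months(upfront_cost=400_000, monthly_gross_profit=40_000)

print(d2c)     # 7 months: inside the ~1-year payback window
print(retail)  # 10 months, but the ticket is 4,000x larger and committed upfront
```

The point of the contrast: the formula is the same in both channels, but in retail the upfront number is orders of magnitude larger, margins are thinner, and the spend is committed before you know whether the product moves.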
1:44:11
And you're distracted from the customer, so you don't really know what they're responding to, what messaging they're responding to. This goes back to the two obsessions that I have. One is the product: everything in the product, every ingredient third-party tested, everything clinically backed at the exact dosage, and that customer experience. And the other obsession is the communication with the community. And one of the things that you can do in D2C that you can't do in retail is you can iterate like crazy on the exact... every word on here.
1:44:40
Yeah.
1:45:10
Has been tested with millions of eyeballs. You go into retail without that layer, or you go in too early, and it's a mess.
1:45:10
There are some people that just have it, though, and they can just one-shot everything and go straight to retail. But it is very rare. And usually they have a retail background.
1:45:18
Yes, they have a retail background. And, and one of the things that
1:45:27
Survivorship bias. There was someone, they got really lucky perhaps, started some company where their whole shtick was the packaging is going to be gold and it's just going to jump off the shelf at you. And it did, and they sold the company for a couple hundred million. Oh, David's kind of doing that now.
1:45:31
But I was thinking total veteran David's a perfect example where it's like every move is from.
1:45:46
What do you think about pricing? I always think of, was it Coia, that was like $12 a bottle or something, and they sold for a great outcome, and it just flew off the shelves. It was almost like a Veblen good, like a Lamborghini or a Rolex, where people saw the higher price and immediately thought, this is a better product, so I'll buy it. And that actually drove sales, as opposed to duking it out for, oh well, we're the same price as Red Bull, so people will compare us to that. Versus, no, the price tag is marketing. What do you think about that?
1:45:50
There's so much discussion around that. I'm an investor in a handful of CPG companies, and you can get caught up in a lot of triangulation, and you end up with, it's like, you know, Homeland-style triangulation with the red string: we'll do this in these stores and this in these stores. I will say Erewhon sometimes gets into like $8 shots of Magic Mind, which we're not fans of, but it is the number one shot there, so it is working. So maybe they've got a secret to it. But there is a really simple framework that we use, which is: I wanted it to be the best health shot in the world.
1:46:23
Yeah.
1:47:05
So in terms of everything from the vitamins, the D and the C, and all of the B complex for immunity, because nothing will sap your productivity like getting sick, all the way to the Bacopa and citicoline. One of the things that caffeine does is it's a vasoconstrictor, so it will restrict the blood flow to the brain, which is terrible for lateral thinking and terrible for switching context. It's not great for a conversation when you're like, oh, I really want to absorb that new information. But it is great for alertness. But if you add in something like citicoline, specifically this supplier we have, Cognizin, which took like a two-year wait. That's really cognizant. It's amazing. So it's in Magic Mind. And citicoline improves blood flow to the brain, so that improves lateral thinking, which is the academic term for creativity. All of these things, you stack them up and they're pretty expensive. So I was like, I just don't care what the price is, I want it to be the best in the world. And it's exactly what I had been taking for seven years before even thinking of making it into a product. Then we worked backwards and we were like, all right, it's got to be this price. In the beginning, 5.99. Now it's down to about 3.99 because of scale. Our batches are about 2 million; we're producing about 2 million bottles a month. So now with scale, you can get the price way down. But man, John, it was pushing a boulder uphill to get people to be comfortable with that price point in the beginning. But I was like, nope, that's what the price is going to be.
1:47:06
Makes sense.
1:48:29
What's your relationship like with the Internet?
1:48:30
Personally, dude, we duke it out daily. No, no, I just, like.
1:48:34
No, specifically. Walk me through the last three days of your Internet usage.
1:48:41
Oh, great question.
1:48:49
Because, like, I want to. I'm trying to gauge how online you are because I feel like the Internet is just a stress machine. Dude, it feels like it works.
1:48:50
This is my device.
1:48:59
Oh, interesting.
1:49:00
You don't carry the phone.
1:49:01
Yeah.
1:49:02
So this is a Monday. I have it right now because of the GPS to get here, and Jordy and I live a few blocks from each other. And so I have the phone, but, like, literally, I didn't charge it last night. It's at 10%, gonna die. I rarely use this thing.
1:49:02
Rare too.
1:49:18
It is. Yeah. And I rarely use this thing. My relationship with the Internet is
1:49:18
use
1:49:27
it when necessary, and then don't use it any other time of the day or week. On the weekend, specifically, we have three little girls, and I go watch-only and pay the extra, whatever, 14 bucks for it to have cell service, so that it's basically a dumb phone.
1:49:27
Interesting.
1:49:43
And I've got text and I've got phone calls on it. But outside of that, it is a dumb phone.
1:49:44
You can't scroll.
1:49:49
It can't scroll. And I create apps, like, with Replit, where I create an app that brings up quotes of the Bhagavad Gita and stuff. That's the only thing. When I sit down on the toilet, I might read a little bit of the Bhagavad Gita on my watch, and then I'm, like, ready to rock. But I'm not where I was 10 years ago, where it's like, every tiny little moment, let me jump into the amusement park that is my phone and then get lost for 45 minutes. God, screen time's amazing.
1:49:50
Yeah, I know.
1:50:20
Scroll more.
1:50:21
I will say, without meetings, I can work deliberately. I will work from my iPad, try to do 99% of my work from my iPad, shout out to Replit there, because they've got a great iPad app. And there I will listen to y'all.
1:50:22
1:50:37
Like this morning, while I'm working out, listening to y'all either on, you know, the podcast app or on YouTube. But, man, the iPad is, I'm going on the offensive, I'm going to actually create something. The watch is, I'm going to be on the defensive side, if my wife needs me; otherwise I'm with the kids. But this is somewhere in between, where I'm like, I can't really create with this phone, but, man, do I get sucked into rabbit holes.
1:50:38
And you get into a situation where, you know, I've got 30 tabs open on my device, which is just like pure chaos.
1:51:08
Right.
1:51:16
And these devices, just due to their nature, don't work very well for multitasking.
1:51:16
The two things that I talk about so often with founders, as an investor in about 100 startups, and I talk to them all the time: again, it's what makes you good is what you do, what makes you great is what you don't do. So deliberately, for us, really mitigating meetings. And I end up talking about the devices that I love, and the number one is the Apple Watch as a dumb phone. When you're at the gym, don't bring the phone. Obviously most people do bring their phones, but you might get sucked into an email. Certainly with kids, my goal with them is, by the time they're 10, spend 20,000 hours with them. I think I was listening to a podcast on my watch, actually, so no scrolling, and this guest said that that was their goal. This was like a year and a half ago. I don't remember the guest, don't even remember anything about the other points that they were making in the conversation. But that stuck with me: 20,000 hours before they're 10. And his logic was so sound. He said he could make $50 trillion and never get that decade back with the little ones. And so I was like, man, I want to hit 20,000 hours with my kids. And that was basically the moment where I was like, the weekend's no-phone, and I'm just going to be with them in those hours. I'm going to be completely with them and not sucked in, because I just know myself too well.
1:51:23
How is no phone? Just all Apple Vision Pro, mostly.
1:52:39
How is your.
1:52:44
And I vibe coded auto-scroll. So I don't scroll, it just scrolls.
1:52:46
Yeah.
1:52:50
And it has all the TikTok and the Instagram reels and the X. Monitoring the situation, dude. And to each their own.
1:52:50
Virtually, they're with you.
1:52:55
Exactly, exactly.
1:52:57
I agree.
1:52:58
How has your approach to angel investing evolved? Because I feel like angel investing can start out as like a fun hobby, a way to blow cash, but then you get caught up in this FOMO and you're on other people's timelines. You're kind of fighting for allocation at different points. What's your approach?
1:52:59
It used to be more is better, and it's true, so much of the game of investing is you go wide because of the asymmetric returns. One investment...
1:53:18
You don't know who the next Gusto is exactly.
1:53:30
An angel in Gusto?
1:53:32
I was.
1:53:33
Yeah, that was my first angel check. They were in our batch at Y Combinator. I know, it's nuts.
1:53:33
That's amazing.
1:53:38
Let me tell you about Gusto everybody.
1:53:39
The unified platform for payroll, benefits and HR built to evolve with modern small and medium sized businesses. Go sign up.
1:53:41
Wow. That has never happened before where I've been able to mention angel investment and
1:53:49
then get a live ad read.
1:53:53
Live ad read.
1:53:55
What's your website?
1:53:55
Mine?
1:53:57
Yeah.
1:53:57
Jamesbeshara.com.
1:53:58
yeah.
1:53:59
But the company, Magic Mind.
1:53:59
Oh, magicmind.com.
1:54:01
is that on Shopify?
1:54:01
It is.
1:54:02
That's amazing. Shopify is the commerce platform that grows with your business and lets you sell in seconds online, in store, on mobile, on social, on marketplaces. And now with AI agents.
1:54:03
Continue.
1:54:11
Shopify is great. And as we get into the angel investing conversation, I bet a lot of ads will pop up, because I've been fortunate enough to invest in a handful of great companies, but I went wide, honestly. Gusto did so well, and then a few others did really well. Mercury did really well. And then I'll tell you my biggest miss.
1:54:13
What's that?
1:54:35
OpenAI and I'll tell you the story. It is.
1:54:36
Can I curse?
1:54:40
Can I curse on here? Total, total effing miss. And it was so brutal because... oh, it's all good.
1:54:41
There'll be another OpenAI. You'll have another shot.
1:54:51
It's way different than missing just, like, a unicorn that comes up. Like, you can find another unicorn.
1:54:55
You're good.
1:55:00
I did the math the other day. It's basically missing 20 Googles. Google went public with a $20 billion market cap, so it's missing 20 Googles. Now with their latest funding, it is straight up missing 30 Googles. Yeah, 35, 40 Googles. Which is cool.
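The "missing N Googles" arithmetic can be written out. The $20 billion IPO market cap is the speaker's round number; the OpenAI valuations below are inferred from his multiples, not stated on the show, so treat them as assumptions.

```python
# The "missing N Googles" math from the conversation.
# GOOGLE_IPO_MCAP_B is the speaker's round number; the valuations passed in
# are back-solved from his multiples and are assumptions, not stated figures.
GOOGLE_IPO_MCAP_B = 20  # Google's market cap at IPO, in $B, per the speaker

def googles_missed(valuation_b):
    """How many 'Googles at IPO' a given valuation (in $B) represents."""
    return valuation_b / GOOGLE_IPO_MCAP_B

print(googles_missed(400))  # 20.0 -> "missing 20 Googles"
print(googles_missed(800))  # 40.0 -> the "35, 40 Googles" at a later valuation
```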
1:55:01
That's all right.
1:55:21
It's fine.
1:55:22
I never calculated the amount.
1:55:22
You needed the magic.
1:55:24
That's why you need the Magic Mind to clear your mind. Look, I don't need to scroll.
1:55:25
And there are so many things in life, like the no meetings, where I'm like, I know I'm leaving money on the table. But I couldn't imagine any other life. I feel very fortunate. But I'll tell you the story. I've only shared this once, but it definitely replays in my head pretty often. So Sam Altman and I, we were advising and building out YC Research for universal basic income. This was in 2017. So every month we would meet in a conference room in this dingy little office. And that dingy little office, the conference room, was where they were doing OpenAI research.
1:55:29
Yeah.
1:56:10
So every month. And I was a full-time... we had sold my last company to Airbnb. I was a full-time angel investor. Yeah, full-time angel investor. Waking up every day, working, thinking about, yeah, what's the next game-changing thing? And I'm like, yeah, Sam and the team, they're trying to do this AI stuff. And it's research.
1:56:11
That can be a problem when you're too close to an operation. Whereas if you just get a pitch once, you're like, oh, this makes so much sense. But when you're getting all the information at all times, and they're like, yeah, we don't really know what this thing's going to be. And we ran into this issue.
1:56:34
Has that happened with friends where you dinner?
1:56:48
It sounds terrible. I ended up creating a rule of, like, a friend starts a company, like a real friend starts a company, just invest, no matter if, you know, they've got this issue with how they operate, or they've got this blind spot, or, you know, all the problems that they're facing. If you're too close to it, you can just overthink it, dude.
1:56:52
I told Immad at Mercury, because we had built financial technology and he's graduated. We were at the Airbnb cafeteria and he told me the idea for Mercury, wanting to start a bank for startups. And I was like, don't do it, dude. And I spent 45 minutes trying to convince him not to do it. And then, like, two days later, because we kept texting about it, he was like, I appreciate the input, but I'm gonna do it. And I was like, well, okay, I'll invest.
1:57:10
There you go.
1:57:37
This is not a good idea. And good Lord, was I a total idiot. But yeah, the OpenAI one. I was sitting there every month saying no again and again and again. And even with the whispers of, yeah, well, I think we're gonna have to spin it out, make it a for-profit, and would you want to invest? And I'll be honest, smart investors got in my head that were like, AI is so far off, that's like a 2035 thing, we're so far off. And I was a total idiot. Well, 40 Googles later. I said 40.
1:57:37
But plenty of other opportunities, plenty of other investments, plenty of other products.
1:58:11
Congrats on all the progress. It's good to be humbled a little bit.
1:58:15
Exactly.
1:58:18
You're on too much of a hot streak. It's good to be like, okay, I gotta lock in.
1:58:18
That is the benefit of building, I think, for anybody listening. I think it's what makes you guys so good at what you do. It's what I think makes any creator really good at what they do. There are very few things like it. It's like going to the gym. You think you're going there for the gains, but man, what you really get is the ego dissolution. It is so humbling. And anytime you put yourself out there, you think you're doing it for the gains, but you fast forward 10 years later and you're...
1:58:24
It's the journey.
1:58:52
Yeah, it's the journey. The ego, the humility that comes with it, that only comes with doing it. Because, man, if you're just number two or number 2,000 at a big company, you're just living in your head, like, I would have done it this way, I would have done it that way. But it is by putting yourself out there that it grows your awareness in a pretty, I'd say, invaluable way.
1:58:53
Well, it's a great journey.
1:59:16
Well said.
1:59:18
Thank you again. Thank you for the gifted oversized check. And thank you for the oversized delivery of Magic Mind.
1:59:18
Thank you all for doing what you do in such a unique way.
1:59:24
We appreciate you powering the show.
1:59:27
We'll talk to you soon. Now let me tell you about Gemini 3.1 Pro. Gemini 3.1 Pro is here with a more capable baseline. It's great for super complex tasks like visualizing difficult concepts, synthesizing data into a single view, or bringing creative projects to life. And let me also tell you about Restream: one livestream, 30-plus destinations. If you want to multi-stream, go to restream.com. And up next we have John Quinn, live in the TVPN Ultradome for the second time. Thank you so much for making the trip down to our studio.
1:59:30
Welcome back.
2:00:02
John Quinn, of course, is the most feared lawyer in America. We are huge fans. Tyler over there wears that hat every day. He is obsessed. You have a lot of fans.
2:00:04
I got a couple for you. For you guys.
2:00:14
Yes. And, you know, I'm your neighbor. I live right, very, very close to you.
2:00:15
I know you went to school with
2:00:20
one of my kids. I did. I went to college.
2:00:21
Polytechnic school. I don't know which one. I had five who went there.
2:00:23
I was in the same class as Jamie.
2:00:25
Oh, okay.
2:00:26
Yeah, yeah. It was a great time. Anyway, we're not here to talk about the minutiae of life in Pasadena. We're here to talk about tariffs. We're here to talk about law. What have you been tracking? How have you been processing the back and forth on the tariffs this year?
2:00:27
Well, 10 days ago, the United States Supreme Court ruled that President Trump didn't have the legal authority to impose these tariffs, which were kind of the cornerstone of his international trade policy, maybe his domestic policy also, and provided a lot of funding. He's collected $600 billion worth of tariffs. He imposed those tariffs under something called the International Emergency Economic Powers Act, which gave him a lot of discretion under the administration's interpretation of it. I mean, we saw this, you know, one day it was 35%, then it was 50%. There were two categories. There were the fentanyl tariffs that affected some countries, Mexico, Canada, a couple of others. And then there were the Liberation Day tariffs, which were across the board, different deals for different countries. That statute had never been invoked before to impose tariffs. It was challenged in the Court of International Trade in New York. He lost there, and it ended up going to the United States Supreme Court. The argument was a couple of months ago, and the argument didn't go very well for the administration. Reading the tea leaves, I think a lot of people thought the administration was going to lose. And then, sure enough, the decision came out on February 20. A lot of people thought the court would rule that these really weren't emergencies. The emergencies he was relying on were fentanyl and the balance of trade being an economic emergency, and a lot of people thought the court was going to say, you haven't shown that that's an emergency. But that's not what the court went off on. The court ruled that you simply couldn't impose tariffs under this statute. It had never been done before. There's no reference to tariffs or customs or taxes or anything like
2:00:42
that in the International Emergency Economic Powers Act itself.
2:02:22
And the court said, wrong statute. Yeah.
2:02:25
And zooming out, where do tariffs typically come from? What's the correct path?
2:02:28
Well, there are a number of different paths. There's Section 122, which he's now invoked. There's a national security regime where you can impose tariffs, but it requires an actual investigation done by, I think, the Department of Commerce or some other agency. So a record has to be created. The president can't just, on his own, say, you're going to pay 35%; you, India, you're going to pay 50%. So there is a whole other regime available. And the day he lost, he was ready for this. He announced there's another statute, Section 122, that permits him to impose up to 15% tariffs for 150 days.
2:02:31
Yeah.
2:03:11
After 150.
2:03:12
And then theoretically, you could just pause for a second and then start up again.
2:03:13
Is that true?
2:03:17
I don't think so. I think the courts would see through that. But after 150 days, Congress has to act.
2:03:18
Yeah.
2:03:23
So he can do it for 150 days, and then to extend it, Congress has to act.
2:03:24
And is that passive in the sense that if Congress doesn't act, they don't continue, or by doing that, 150 days, it forces a vote in Congress?
2:03:29
I don't know that it forces a vote. I think Congress has to take affirmative action.
2:03:38
They're the next one.
2:03:42
So the day he lost in the Supreme Court, he was ready for this. He announced, okay, I'm invoking this Section 122. 10% tariffs on the world.
2:03:45
Yep. And he could have gone up to 15.
2:03:54
Well, he did it the next day.
2:03:55
Okay.
2:03:57
He says, I thought about it again, 15, 15 now on the whole world. So that's where we are right now. But the clock's running.
2:03:57
And what about... the Supreme Court didn't give any guidance on how the cash generated from the tariffs should be returned, right? So there's also kind of a gray area.
2:04:04
They didn't address that. But, you know, honestly, guys, that's pretty straightforward. There's $670 billion in tariffs. There are a lot of claims. Many, many companies have already filed claims, and there aren't a whole lot of defenses to those claims. He has said he's going to do everything he can essentially to delay it, maybe so it's not on his watch but the next administration's. But there aren't a whole lot of defenses. These are good claims. There's one issue: in some instances, the costs of the tariffs have been passed on to consumers, so the government might presumably have a defense that, hey, wait a second, you don't get all this back.
2:04:16
Your profits didn't go down.
2:04:52
Yeah.
2:04:53
There was no damage.
2:04:54
You passed it on. So you saw the revenue anyway.
2:04:54
Interesting.
2:04:57
And that raises a question about whether these secondary parties, parties that paid for things that included these tariffs, which have now been ruled illegal, whether they can bring claims. That's an interesting question. So I think we're going to see some of those claims.
2:04:58
So for a company that's trying to get a. A big refund, maybe in the hundreds of millions of dollars, I imagine for some of these companies, is the first step just to file a claim or do they need a lawyer? Is this going to be a lawsuit on a per company basis? How do you think that.
2:05:11
Yeah, I think it's a lawsuit on a per-company basis. They have to file a claim. Lots of lawsuits; a lot have already been filed. We're filing lawsuits. Okay. I mean, one of our partners, a guy named Dennis Hranitzky, is representing a number of clients already and bringing these claims. But what folks need to do, if you want to pursue this: get your documents in order, get your records in order, figure out what you paid and when. And yes, you probably need to engage a lawyer. I don't think you want to do this pro se, representing yourself.
2:05:25
ChatGPT. Yeah, exactly.
2:05:57
You don't want any hallucinations and a screw-up of what is otherwise a powerful claim. The claims are brought in a very obscure court known as the Court of International Trade. It's in New York. You bring the claim there and you pursue it. The government has.
2:06:00
How staffed up is that court? How could they possibly process what could end up being a.
2:06:17
It could be an administrative burden. But the issues are pretty simple here. I mean, it's, you know, he invoked this act, they paid the tariffs. The Supreme Court says he didn't have authority to do that. And, you know, the government refunds tariffs all the time. It's kind of a routine thing. So it's not like this hasn't happened before.
2:06:25
Can you take me back to the Supreme Court battle and help me understand what's involved in fighting a case in front of the Supreme Court? How much do lawyers know about the positions of the various Supreme Court justices before they walk in the courtroom? How does all of this play out? What goes into a Supreme Court case? How high stakes is it?
2:06:45
Oh, I think this was super high stakes. I mean, this was a major decision for this administration. Maybe the biggest trade tariff decision ever. And I don't know that there were a lot of decisions by these justices before that could help people kind of predict what their reactions would be to this. The majority opinion was by Chief Justice Roberts. There was a dissent by Brett Kavanaugh and some others. But, you know, in preparing for it, I don't think it was different from any other Supreme Court argument. Obviously, that's the biggest of the big leagues. So people prepare carefully for those arguments.
2:07:07
How quickly.
2:07:49
Oh, sorry, go for it. How quickly did the legal community come to a consensus around the impact and longevity of the tariffs? Because from our perspective, the market reacted violently on Liberation Day. It was the worst day in financial markets since COVID effectively. But it feels like you've known or you've been able to predict that these tariffs might get struck down sooner than maybe most of the financial community. So what was the process like for the legal community to get consensus around what might happen at the Supreme Court level?
2:07:49
I think it was pretty straightforward, especially after the oral argument before the Supreme Court, it went so poorly.
2:08:23
Oh, interesting. So it wasn't. It wasn't that months ago, before even oral arguments started, people were looking at precedent and how the different justices.
2:08:29
No, sure, sure, they were. I mean, you know, the administration had lost below in the court, so that, you know, people knew that this is definitely in play.
2:08:37
Okay.
2:08:47
So, I mean, I think the issues were pretty straightforward.
2:08:48
Sure.
2:08:51
Does this act give him the authority?
2:08:52
Yeah.
2:08:54
And as the Supreme Court found, there's no history for it. It had never been invoked before for this purpose. There's no reference to tariffs or customs in the act. So as legal questions go, you know, I don't think it's that complicated.
2:08:55
Okay. How do you think about the speed, the time between Liberation Day and the Supreme Court ruling that felt like almost a year. Is that deliberate by the administration? Is that just a function of how busy our Supreme Court is? Like, why does it take so long to actually get a decision from the Supreme Court?
2:09:10
Well, some decisions take longer than that. Look, everybody. You know, the Supreme Court reads the newspapers, as somebody said during the FDR administration.
2:09:28
Oh, that's funny.
2:09:38
They read the tea leaves. They know how. There's no doubt they knew how important this was.
2:09:39
So they know what's coming. Even before the case.
2:09:44
I'm sure they see it coming. But in terms of why it took so much time between the oral argument and the rendering of the decision, it was, you know, people were waiting, wondering, but it wasn't an extraordinary amount of time. But I think because of the known repercussions of this decision, the court wanted to be really, really careful and straightforward.
2:09:47
Yeah. Are there any other big new trends in cases that you're tracking that might make it to the Supreme Court this year? I'm thinking particularly of some of the artificial intelligence debates that we're having. We were just talking to Ben Thompson about the relationship between Anthropic and the Department of War. And. And there's a lot of folks that are asking for clarity on the regulatory side, whether an act of Congress, new laws, new leadership, more messaging, but the Supreme Court could play a role there. Is there anything that you're tracking in terms of, like, upcoming cases that will be particularly consequential for business or technology?
2:10:07
Well, we all know about this issue about using copyrighted materials to train large language models. And we have cases pending, you know, relating to literary works, visual works, musical works. There are dozens of these cases percolating their way through the courts. There have been a couple of decisions up in San Francisco now, the early decisions at the district court level, the trial court level. The decisions were kind of limited because of the unique circumstances of those cases. One was a Meta case, and I'm forgetting the other case where they prevailed at the district court level. But before these cases can get to the Supreme Court, they have to go through that process: a decision at the district court, then we have a court of appeals, and then the US Supreme Court. I mean, you know, our Supreme Court doesn't hear very many cases.
2:10:42
How many do they hear?
2:11:35
I think it's around 80 cases a year.
2:11:37
Oh, wow.
2:11:41
You compare that, like, to. To India, where the Supreme Court there hears thousands of cases a year. Of course, they have hundreds of judges, I think, on their Supreme Court.
2:11:41
Yeah.
2:11:52
But, yeah, I think there are definitely issues in the AI world that could reach the Supreme Court.
2:11:54
So walk me through the thought process. If you were advising me, and I believe that an AI lab has trained on my intellectual property, and that lab is well funded, and they come to me with a settlement: why would I want to turn a settlement down and fight it out, take it all the way, go the distance all the way to the Supreme Court? What are the different trade-offs that plaintiffs are making right now?
2:12:00
You need to create a downside for the AI company. They've got to see that you have the staying power, that you're prepared to litigate this case.
2:12:28
So you need deep pockets. So if you're just a single author of a book, then you're not independently wealthy. They're probably gonna say, hey, you'll take the settlement. But if you're a class of.
2:12:38
Well, you know, it depends on what the settlement is, I guess. Yeah. But, you know, my experience is that defendants don't write checks until they see a downside.
2:12:49
Got it.
2:12:58
And so that involves. You really have to kind of establish credibility that we're serious about this, we're willing to pursue it. Anthropic settled a copyright class action for a billion, maybe 2 billion. But those were very good lawyers who had put together a case, and they had gotten Anthropic into a position where they were facing whopping damages if they lost at the trial court. It could have been much, much, much more. So that's the kind of downside you have to create.
2:12:59
Yeah. And what would the economic damage calculation be? Is it like, I'd look at the profits that they've made from that, or the revenues? What sort of economic case should I be making if I'm trying to extract the maximum amount of damages?
2:13:30
Yeah. I mean, under the copyright laws, there are specific rules about what's recoverable.
2:13:44
Sure.
2:13:48
In the case of copyright infringement.
2:13:49
Sure.
2:13:50
And that's the theory here: that it's copyright infringement to use the plaintiff's copyrighted materials to train the large language model. And there's a couple of theories. There's statutory damages, which are a fixed amount per infringement, or there are theories under which you can recover the profits they got from it. But there are various rules about how damages are measured.
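For a rough sense of how those two theories differ, here is a small illustrative sketch, not legal advice. The statutory range per infringed work comes from 17 U.S.C. § 504(c); the class size and per-work figure below are hypothetical, chosen so the total lands in the billion-plus range discussed above.

```python
# Illustrative only, not legal advice. Under 17 U.S.C. § 504(c), statutory
# damages run $750-$30,000 per infringed work (up to $150,000 if willful);
# the alternative theory recovers the infringer's attributable profits.

def statutory_damages(works_infringed: int, per_work_award: int) -> int:
    """Statutory theory: a fixed award per infringed work."""
    if not 750 <= per_work_award <= 150_000:
        raise ValueError("award outside the statutory range")
    return works_infringed * per_work_award

def profits_damages(attributable_profits: float, share: float) -> float:
    """Profits theory: the slice of the infringer's profits attributable
    to the infringement (both numbers would be fought over by experts)."""
    return attributable_profits * share

# Hypothetical class of 500,000 works at a $3,000-per-work award:
print(statutory_damages(500_000, 3_000))  # 1500000000 -- billion-plus scale
```

This is why class size dominates the exposure: the per-work award is bounded by statute, so the multiplier is the number of infringed works the plaintiffs can certify.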
2:13:51
Yeah, yeah, that makes a lot of sense. Jordy, Sorry.
2:14:15
Give us a 101 on what it's like to be in an active case or litigation with the federal government. I'm sure over the many years we've talked about it. Was it Palantir or Anduril? One or both of them.
2:14:18
Palantir and SpaceX both sued the US government over contracts that, in their minds, were not fairly bid. They didn't have a correct shot at displacing an existing program. With Palantir it was DCGS.
2:14:36
Yeah.
2:14:52
But let's say you have a client that is considering taking action against the government, suing the government. How do you kind of walk them through it? Obviously there's a bunch of details case by case, but what's the general framework that you operate under?
2:14:52
Look, there are good things and bad things about litigating with the government. We sue the United States government all the time. We had a case relating to Obamacare. There was something called risk corridors, where insurers under Obamacare were assuming liabilities for markets that they had no history with. They didn't have a basis to underwrite it. And there was a term that was put in Obamacare that said, okay, we, the government, will backstop you, the insurance companies. So if your losses are above a certain amount, you know, we'll fund that. Well, that was never funded and that was never paid. And we basically.
2:15:09
So the bill was coming due. The insurers were saying, hey, Mr.
2:15:51
Governor.
2:15:54
You know, interestingly, the insurers, many of them weren't eager to sue the, you know, stick their heads above the parapet.
2:15:55
Yeah.
2:16:01
And sue the United States government. But we got involved and we brought a class action on behalf of a number of insurers. It took some persuading to get insurers to participate. And ultimately we recovered like seven and a half billion dollars. Now, the good thing about suing the US Government is if you get a judgment, it's not like you have to go out and hire investigators and find the assets. If you have a judgment, there's a window in Washington you can take your judgment to and they give you the money.
2:16:01
Get a check for seven.
2:16:30
Like a drive-through? Like a McDonald's?
2:16:32
It wasn't that simple. They appealed and they lost. But once the dust settled, yeah, they give you the check. We have a case now. I mean, we represent Harvard University.
2:16:35
Sure.
2:16:47
Versus the government.
2:16:48
On.
2:16:49
We sued Homeland Security on the ridiculous ban, I thought it was ridiculous, on foreign students, you know, coming to Harvard. So we sued Homeland Security on that. We sued Health and Human Services on the defunding and the cutting off of grants for research.
2:16:49
Yeah, we talked to Andrew Huberman about that.
2:17:05
Yeah. So both those cases we won at the trial court level, and it's up on appeal now. The government is appealing that. So litigating with the government, you've got to assume, you know, they're well resourced. I mean, I don't know how many hundreds, probably thousands, of lawyers there are in the Department of Justice. So they're going to show up, they're going to advance the government's position. They're not just going to lay down.
2:17:07
Can you describe the level of sort of back and forth career paths that happen between Quinn Emanuel and the Department of Justice or the government? Are there folks that you're recruiting from the government occasionally? Are there folks that, that leave and go do a tour to understand what life is like on the other side? Or is it two sort of separate communities?
2:17:32
No, it kind of goes both ways.
2:17:52
It does.
2:17:54
At our firm we have about 25 or 30 lawyers who at one time or another were prosecutors, U.S. attorneys, assistant U.S. attorneys, or high-ranking officials in the Department of Justice. And a job in a U.S. attorney's office is seen, for a youngish lawyer, as a good job, great experience. And those folks are in demand at firms like ours.
2:17:55
Sure.
2:18:19
And then on another level, senior people sometimes, if they're interested in government service and they're connected, because a lot of this does involve political connections, you can get an appointment. We've had people in our firm who have appointments at very senior positions in the Department of Justice. So it can go back and forth. I will say another thing. Right now this Department of Justice is really busy. They're facing all kinds of cases. And you know, based on what I hear and what I read, they're kind of resource strapped.
2:18:19
Yeah.
2:18:51
So I'm not suggesting they're not going to show up and defend, but.
2:18:51
I mean, like, does that mean your associates are getting more calls to come over?
2:18:56
You know, we've had a couple of associates go to work at the Department of Justice in the recent past, but that's always been true. Yeah, it's a good experience.
2:19:02
Yeah. Yeah, that makes sense.
2:19:11
How is, in your view, AI impacting the legal industry today, not in the future? You know, how will the impact kind of evolve over time? But what are you seeing and hearing from colleagues or friends at other firms?
2:19:12
Look, we are wordsmiths, right? We work with words, we write things. And so obviously large language models are something that is going to change how we work. They may change the whole structure of law firms. Big law firms historically are described as pyramids in structure: you have senior people and then you have a lot of junior people. A lot of what the junior people have done in the past can now be done by large language models. It's not like they're going to give you a work product that you can then file with the court and use. I mean, there are hundreds of cases where lawyers have gotten in trouble with courts by filing things that cite cases and laws that don't exist. So that is really, that's definitely a thing. Hallucinations, that's a huge risk factor. And that's really on the lawyer. I mean, there's no excuse for a lawyer to say, oh, I relied on AI. No, you sign that brief, you're responsible. It's your integrity on the line. So we're nowhere near that, I don't think, at least in our practice. We do complex litigation. There may be practices with a lot of repetitive litigation where you can get output that's ready for filing, but we're really not seeing that yet. But we've developed in house at our firm a platform, a methodology for taking the large masses of data that we work with, all the documents that are produced in discovery, all the testimony, all the contracts, and we organize that. It's built on the Claude Enterprise platform, and we've done this as lawyers. It's a system that's proprietary, developed by lawyers for lawyers. I think if you just turn a young associate loose with Claude or ChatGPT, you're not optimizing the technology. But we take all the data and we structure it in a way that matches how lawyers work, so it creates work streams. Like, what do we need to do? We know what we do in every case: we prepare examination outlines, we prepare expert witness reports and the like.
We prepare opening statements. So we structure the data in a way that creates these work streams. And I really think that gives us a big advantage. It's not something engineers have created. It's lawyers knowing what lawyers need, having designed a way to structure the information. And we're using that with great success. I mean, in trials now, in the middle of trial, imagine somebody's on the witness stand. You can ask the AI, what's the best evidence that so-and-so just lied about that?
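The structuring idea described here can be sketched in miniature. Everything below is hypothetical — invented names, fields, and routing rules, not the firm's actual Claude-based system — but it illustrates the core move: tagging case materials into lawyer-defined work streams, so a model is queried over a structured slice of the record rather than a raw document dump.

```python
# Hypothetical sketch of routing case materials into lawyer-defined work
# streams. All names and rules are invented for illustration.
from dataclasses import dataclass, field

WORK_STREAMS = ("examination_outline", "expert_report", "opening_statement")

@dataclass
class CaseDocument:
    doc_id: str
    kind: str                       # e.g. "deposition", "contract", "email"
    text: str
    streams: set = field(default_factory=set)  # work streams this doc feeds

def route(doc: CaseDocument) -> CaseDocument:
    """Assign a document to the work streams it should feed (toy rules)."""
    if doc.kind == "deposition":
        doc.streams |= {"examination_outline", "opening_statement"}
    elif doc.kind == "contract":
        doc.streams |= {"expert_report"}
    return doc

def corpus_for(stream: str, docs: list) -> list:
    """The slice of the record a model would be prompted with for one task."""
    return [d for d in docs if stream in d.streams]

docs = [route(CaseDocument("D1", "deposition", "...")),
        route(CaseDocument("C1", "contract", "..."))]
print([d.doc_id for d in corpus_for("examination_outline", docs)])  # ['D1']
```

The design point is that the routing rules encode what lawyers already know they will need in every case, which is what distinguishes this from handing an associate a raw chat window.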
2:19:31
Wow.
2:22:14
You press the button and you get a list back. Most of those things you will have thought of. Some of them make no sense, but there'll be a couple of gems in there.
2:22:15
Yeah, yeah, yeah.
2:22:24
You know, lines of attack that you might not have thought of.
2:22:24
That's great.
2:22:27
That's extraordinarily powerful.
2:22:28
Yeah.
2:22:29
So our goal is to get to a point where the AI yields a work product that's like 80% or 90% there,
2:22:30
which is what an associate is typically doing today. Right. It's not like the best associate is hitting it out of the park with every single output; they're getting a good solid chunk of the way there.
2:22:38
You're absolutely right. And so lawyers can focus on what they do best, making sure that last mile, the last 20%, the last 10%, is as good as it can possibly be.
2:22:50
How do you think this affects the job market for lawyers at the early stage of their career? Because in some ways, yeah, their work might be being replaced. But at the same time, given that AI is very good at generating words and will be able to generate entire lawsuits, you can kind of imagine a dystopian world where the number of cases that get brought is 100 times higher than it is today.
2:23:02
I think that's true. And that's something that people don't talk about a lot. There are AI-native companies out there that essentially identify claims. They'll have a database that has information on businesses of all kinds: what licenses do they have? What licenses don't they have that they should have under the law? You can just imagine, if you can boil the ocean, and they will, these companies. You can subscribe to them and they'll serve it up: here's a class action for your consideration. We've identified the claim, and here it is. So I actually think there's a potential that we may see more litigation as a result of AI. On the other hand, I think resolution of cases may be faster, because both sides can understand the merits of the case on each side quicker and reach a resolution sooner.
2:23:35
Yeah, that makes a lot of sense. Last question from my side. Can you tell me a little bit about how you perceive the battle of the forms and the level of detail that contracts might be entered into? I'm thinking of this news with Anthropic and the Department of War, about these two lines: domestic mass surveillance, fully autonomous weapons. And in every contract there's the question of, do you write it in one page or 100 pages? And I think most lawyers will tell you, I can do either. And they have different flavors and they have different subtexts, and it depends on the type of relationship that you're forming. And so I'm wondering how much you think about that when you're communicating in a legal context in something that's going to be binding: what level of legalese you want to use, what length of document you want to use. How important is that in communicating?
2:24:30
I think it's super important. I mean, it doesn't matter, everybody. You close the deal, you have a closing dinner, people shake hands, they have a drink, and as long as things are going fine, everything's fine. Right? You never hear about it, and the legalese doesn't matter. Then a couple years down the road after an acquisition, joint venture, whatever, circumstances have changed. One side has a different goal, they want to go in a different direction. Can they go in that direction? Or they're not performing, they face headwinds. It's only then that people start to scrutinize the language and say, wait a second, can they get.
2:25:26
What did we actually agree to?
2:26:02
Yeah, can they do this? What are our options on both sides? If you want to get out of a deal or you want to create some leverage, you're going to scrutinize that language. And that's why we think it's a good idea. It's going to sound totally self promoting. But we have clients, major global private equity clients, who ask us to do this. Especially if it's a novel structure that hasn't been tested before, something you haven't done before. Take a look at this and pressure test it for me from a legal risk perspective.
2:26:03
Sure.
2:26:39
You know, and I think it's especially valuable to have a litigator's eyes on the problem because, look, we live in a world where 24/7, something's gone wrong. Yeah. So day in, day out, we're dealing with the very kinds of situations you're talking about. So a lot of times we look at these things with a jaundiced eye, and a lot of times we can look around corners and see how a devious counterparty might try to take advantage of this situation. So I guess the answer to your question is: it doesn't matter until it does. And because of our experience as litigators, we can look at things and kind of see, oh, there's a potential loophole here. It might be a simple agreement. Like, we had a case, an arbitration in Asia. I won't get into the specifics because it's confidential, but basically one side had an option, you know, and there was a method for determining what the option price would be.
2:26:39
Sure.
2:27:40
And then it was kind of like a baseball arbitration. One side would give one number and the other side would come with theirs, and there'd be a neutral that figured it out. The other side's quoted option price was way out of bounds, like totally unreasonable. And then they said, your turn. And what do you do in that situation? If you respond to that, treat it as a good-faith offer and respond, then you've sort of bought into going down that road. And that seemed to be what the language of the contract on its face, you know, authorized, allowed. Exactly. And our advice was no, don't respond, don't respond. And so they went to the arbitrators and said, look, they defaulted. They didn't respond. There's only one price on the table and it's ours. But ultimately the decision of the arbitrators was, no, you didn't act in good faith. So they lost, and they were stuck in that investment. Ultimately they got out; they were so eager to get out of the investment, they had to come to terms.
2:27:40
Yeah, makes a ton of sense. Well, thank you so much for taking the time to come chat with us. Thank you.
2:28:44
And thank you.
2:28:48
It was fantastic. Enjoy the clip, of course.
2:28:48
Thank you for keeping us.
2:28:52
Keeping us fitted.
2:28:54
Great to see you again.
2:28:56
See you around.
2:28:57
Great to see you.
2:28:57
Let me tell you about public.com: investing for those who take it seriously. They've got stocks, options, bonds, crypto, treasuries and more, with great customer service. And let me tell you about Fin AI, the number one AI agent for customer service. If you want AI to handle your customer support, go to Fin AI. And without further ado, we will begin the Lambda Lightning round. Here we go.
2:28:58
Pivot.
2:29:22
That's camera over there. It's turning all the way around. It followed John Quinn out of the studio.
2:29:23
It was missing him already.
2:29:30
And now you see that the Lambda Lightning round has begun and we will.
2:29:32
We got a banger to kick it off.
2:29:35
An absolute banger. We have Michael from WorkOS. There he is, Michael from WorkOS. He is the co-founder and CEO.
2:29:36
What's happening?
2:29:43
How you doing, gentlemen?
2:29:44
Great to see you. How's it going?
2:29:46
Great to see you.
2:29:47
Great to see you. It's been. What has it been, a couple months?
2:29:48
It's been almost six months. Last time we had you on, we were hanging out with Satya Nadella on a historic day. Also another crazy day to join. But since this is the first time joining remotely, give us the update. Kind of reintroduce yourself for everyone.
2:29:51
Yeah.
2:30:05
So I'm Michael Grinich. I'm the founder and CEO of WorkOS, and we are an infrastructure company that helps other software companies with enterprise features in their app. Not exactly the thing people get the most excited about, but it's the underlying infrastructure that allows people to sign in to things like ChatGPT and Anthropic and Perplexity with their company account, with their enterprise account. So we often say that we help developers make their app enterprise ready. And today we're really thrilled to announce our Series C: we've raised $100 million.
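For listeners unfamiliar with what "sign in with their enterprise account" involves: enterprise SSO typically begins with an OAuth-style redirect to an identity layer, which hands the user off to their company's identity provider and returns an authorization code. A minimal stdlib-only sketch of that first step follows; the endpoint and parameter names are illustrative, not WorkOS's actual API.

```python
# Illustrative sketch of the redirect that kicks off enterprise SSO.
# The base URL and parameter names are invented; real SDKs wrap this step.
from urllib.parse import urlencode

def authorization_url(base: str, client_id: str,
                      redirect_uri: str, organization: str) -> str:
    """Build the URL the app redirects the user to for company sign-in."""
    params = {
        "client_id": client_id,          # identifies the app
        "redirect_uri": redirect_uri,    # where the auth code is sent back
        "organization": organization,    # picks the customer's identity provider
        "response_type": "code",         # standard OAuth authorization-code flow
    }
    return f"{base}?{urlencode(params)}"

url = authorization_url("https://sso.example.com/authorize",
                        "client_123",
                        "https://app.example.com/callback",
                        "org_acme")
print(url)
```

The `organization` parameter is the enterprise-specific part: it routes the user to their company's identity provider instead of a generic username-and-password screen.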
2:30:06
All right, we got a mallet, we got a gong to hit. Let's see. Let's see. Do we have the power?
2:30:38
It's like Thor reaching.
2:30:47
There it is, there it is.
2:30:48
There we go.
2:30:49
Look at that.
2:30:49
Here we go. 100 on 2 billion.
2:30:54
That's right, that's right.
2:31:01
That is some great dilution from your side.
2:31:02
We haven't raised money in over four years. I think a lot of folks kind of forgot about us, to be honest. As a company, we were started back in the beginning of 2019, so it's been over seven years we've been building the company. Really, the news today, what's new with us, is that the last year or two has been all about AI, and we have found ourselves supporting all these AI companies as they're just explosively growing. So whether it's OpenAI selling into the enterprise, or Claude growing like crazy over the last few months, or even Cursor last year, which kind of came out of nowhere and took over how people write software, WorkOS is powering all of these. And we're helping all of them take the functionality that they've landed in AI and actually enable them to go sell it into these big customers that otherwise they wouldn't be able to get, and essentially unlock revenue.
2:31:06
So over the last few years, you joked that people forgot about you because you weren't raising. What were you doing? I'm assuming you were at different points, probably turning down investor interests. What was your kind of mindset and leading the team through that period?
2:31:58
Yeah, we were just building. I mean, I think at the end of the day, to build infrastructure like we've created, there's no real shortcut for it. You just have to spend day after day, week after week, month after month solving tens of thousands of small edge cases. Sometimes I say WorkOS is kind of like a ball of edges; the whole thing is made of edge cases. There's not really a way to just quickly ship it. It's in some ways the anti-YC company: it's the thing you can't build in a couple months and launch. It takes months and months and years and years to develop. Early on we had a lot of amazing customers. We powered auth for folks like Vercel and Webflow and Carta, kind of the previous era of cloud SaaS we all know about. What we've seen with the AI stuff is it grows faster, the companies are adopted sooner within the enterprise, and all of this AI functionality is actually scrutinized by security and IT people a lot more heavily. The IT people say, no way we can use your AI unless it fits these security policies. And so despite being started in the pre-AI era, WorkOS is actually kind of perfectly positioned. We are an AI business today without having been started that way.
2:32:16
Yeah. How have you processed the overall SaaS apocalypse narrative, or just vibe coding in general? In many ways, I feel like WorkOS can kind of deliver the dream of vibe coding, which is: maybe there's some specific functionality you want to build, and you want to be able to sell it into the enterprise. But pre-WorkOS, maybe that wouldn't have been possible, because you have all these edge cases that enterprises are going to care about. But how have you processed the entire narrative?
2:33:24
Yeah, when people are vibe coding stuff, you typically don't want to vibe code the security part of your application. You might vibe code the features, the UI; AI code gen is so good for doing UI engineering. But when it comes to stuff like authentication or permissions or things around compliance or auditing, that's maybe not the place you want to apply AI. And we've seen that from a lot of our customers, including the AI labs that are building the models to do this stuff. They're even using WorkOS. What we've seen from a lot of other businesses is there's no time to build it in house. In the previous era, you might have had a few years to figure out enterprise. If you go back and look at Dropbox or something, they built for many, many years before they did enterprise. Today, what's happening is these AI companies go after enterprise pretty much immediately. Within the first year they have to go upmarket, and so there's no time to build it. They just turn to us and we ship it for them.
2:33:59
So I mean truly, how low is the bar? Like, if I'm in YC and I vibe code a piece of software that's like an MVP, but it works, is WorkOS accessible to the point where it's self-serve, free? Like, how low is the bar?
2:34:55
Absolutely. So we have an amazing free tier. It's free up to 1 million users.
2:35:13
Wow, that's a lot.
2:35:17
So maybe you even get a million users while you're in YC. But as you keep growing, what we charge for is the enterprise features. And really, we try to align our pricing model with you as a customer: as you go close enterprise deals, you pay WorkOS. It's kind of like Stripe, you know, you pay Stripe money when you go make money yourself. But it's so easy to use. We have people that integrate WorkOS in less than an hour or two, and they're out there selling enterprise essentially as soon as they have demand. Yeah, and actually just last month we shipped a new capability that's even faster to integrate because it uses AI. So AI is accelerating our customer base, and it's also accelerating how fast you can adopt and use our functionality. You essentially just run a command and it installs WorkOS.
2:36:00
Do you have a KPI around that? In terms of integration timelines, it seems like they're getting shorter. But what are we talking about? Days of developer time?
2:36:00
Weeks?
2:36:08
If you run our CLI installer, which uses AI to install, it takes roughly seven to eight minutes. So it's super fast. And I've been doing this trick: whenever I do a sales call with a customer that's interested in using WorkOS, I essentially kick off the call and say, hey, go try this, just run it in your terminal in the background, we'll come back to it. And after I click around the dashboard a little bit, essentially you have a POC that's ready to go. In the past, we would still integrate fast, but we would have to talk to their engineering team, get on the roadmap, do an architecture call. We have an amazing team of solutions engineers and developer success folks that helps plug stuff in. But I think what we're finding is AI is this accelerant, not just in terms of market adoption, but in making software easier to use and integrate. We even have people that migrate using AI. They might be using a different solution or have a homegrown thing; you plug Claude or Codex into it and just rip through it. It's really wild. Yeah.
2:36:08
What now? Is the job finished or job's not done?
2:37:02
Absolutely not done. In some ways, it's kind of like a new moment for the company. I told our staff at the beginning of this year: if I were to start WorkOS today, what I would build is WorkOS for AI, and specifically for AI agents. So that's what we're building going forward. If you think about agents in the world, and a lot of people talk about agents in different ways, really what they're doing is displacing people doing work, or enabling people to do more. I think Ivan from Notion has said agents make you a manager of infinite minds, the ability to go control and adapt all these different systems to do work for you. The problem with agents is that if you're spawning all these workers to go do things for you, they need permissions, they need access to data. And an agent isn't useful if you can't connect it to all your different stuff, and you can't do that securely. The last thing you want is an agent running wild. I think there was a story on Twitter a week or two ago of an agent, an OpenClaw instance, going rogue and deleting a bunch of email.
2:37:06
If you guys remember that, it was like Amazon also had
2:38:02
an alignment or something.
2:38:08
It's really wild luck.
2:38:10
So you definitely don't want that for your personal account, but it's completely disastrous if that's happening inside your own systems or, God forbid, to your customers. That's the problem with agents: they're extremely powerful, but we need to give them different types of permissions and guardrails, with identity on top of it. That's the new thing we've been building at WorkOS with some partners: a new identity fabric, as we're calling it, that sits across everything and allows people building agents to have the connectivity, but also the security and trust that's demanded by their customers. We hope it will act as an accelerant further into the enterprise and help more companies building AI do it in a way that's safe.
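To make the idea concrete, here is a minimal sketch of the core concept behind agent guardrails as described above. This is not WorkOS's actual API; the grant names and function are invented for illustration: every agent gets its own identity with explicitly granted scopes, and every action is checked against them, deny-by-default.

```python
# Hypothetical illustration (not WorkOS's real API): each agent identity
# carries an explicit set of granted scopes.
AGENT_GRANTS = {
    "support-agent": {"tickets:read", "tickets:reply"},
    "billing-agent": {"invoices:read"},
}

def is_allowed(agent_id: str, action: str) -> bool:
    # Deny by default: unknown agents and ungranted actions are rejected.
    return action in AGENT_GRANTS.get(agent_id, set())

assert is_allowed("support-agent", "tickets:reply")
assert not is_allowed("support-agent", "email:delete")  # no rogue deletions
assert not is_allowed("unknown-agent", "tickets:read")
```

The point of the sketch is the deny-by-default check: an agent that goes "rogue" can only ever execute actions someone explicitly granted it.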
2:38:12
Amazing. Last question for me. You mentioned the integration engineer role; it sounded almost like a forward deployed engineer. How is that role changing at WorkOS now?
2:38:50
Well, everything at WorkOS is using AI. Our customers are using AI, we're using it, every single role. We have sales reps building stuff with Claude Code.
2:39:02
Sure.
2:39:09
You know, our finance team doing stuff, hackathons going. So it's definitely impacting the people working with customers every day. I think the magic of Claude Code and the forward deployed engineer is it essentially turns one person into a whole team. If you go sell product to a bank like Deutsche Bank, or a giant company like Microsoft, you'd be able to afford having 10 or 15 engineers go sit in their office and write code, literally post up with your laptop and build stuff for them. That's the original solutions engineer, the forward deployed engineer. What AI lets people do today is have that same experience but with a tiny company doing it, like WorkOS. It lets us do that level of consultative, impactful work in your organization. The first step, when we run our AI installer for customers, what it's actually doing behind the scenes is looking at your code base and building a plan. It's doing an architecture review by analyzing your structure and recommending the best course of action going forward. And I think we're just scratching the surface on that. It's so, so early in terms of all this. At the end of the day, it just makes people integrate faster, go to market sooner, ship faster, and move upmarket sooner. We talk about WorkOS as the thing that helps make your product enterprise ready. It's really an API that unlocks revenue and expands your TAM as an organization. When these companies go through that transition moment, say they're just getting product market fit and expanding, the last thing they want is to be slowed down by the lack of these enterprise features when they have a big fish on the line. What WorkOS does is turn that on almost immediately. So if it's seven or eight minutes,
2:39:09
the big fish company of San Francisco.
2:40:50
I love it.
2:40:52
That's right.
2:40:53
That's right. Yeah. It's like a fishing line. It's like, we'll help you get it in the boat faster.
2:40:53
Yeah.
2:40:57
Well, thank you so much for taking the time. Awesome.
2:40:57
Great. Great to get the update. Congratulations to the whole team and we
2:40:59
will talk to you soon.
2:41:03
Thanks again. Great to see you guys. Take care.
2:41:03
Bye.
2:41:04
Let me tell you about MongoDB. What's the only thing faster than the AI market? Your business on MongoDB. Don't just build AI, own the data platform that powers it. Our next guest is Adam Simon from IPG Media. He's in the Restream waiting room. Now he's on the TV. Adam, how are you doing? Hello. Welcome.
2:41:05
Sorry, guys. Hold on.
2:41:24
No, you're good.
2:41:25
There we go. Now I can hear you.
2:41:26
It's happening. Great to meet you.
2:41:28
Thanks so much for hopping on.
2:41:30
It's great to meet you. Thanks for having me on.
2:41:31
Please introduce yourself. Give us a little background.
2:41:33
Yeah.
2:41:37
I am Adam Simon. I spent a decade as one of the top innovation executives at a global media agency, which basically means I got paid to be wrong about the future just a little bit less often than everyone else. I am out there consulting on where entertainment is going next, which in my opinion, is off the screen and into the real world. And I'm working on a book called the Immersion Economy about how immersive technology is set to supercharge the experience economy.
2:41:37
Okay, what did you get right when you were predicting the future?
2:42:05
Oh, good question. What I got right: I think that Netflix would win in streaming, and I still maintain that after last Friday's news. You guys covered this earlier, but I think Netflix is going to come out on top in this deal when we look back a few years from now. That was something I called early. I also called that they were going to pivot into an ad-supported tier at some point, as well as into gaming. I think the Netflix behemoth will just keep growing.
2:42:10
Yeah, walk me through a little bit more of the immersion economy, off the screen. That could mean a Taylor Swift concert, that could mean an ice cream museum. There's a million things that happen off the screen. How do you describe the territory?
2:42:39
Yeah, it is all of those things. In my mind, the exciting part is all of those experiences that are already getting, or are about to get, upgraded with technology. Some of that looks like immersive technology: it looks like the Sphere, it looks like what Cosm is doing, which I think is super interesting, especially around sports and performances. But I also think it means AI personalization in the physical world. We spend a lot of time talking about how AI is impacting software; you guys certainly spend most of your days talking about that. But with AI colliding with the real world, robotics is the place most people go, which is understandable, it's super exciting. But I think there's an interim step before we see AI-powered robots everywhere, which is just our physical world being a little more responsive to us as individuals. If you think about the physical world reacting to you the same way your feed on social media might, and starting to personalize itself, that triggers some nightmare scenarios. But I think there's also an opportunity to do some really cool, fun and creative work there.
2:42:55
Okay, so I'll give you a specific example. We were in Montana this last weekend at an ice race, the first event they've done, where basically they get a bunch of cars around an ice track, you can take cars out and do some laps, and there's a bunch of spectators. The event was incredible. My only criticism is that there was kind of one screen over in one of the spectator areas, so in the area we were in, you were 100% reliant on your own vision to experience the event, which was cool, but at the same time there were certain angles and a bunch of missed context. At an event like that, how are you thinking about personalization in the real world? Is it that we have smart glasses on and you have your own audio feed? What does that actually look like in practice at something like an automotive event?
2:44:03
Yeah, I think that is a great example. To your point, we don't even need to wait for augmented reality glasses. When they do come, there will be some really interesting, exciting things we can do, but just the ability to customize an audio feed, you could do that with AirPods right now. Lots of people already have them in their pockets. Through people's smartphones, we could be broadcasting customized audio feeds to you through your AirPods, just as one example. An under-discussed feature of the Sphere that I find really interesting is that they have beamforming speakers, so different sections of the Sphere can technically be hearing different audio. They haven't really used this yet, to my knowledge, but the way they talk about it is that you could have different language tracks beamed to different sections of the venue. I think that's the kind of thing we're seeing built into lots of venues. Obviously that's not a personalized, one-to-one experience, but you can see how it starts to move us closer in that direction, using technology that is not exactly new to enhance the experience and make it more accessible for more people.
2:45:01
How are you thinking about VR? We had Ben Thompson on our show earlier today. He's been pushing aggressively for Apple to just let him watch any NBA game. He basically figured out it's like $40,000 worth of hardware that would need to be installed at the different stadiums, and then you could just watch. He's like, I don't need a separate live stream and commentary, because if you just beam me into the stadium, I can hear the commentary that's happening and I can look up at the scoreboard. I really just need that video feed.
2:46:10
Yeah, I think long term there are some cases and some users who are going to want the more highly produced, edited experience that Apple is leaning into. I know Ben is super against that; he just wants the static camera, pretend that I'm there. But I think he's right that Apple's biggest problem right now is scale, and just having the volume of content that you could get with static cameras. Sports is great and I'm sure would sell a lot of headsets, but also think about concerts and stage plays and anyplace else where being a fly on the wall, or a butt in a seat, you can't get to physically or don't want to, but you can get there with your Vision Pro or your Meta Quest or what have you. I think there's real opportunity there longer term. I actually think that venues like the Sphere, and Cosm in particular, are going to create a pipeline for that kind of immersive content that will be able to go straight to headsets. If you think about what Cosm is doing with the NBA, they basically have that camera already there courtside, and it's creating that immersive "shared reality," as they call it, experience in the Cosm venues. There's no reason you couldn't port that over to a Vision Pro right now. So we're starting to see pieces put in place that are going to create a really interesting flywheel for immersive content. And I sort of think about venues like Cosm as a kind of middle-class experience: if you have the money and the capability, you're going to go there in person, of course, but you're only going to do that for a few events a year, even if you have deep pockets. Then there's the at-home viewing experience, which might be upgraded if we can watch in immersive environments. And then you have the option to go someplace like a Cosm venue to get that halfway experience.
I can still go out with my friends and be social and have a close to courtside experience without shelling out to fly wherever the game is.
2:46:44
Yeah. Are there a lot more Sphere and Cosm-type experiences coming down the pipeline? They're super capital intensive, so they're high risk, but from my view, the response to both has been great, and Sphere seems to be doing pretty well in the public markets as of late.
2:48:45
Yeah, I think the Sphere caught everybody by surprise a little bit, because that first year there were lots of headlines about how much money they were losing, how much it costs not just to build but to operate. But from everything I can tell from the outside, they look like they have found a path. Look at what they did with The Wizard of Oz, which I think was a surprise to everyone in Hollywood: it is really successful. They basically made a fan experience for The Wizard of Oz, which gave them a reason to charge you over $100 a ticket to see a movie that you've probably seen dozens of times before. I don't know if it's streaming anywhere for free, but it used to be on television all the time, right? It's the classic movie that was constantly available, and yet people are shelling out over $100 a ticket to go see it. They've sold over 2 million tickets to that already. So it's not exactly what they set out to do, but it's an interesting new path for them and clearly a source of revenue.
2:49:08
Yeah.
2:50:07
Brian Chesky from Airbnb was on the show fairly recently talking about how he's quite bullish on IRL in the age of AI: the online world is getting very wild and intense, and you can imagine some people just deciding, I'm going to log off, go somewhere, get away. I can see that. But I can also see the other side, which is that the online world continues to get more and more entertaining, and some people just say, I don't need to go take this trip or go to this experience, because I'm perfectly entertained at home. What indicators are you looking at to understand whether AI is actually disrupting IRL experiences? Would it show up as, say, Disneyland visits dropping, which obviously could be down to a number of factors? Where are you trying to see whether there might be some disruption?
2:50:10
Yeah, I would be looking at things like live sports and concerts to see whether ticket sales are going down. Everything we've seen to date is quite the opposite: people are spending like crazy on experiences. Everybody thought it was revenge spending after the pandemic, but that has not played out. Everybody is very excited to, at least sometimes, close their laptops, put their phone in their pocket, and go out and do something fun. People want to be out there having experiences and creating memories, and you're not going to have the same sort of experience looking at a screen. There's also some interesting neurological research I've been following on how we sync up during communal events. It's really fascinating: we create stronger memories and form stronger social bonds when we're in person, and it actually has a biological component. It's not just the emotional aspect of being there together. So maybe the pandemic was the catalyst where we realized what we were missing, but I'm pretty bullish on demand for real-world experiences, even technology-optimized and enhanced ones. Just because we're out in the real world doesn't mean the tech goes away; it changes and evolves. And I think that's where a lot of the exciting developments, for AI but also for other technologies, are going to be in the next decade.
2:51:06
Well, we've got to do this one in person with a live audience, because neurologically I feel like we'll have a better time.
2:52:49
Yeah.
2:52:56
There's also the reality that many people are chasing IRL experiences to have something to show back on social media. I try to remind myself of this: we were at this car event, and everywhere you looked there were cars that, if I saw them on the street, I would just stop whatever I was doing. I was trying to force myself to actually take it all in versus taking pictures. But certainly there were some people who were hardly processing it live, because they just wanted to share everything they were seeing.
2:52:56
And honestly, that is the thing I'm most excited about with glasses, whether Meta Ray-Bans or what Apple's going to release later this year or early next year. Having the ability to capture content but stay more present in the moment is, I think, the best marketing case for those products. Because you're absolutely right: we want to be able to capture these things, but I think we've tilted a little bit toward doing things just for the sake of capturing them rather than actually enjoying the experiences. And look, I think everybody knows it even while they're doing it; to your point, you're aware of it. So if there's the right catalyst, it'll get people off their phones and more focused on what's around them.
2:53:33
Awesome.
2:54:11
Totally.
2:54:12
Well, thank you so much for taking.
2:54:12
Yeah, great to meet and come back on anytime.
2:54:13
Have a good one.
2:54:17
Awesome. Great to meet you. Thanks, guys.
2:54:17
Let me tell you about AppLovin. Profitable advertising made easy with Axon AI: get access to over 1 billion daily active users and grow your business today. We have been keeping our next guest waiting. We have Matthias Wagner from Flux AI. He's the founder and CEO, and he has some exciting news for us.
2:54:19
What's happening?
2:54:36
Welcome to the show. How are you doing? Please introduce yourself.
2:54:37
Hi guys, thanks for having me. My name is Matthias. I'm the founder and CEO of Flux, and we're building the first AI hardware engineer.
2:54:41
Amazing. And give us the news. Today you raised some money. What happened?
2:54:46
Yeah, we closed the Series B. We raised a total of $30 million in new capital across.
2:54:50
Congratulations.
2:54:58
Led by 8VC. So, you know, super stoked, stepping on the gas, and yeah,
2:55:03
talk about the key problem, the key product where you are in development, rolling out, getting product in the hands of customers.
2:55:09
Yeah, good question. So, I mean, you've all heard it before: hardware is hard. And we felt like that's learned helplessness. We are taking the hard out of hardware, right? So if you think hardware was
2:55:17
just "ware" now. It's just easy "ware."
2:55:30
But we're starting with electronics, because it's a very standardized form factor and it's really at the heart of anything worthwhile building these days: computers, robots, what have you. The insight here was that if you think about how easy it's become over the last two or three decades to make software, especially now with the AI boom, hardware hasn't gotten any easier to make, even though the supply chain is incredibly available now. From anywhere in the world, for like $20, I can get a PCB manufactured in China and sent back in seven days, fully assembled. You couldn't do that 20 years ago; you had to be Lockheed Martin or Apple or a big tech company to afford it. But now everybody can do that, yet the design tools just don't exist and are not accessible. That's what we're solving for.
2:55:34
So talk about the inputs and the outputs. Obviously at the end I get some sort of PCB spec. Is that just a CAD file or a PDF? And then what am I actually putting in? Is it just software that then I want to be translated into hardware? At what level of abstraction are we operating here?
2:56:22
Yeah, good question. So look, the vision is: think about ChatGPT, where you can go from a prompt to a poem, or Claude Code, where you can go from a prompt to a bug fix or a feature. With Flux, ultimately we want to enable you to go from a prompt to an iPhone-class device. Now, that's a long road. Where we are today is that boards of small to mid complexity you can probably single-shot, and everything else is going to be more iterative. Again, like Claude Code, Devin, Cursor, all these tools: very, very comparable.
2:56:37
Sure, that makes sense. Who's the best customer for you right now, and where do you see that going? Are you going after startups that are iterating really quickly, where this speeds them up, or large organizations that are already churning out tons and tons of PCB designs?
2:57:08
Yeah, it's a good mix, but most customers are SMBs. What we're seeing is that only about 25% of our customers have previously made a PCB themselves. The others are technical: mechanical engineers, firmware engineers, industrial designers. But they didn't have the time, patience or resources before to make custom boards, so they would previously buy OEM boards. A favorite example: we have a customer that makes snack and beverage vending machines, and they would previously buy four or five off-the-shelf boards and integrate them into a single machine. Think about the board that powers the display, the payment, the motors, and so on. Now they can make a single custom board with Flux that's much cheaper to integrate into the machines on the assembly line. If the machine breaks down in the field and they're standing there somewhere in the rain, there's one board to replace. And the board costs pennies on the dollar compared to before, because what you're paying now is material cost to a manufacturer in China that can do it for you. That's the main driver, I think.
2:57:24
What's your favorite printed circuit board or favorite product that has a PCB in it?
2:58:25
Ooh, the Space Shuttle.
2:58:30
The Space Shuttle, that's a good answer.
2:58:32
No, I looked this up the other day. The Space Shuttle, I think, had one of the first boards with eight layers or so at the time. And it was hand drawn; they didn't have software for it at the time, which is crazy if you think about it. So yeah, it's a fun anecdote.
2:58:34
That's a great one. Jordy. Anything else?
2:58:50
Wild, wild, wild. Have you seen anything exciting on the US side, on the supply chain side,
2:58:52
on actual PCB manufacturing, I feel like
2:59:01
we had somebody on the show last year that was trying to make PCBs.
2:59:04
We have SendCutSend now for machined parts and things that you need cut. Is there a great company in America that's starting to think about PCBs?
2:59:08
Yeah, if we had our own Shenzhen, I imagine it would have a variety of manufacturers like this, because you want to get to the point where you can sit next to the manufacturer and really speed up that iteration cycle.
2:59:18
Yeah, yeah. I mean, look, a lot of people like us want that. We looked into the details of opening our own fab here in Fremont or so, but you realize quickly that making PCBs, the board itself, is a chemical process. Good luck getting the permits. It's possible, but it's hard. So these are the bumps you run into. But I think there are exciting opportunities here, especially because if you're designing these boards with AI, you can also optimize for the capabilities and the inventory you have at the factory. The way to make this cheap is to design the board so it can be assembled automatically by machines. But for that to work, the pick-and-place machines have to have all the components you need on their rails, and that's difficult for humans to optimize for, because it's a quickly changing thing. But as we're automating all this, we can have the models design toward what the factory can spit out in an hour for almost zero cost.
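The optimization he describes, designing toward what's already loaded on the factory's pick-and-place rails, can be sketched very simply. This is a hypothetical illustration (the part names, slot names, and `pick_parts` function are invented, not Flux's actual system): each design slot lists interchangeable candidate parts, and the selector prefers one that's already on the machine, so the board can be assembled with zero reel swaps.

```python
# Parts currently loaded on the factory's pick-and-place rails (invented data).
on_rails = {"RES_10K_0402", "RES_10K_0603", "CAP_100N_0402", "MCU_STM32G0"}

# Each design slot lists acceptable interchangeable parts, best first.
slots = {
    "pullup": ["RES_10K_0402", "RES_10K_0603"],
    "decoupling": ["CAP_100N_0402", "CAP_100N_0603"],
    "mcu": ["MCU_STM32G0"],
}

def pick_parts(slots, on_rails):
    chosen = {}
    for slot, candidates in slots.items():
        # Prefer the first candidate already on the machine's rails;
        # otherwise fall back to the designer's top choice.
        in_stock = [p for p in candidates if p in on_rails]
        chosen[slot] = in_stock[0] if in_stock else candidates[0]
    return chosen

print(pick_parts(slots, on_rails))
```

A real version would weigh price, tolerance, and footprint too, but the core idea is the same: treat factory inventory as a constraint on the design, not an afterthought.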
2:59:30
That's amazing. Congratulations. Thank you so much for taking the
3:00:29
time to come talk to us. With this pace, I'm sure you'll be back on this year.
3:00:32
Yeah.
3:00:36
This is amazing.
3:00:36
Yeah.
3:00:37
All the progress.
3:00:37
I hope so.
3:00:38
Have a great rest of your time. We'll talk to you soon. Let me tell you about Labelbox: RL environments, voice, robotics, evals and expert human data. Labelbox is the data factory behind the world's leading AI teams. Up next, we have some exciting news from Zach from Cal AI, co-founder and CEO. Oh, is he not here?
3:00:38
We are getting him set up.
3:00:58
Sorry about that.
3:01:00
We can head.
3:01:01
We can head to the timeline, because we have an incredible announcement from Riley Walz, the new OpenAI staffer who launched Payphone Go. It's effectively Pokemon Go, but for payphones. Payphones are strangely still licensed in California, he says, "so I filed a FOIA request and got the full list. Naturally I made a game." So you can now play. You create an account and get your unique player ID, a nine-digit number. Then you use the map to locate one of California's payphones; some are easy to find, some are not. You pick up the receiver and dial this phone number: 888-683-6697. It's toll free, so no coins required. You can go to any payphone you can find, call in, enter your player ID, and collect them. You have to catch them all: all 2,203 payphones in California are at stake. And Riley is on another absolute generational run with these projects. I love these, they are so much fun. Go check it out, and congrats to Riley on another banger project.
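The game mechanic described above reduces to a tiny bit of bookkeeping. Here is a toy reconstruction (all names, numbers, and the `check_in` function are hypothetical, this is not Riley's actual implementation): each payphone on the FOIA list has a known number, a caller dials the toll-free line from one, keys in their nine-digit player ID, and that payphone is marked as collected for them.

```python
# Stand-in for the real list of 2,203 California payphone numbers.
PAYPHONES = {"415-555-0101", "213-555-0199"}

collections: dict[str, set[str]] = {}  # player_id -> payphones collected

def check_in(player_id: str, caller_number: str) -> bool:
    # Only nine-digit player IDs calling from a listed payphone count.
    if len(player_id) != 9 or caller_number not in PAYPHONES:
        return False
    collections.setdefault(player_id, set()).add(caller_number)
    return True

check_in("123456789", "415-555-0101")
print(collections["123456789"])
```

Presumably the real system keys off the caller ID of the inbound call, which is what makes physically visiting the payphone unavoidable.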
3:01:02
Well, all right.
3:02:08
I believe we have our next guest. But you.
3:02:09
But Zach is going to be later.
3:02:12
Zach will be later.
3:02:13
So we're going to jump over to what's going on.
3:02:14
How are you doing?
3:02:17
Sorry for the confusion. Hey, how's it going? No problem. Nice to meet you guys.
3:02:18
Good to meet you. Please introduce yourself and the company.
3:02:21
Yeah, I'm Joan Rodriguez.
3:02:25
I'm from Barcelona. Catalan.
3:02:27
I'm the CEO and founder of Quiver AI. And at Quiver, we are building models for AI generation of SVGs.
3:02:29
Okay.
3:02:39
Vector graphics. You can put in a text prompt and get a vector graphic, or you can upload an image and vectorize it.
3:02:40
Love it.
3:02:47
So it's really good for designers, really good for icon generation, logos and so on.
3:02:47
PNG to vector, to SVG, was such a hassle a decade ago, maybe even five years ago, and it's still not perfect. What's the secret? Because I've seen these demos; I think the latest Gemini 3.1 was doing some SVG drawings, and it was pretty good. What's the secret to going superhuman in SVG generation?
3:02:52
Well, that's a great question. It's a pretty challenging problem, and we are tackling it through a new paradigm: code generation. An SVG is actually code in the back, a bunch of code that you compile to get an image out of it. Previous methods were mostly tracing: you get an image and you trace it. But I did this PhD in SVGs and invented methods where you can get really cool SVGs through code generation. LLMs are really good at code, we know this, and I just went nuts in my PhD trying to train these models.
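The "an SVG is actually code" point is easy to see with a minimal example. This is an illustration only (not Quiver's model output; the `make_icon` helper is invented): an SVG is just markup that a renderer compiles into an image, so a code-generating model can emit it token by token like any other program text.

```python
def make_icon(fill="#4f46e5"):
    # A 24x24 icon: a rounded square with a white circle on top.
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">'
        f'<rect x="2" y="2" width="20" height="20" rx="4" fill="{fill}"/>'
        '<circle cx="12" cy="12" r="6" fill="white"/>'
        "</svg>"
    )

svg = make_icon()
# Because the output is structured markup rather than pixels, every shape
# stays individually editable (colors, radii, paths) after generation.
print(svg)
```

This is also why the downstream workflow he mentions works: each `<rect>` and `<circle>` survives as an editable object in Figma or Illustrator, rather than being baked into a bitmap.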
3:03:14
I love it. I imagine part of the benefit is that once you have an SVG, you can export it to Figma or Illustrator and do the final edits. If your output is 99% there, that's much better than getting a PNG that's 99% there, where you've got to go clean everything up, and it takes a lot longer. You can adjust the actual splines. Is that correct?
3:03:52
That's correct. Yeah. You can edit, you can change the colors, you can maybe do animations later.
3:04:13
Oh, sure, yeah.
3:04:18
Gemini is really good for that too. Animations are something that's popping up a lot on design X.
3:04:19
So, yeah, absolutely.
3:04:24
How are you thinking about competition long term? It sounds like you guys can lead in this space, but at the same time I imagine all the labs care about this category. Do you imagine partnering with them or having a different distribution strategy? What does that look like?

I mean, it's exciting to see all the good results. They are also validating our ideas; we have ideas to try, and the big labs are showing really good results in the SVG space. We are seeing that if you just train a big model with a lot of data and a bit of effort, you're going to see really good results in this space. And what we are doing, we are doing our own thing, our own way, with our own RL rewards, one at a time, trying to learn how to do this the right way.
3:04:26
How do you think about the customer? Do you want to integrate with a piece of software that already runs and lives and breathes svg, or do you want to be sort of like this one off tool that lives on a website that artists and creators can go interface with and then just take the assets elsewhere?
3:05:18
Yeah, great question. We started with a consumer approach: we built our website and wanted to see what people do with these tools. But at the same time, we built our API so that other tools can interact with our models and build on top of them. We can see agents already being deployed, we can see MCP servers calling Quiver, and we're seeing integrations with the whole stack of tools for designers. That's what we're aiming for: to be very close to the stack and workflows of designers.
3:05:35
Yeah, this is Christmas for After Effects artists. I've played with a lot of this stuff. It's a lot of fun. Congrats. Jordy, anything else?
3:06:12
Where's the company based? It's going to be based in San Francisco.
3:06:19
Cool.
3:06:23
But you're calling in from Barcelona? That's right, we're in the process.
3:06:23
Last thing, you raised some money. How much did you raise?
3:06:29
Yeah, we just raised $8.3 million in a round led by a16z. Congratulations.
3:06:32
I'm so happy that this is funded.
3:06:40
This is a very, very great use of your PhD time too. It feels like you're obviously just getting started with Quiver, but you've been at it for quite a while.
3:06:41
It's very cool.
3:06:51
Very cool. We will talk to you soon. Have a good rest.
3:06:52
Cheers. Beautiful.
3:06:55
Goodbye. Let me tell you about Vibe.co, where D2C brands, B2B startups and AI companies advertise on streaming TV: pick channels, target audiences and measure sales, just like on Meta. And we have our next guest in the Restream waiting room. Let's bring him into the TVPN Ultradome. Zach, finally. How you doing?
3:06:56
I'm good. How are you guys?
3:07:13
Congratulations. Tell us what happened. I want to hit the gong for you.
3:07:15
Well, thank you. After a year and a half journey building Cal AI, we were finally acquired by MyFitnessPal. Boom.
3:07:18
Thank you.
3:07:30
When did you meet the team at MyFitnessPal?
3:07:32
Yeah.
3:07:34
Yeah.
3:07:34
Break down the deal.
3:07:35
We split.
3:07:35
Yeah, we spoke to them initially back in May. I was talking to a few companies, reaching out to them before going to college because it could have been, if we got the right offer, it would have, I think, made sense. I could have maybe had a normal college life for a little, is what I was thinking. We didn't move forward at that point with anything, but built those relationships and then kept in touch. And when it finally made sense, we moved forward.
3:07:36
That's awesome.
3:07:59
And I think I saw in your announcement, you closed it while you were 18 and you're just announcing it now. Is that correct? Yeah. So we closed it in December when I was 18, and then I turned 19 in January. Brutal. Dude, you're washed now.
3:08:00
3:08:14
I'm here.
3:08:16
It's over. It's over.
3:08:18
Now give us the scale of the business, the monetization. I mean, describe it for those who don't know. And then I want to know, how did you grow this thing? How big did it get?
3:08:19
Great question. Cal AI is an app where you can track your calories just by taking a picture of your food. And we did $30 million in revenue in 2025. Wow. And we just finished January. I didn't check February, but in January we did $5.7 million in revenue. It's a subscription model.
3:08:31
That is incredible. So heavily dependent on iOS in-app subscriptions. Are you on Android as well? How important has that been?
3:08:50
Android is definitely less important. It's maybe a tenth of our revenue. It's pretty heavily just iOS.
3:08:59
Yeah. And then acquisition: is this influencer driven, ad driven? I mean, obviously you're on the charts, but are there viral mechanics? What is the growth engine for the app?
3:09:05
Early on it was mainly influencers, and then as of recently, the last maybe six months, we've primarily continued our growth through paid acquisition on channels like Meta. And so the two combined are really what's driving it.
3:09:20
Makes sense.
3:09:32
What are the key reasons why MyFitnessPal wanted to pick you guys up? I can think of a lot, but what drove this? Well, definitely many, many reasons. We have different core audiences, but we also have overlap, and we're both very mission driven: we both want to help the greatest number of people possible become healthier. And since we have this overlapping space, we realized it would make sense to come together and share resources. It would further the mission for both of us, and it really made sense to partner.
3:09:33
Talk about accuracy. I think there are some people who are skeptical that you can take a picture of food and it can tell you how many calories are in it. How has that evolved over the year or so that you've been running the company? How accurate was it at the start? How accurate is it now? Where do you expect it to go?
3:10:18
The accuracy of Cal AI early on definitely wasn't as good as it is today. We initially started by pretty much just running ChatGPT by itself as the model. You would take the picture, put it into ChatGPT, ask how many calories this is, and output the response. Total victory.
3:10:34
Total wrapper victory. Honestly, congratulations. It's so amazing.
3:10:52
Well, that was like a month. And then we fine-tuned a model and built out a more complex pipeline using a ton of different models for different steps. And so it's become more and more accurate over time. At this point, we average over 90% accuracy.
3:10:56
Yeah.
3:11:10
For each food scan. But you would actually be surprised: only 30% of food logs come from scanning. The rest are barcodes or typing into the database search, and those are the most accurate methods.
3:11:11
That makes a ton of sense. That makes a ton of sense.
3:11:24
Very cool. What are your combined ambitions now? Where do you want to take it? Are you going to be supporting the main MyFitnessPal app as well? I imagine they've seen what you guys can do from a user acquisition standpoint and a product design standpoint and want your help across the org. We're definitely helping any way we can. Some of our marketing systems we're helping them build out, and they're helping us, teaching us things that we didn't know. So together we're going to be able to go a lot further, and we're going to remain standalone apps.
3:11:26
Cool. Well, congratulations.
3:12:09
Congrats. Yeah, congrats to the whole team.
3:12:11
Yeah, you're an actual overnight success.
3:12:14
Truly, truly an overnight success.
3:12:16
But the beginning of a long career. I can tell already.
3:12:18
Yeah, I know. We'll be here when you turn 40. Come back for your third exit at that point, hopefully an IPO. You got the exit at 18. We'd like to see the IPO by the time you're 30.
3:12:20
Awesome.
3:12:39
Congrats.
3:12:41
We'll talk to you soon.
3:12:42
Congrats. Thank you. Talk soon.
3:12:43
Have a good one. Let me tell you about Sentry. Sentry shows developers what's broken and helps them fix it fast. That's why 150,000 organizations use it to keep their apps working. And up next we have our last guest, Andy Markoff from SMAC Technologies. He's the co-founder and CEO.
3:12:45
What's happening, Andy?
3:13:01
How you doing?
3:13:02
Great.
3:13:04
Thanks for having me on. Welcome to the show. First time on the show. Please introduce yourself and the company.
3:13:04
So, Andy Markoff, co-founder and CEO of SMAC Technologies. I like to say I'm a recovering member of a cult known as the United States Marine Corps. I spent the first ten and a half years of my professional career there, got out, and tried to figure out what I wanted to do when I grew up. Luckily, one of the Marines I served with, my co-founder Clint Alanis, got out too, and we both decided to work on building the decision making tools we wish we'd had when we were running the kill chain during our careers as Marine Raiders at MARSOC, and to find a way to give back and serve again after we'd both gotten out. We talked about this idea in the fall of 2023 and started the company in January 2024. What SMAC is: we are the first frontier AI lab solely focused on building for national security, and our mission is what we would call decision dominance. How do we take petabytes of sensor data, multimodal data streams that are too much for a human to actually analyze, and convert that analysis in real time into the right decisions across the range of military operations, whether that's figuring out what we're doing for the next one to six months, the next one to four days, or literally right now? And how do we make that process seamless, when right now it's a pretty siloed and slow system?
3:13:09
What are you building on top of? It feels like Palantir exists, there are databases. It's not like technology was just invented, but AI is obviously new. So how are you integrating with existing systems? What are the tailwinds here that allow you to actually deploy systems effectively and quickly?
3:14:25
3:14:43
Yeah, I think, I mean, at the end of the day, we are not building on top of another AI model. We're building our own model.
3:14:45
Yeah.
3:14:49
And I think the reason for that is that general purpose large language models are very useful. They're very good at analyzing massive amounts of textual data, and they have a lot of really important use cases, but they are not the right tool for 80% of military decision making. Some of that is just that they're trained on labeled data sets, and there's no labeled data set for World War Three. Hopefully we never have that data set. You need a different type of model to make decisions on multiple time horizons and do the type of hard, physics-grounded geospatial reasoning required for most military decisions. Deep reinforcement learning is the right approach for that problem set. So we're building a deep reinforcement learning model, and instead of labeled data sets, we're training that model in an environment built by physicists who are grounded in the physics of a peer-level conflict, and also by domain experts. A lot of our advantage and secret is that there's a lot of information and expertise about how to fight wars that lives in people's heads, not in a document that can be read. It needs to be encoded and put into an environment to make a system that can actually function in that type of environment.
3:14:50
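The approach Andy describes, learning from an environment's rewards rather than from labeled data, is the core idea of reinforcement learning. A minimal tabular Q-learning sketch on a toy environment (a generic illustration, nothing to do with SMAC's actual model or environments):

```python
import random

# Toy 1-D environment: agent starts at 0, goal at 4.
# Reward comes from the environment's rules, not from labels.
GOAL, ACTIONS = 4, (-1, +1)

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit current values, sometimes explore.
            a = rng.choice(ACTIONS) if rng.random() < eps else max(ACTIONS, key=lambda x: q[(s, x)])
            nxt, r, done = step(s, a)
            # Q-learning update toward reward plus discounted best next value.
            q[(s, a)] += alpha * (r + gamma * max(q[(nxt, b)] for b in ACTIONS) - q[(s, a)])
            s = nxt
    return q

q = train()
# The learned greedy policy should step toward the goal from every state.
policy = {s: max(ACTIONS, key=lambda x: q[(s, x)]) for s in range(GOAL)}
print(policy)
```

Real systems would replace the table with a neural network and the toy chain with a rich physics simulation, but the loop of act, observe reward, and update is the same.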
Interesting. So a lot of the foundation labs are going to data providers and data brokers, hiring experts to write down their SOPs. You're obtaining data from the US government, is that correct? What does that pipeline look like?
3:16:03
Sure.
3:16:20
So building this data is some of the domain expertise we've hired onto the team. My own background: I was a joint fires instructor for a while at the Marine Corps version of Top Gun, and Clint spent his whole career in the fires space, as we both did at MARSOC. We've hired a lot of people on the team who were the absolute best at their military discipline: they were instructors, and they did it on deployment. And then we work with our end users. I think in some ways this gets to how you get users to trust AI. The users are part of our training data generation, taking their knowledge, their understanding of the right way to do things today, and helping bring that into the environment, in a way where they can trust that their knowledge and understanding is actually being used to help the model get to a better starting point.
3:16:21
What ways in which conflict is evolving are you guys willing to lean on and effectively bet the company around? It's obviously great to have all that time actually enlisted and serving the country, but the battlefield is evolving. I'm sure you and the team have been watching all the footage coming out of the last few days, but where is this all going, in your view, as
3:17:11
far as where conflict is moving?
3:17:43
Well, an example is, I think we saw the US version of the Shahed. Low cost autonomous systems seem to be coming online en masse, and all these things are going to impact how you build the product.
3:17:46
Sure. So I think at the end of the day, scaling and operationalizing autonomous systems is a large part of the future of warfare. Right. And so the concept we talk about internally is that we need to build and enable what we would call intelligent autonomy: how do you orchestrate all of these autonomous systems, not just against the tasks they've been assigned, but with each other and with the more exquisite, expensive systems? And how do you do that in a way where you still have a human in the loop? I think there's been a lot of discussion about fully automating the kill chain. No one wants that. That's not even really something I've heard anyone talking about. What people are fundamentally trying to do is have the right amount of human in the loop: keep humans for high value human touch points. We can't have humans involved in every single thing they're involved in today, and a lot of those decisions are not high value decisions that humans are uniquely positioned to make. Intelligent autonomy is about removing humans from low value touch points, and keeping them and bringing them back into the system for the touch points where they need to make decisions, whether for ethical reasons or tactical reasons, enabling them to make decisions that help move hundreds of thousands of autonomous systems, manned platforms, and other types of unmanned platforms toward common goals across what could be a 100 million square mile theater. That's really the spec we have to build toward.
3:18:06
Makes sense. What's the shape of the company today? Where are you guys based? What are your plans with this new capital?
3:19:38
Sure. So we are a team of 18 today, split mostly between El Segundo, where the majority of the tech team is, and a headquarters in Texas. We have a physics lab down in Texas, so part of the tech team is there. Our product, the way we think about it, is that we need a heavier weight model available for people who work out of an operations center and are not compute limited, and then lighter versions of that model that can work at the edge, on the front lines, where people are going to be bandwidth constrained. We're currently deployed under contract with the Air Force, the Marine Corps, and the Navy, and a lot of this year is expanding across all six services, both at the command level and by taking what we're prototyping right now as the edge version of the model and getting it deployed to frontline units across all six services by the end of the year.
3:19:47
Take us through the fundraising round. How much did you raise?
3:20:44
So we've raised $32 million to date, $26 million in this round. We've frankly just been really fortunate to have the investor team that we've had. Like I was saying earlier today, Clint and I knew nothing about fundraising, procurement, or government acquisitions. We were just wandering around in the dark back in January 2024 trying to get moving, and having Point72 take us on early, believe in us, and teach us the ropes of the defense tech space was tremendously valuable. And we've been really fortunate to have Geodesic and Cosano co-lead the Series A to help us figure out how we get to the next scale: how do we start scaling the team, how do we build the systems that allow us to grow rapidly.
3:20:48
3:21:34
And we've just been really fortunate to have supportive investors that understand the vision and kind of knew what we needed and what they could teach us at different phases.
3:21:34
What's the origin of the name?
3:21:44
So SMAC is actually a tactical task that you would call over the radio to strike a target. When we'd be out on a raid in Afghanistan or Iraq and you had a hostile target you were going to call an airstrike on, the call that you would "smack" that target is actually a task. And fires is one of the military's functional areas, and it's Clint's and my specialty. So a lot of our initial go-to-market was in the fires space, and we were heavily focused on that initially, but obviously we're expanding this year to the full range of military decision making.
3:21:47
Is it true that you hired someone named John Coogan?
3:22:25
It is true. I actually was getting really confused earlier today when I saw yours. I was like, wait, why is John, why is my John on Twitter? He's also a Marine, too.
3:22:30
I was like, so funny.
3:22:39
Marines don't even use Twitter.
3:22:40
Very cool. Yeah. Go around and tell everyone, Yeah, I hired John Coogan.
3:22:43
There you go.
3:22:46
You're like, what?
3:22:47
That's very fun.
3:22:49
Awesome. Well, great. Great to meet you. And I'm sure you'll be back on soon.
3:22:50
We'll talk to you soon.
3:22:53
Awesome. Thanks for having me.
3:22:54
Cheers, Andy.
3:22:55
Good one. Thank you. Let me tell you about Plaid. Plaid powers the apps you use to spend, save, borrow, and invest, securely connecting bank accounts to move money, fight fraud, and improve lending. Now with AI. We have a debate to settle.
3:22:56
Is it Pet Smart or Pets Mart?
3:23:09
I have an opinion here. I will settle this, but we should vote as a group. We should also play this video of the folks who kicked this off. Ben Lapidis has a whole song asking the question: is it Pets Mart or is it Pet Smart? The aesthetic of this is incredible.
3:23:12
Is this real?
3:23:40
This is a real video. They dressed up, found an office, and trashed it for this video. So much production. So the question: is it a mart that caters to pets, or are they saying that taking your pet there is the smart thing to do? Which side of the debate do you come down on, Jordy?
3:23:41
They are smart about pets.
3:24:08
Oh, okay. You're a smart guy. I'm the opposite. I think it's the Pets Mart. I think it's.
3:24:10
Oh, and is it. Because the bounce is.
3:24:16
So there are two sides of this. Adam Bain, former COO of Twitter, chimed in: Is it Pet Smart or Pets Mart? The color in the logo indicates that they are saying pets are smart, but the bouncing ball suggests that it's a mart for pets. Which one is it? It can't be both. Of course it can be both. The logo designers are probably having fun, but I'm a bouncing-ball guy. I say it's the mart for pets.
3:24:18
Albert in the chat says, wtf? I just entered a pets mart.
3:24:42
That's gotta be weird. Tyler, where do you sit on this?
3:24:46
Now would be a good time.
3:24:50
Albert.
3:24:50
Albert.
3:24:51
Oh, tiebreaker over here. We got Tyler on my side. Tyler took my side. Mart for pets.
3:24:51
That just doesn't make sense. Why would they? All the things you buy at PetSmart are for pets.
3:24:58
It's a place that holds things for pets.
3:25:03
It's a mart.
3:25:05
Mart.
3:25:06
No, Pets Mart. It would be pets all red and then blue.
3:25:06
M-A-R-T, but they flipped it. No, you make a compelling argument, but I still disagree. What else do you want to talk about?
3:25:13
In all the news, I don't think we ever said the words, and we never rang the gong: OpenAI raised a $110 billion round of funding from Amazon, Nvidia, and SoftBank. "We're grateful for the support from our partners and have a lot of work to do to bring you the tools you deserve." That's probably the biggest. That's a gong record.
3:25:20
Yes. It's the biggest round for a private company ever, and it's also about one quarter of the venture capital outlays expected for 2026, in one round, out of roughly $400 billion expected from venture capitalists broadly. Of course, this money's from the hyperscalers, so it's more complicated than your average VC deal. I don't even know if this round will be included in the VC funding data, because it's such a big round and it's from so many strategics.
3:25:41
3:26:09
But lots more capital for OpenAI.
3:26:10
Vo says everything on earth is thought bait. They're just trying to keep me pensive.
3:26:15
It's true. It's true. What else?
3:26:21
Brian Peterson is saying, happy late Valentine's Day to my wife. When a semi Neanderthal man loves a mostly human woman.
3:26:26
Very odd. We should plant the bomb. We should. We should get out of here. We have another podcast to do, actually. That will go live.
3:26:36
We're gonna break. We're gonna do five.
3:26:43
We're doing five hours today. You'll need to tune in later.
3:26:44
3:26:48
We will tell you the podcast that we're going on, but leave us five stars. Hit that subscribe button.
3:26:49
Sign up for tvpn.com. Last but not least, the US Open is letting you know that it will return to Inverness Club in 2040.
3:26:53
5, 20, 45. Let's go too. See you tomorrow.
3:27:00
I can't wait.
3:27:04
Goodbye.
3:27:04
Have a wonderful evening.
3:27:05
Nice work, brothers.
3:27:08
I'll see you on the next one.
3:27:10