TBPN

Meta Tokenmaxxing, Intel Joins Terafab, Frontier AI vs. China | Diet TBPN

33 min
Apr 8, 2026
Summary

This episode covers Meta's internal token spending competition with Anthropic, Intel's partnership with Elon Musk's Terafab project, and industry efforts to combat AI model copying in China. The hosts analyze the financial implications of Meta's token usage, discuss the geopolitical significance of domestic chip manufacturing, and explore how frontier AI labs are collaborating on security while competing fiercely.

Insights
  • Meta's token spending is likely around $1.6B annually ($136M/month), not the roughly $900M per month (a third of Anthropic's ARR) initially claimed, based on realistic input/output token ratios from production usage patterns
  • Vertical integration of AI infrastructure (training, inference, chip manufacturing) is becoming economically essential for companies with massive compute demands
  • Perverse incentives in metrics-based performance systems (token leaderboards) can drive wasteful behavior, but the underlying AI infrastructure investment remains strategically sound
  • Domestic chip manufacturing partnerships are critical to solving supply constraints that TSMC alone cannot meet, creating geopolitical and economic leverage
  • Frontier AI labs are willing to collaborate on security and IP protection against China despite intense competitive rivalry, indicating shared existential concerns
Trends
  • Token budgets becoming standard engineering cost allocation ($4,500-$20,000/engineer/month), similar to cloud infrastructure spending
  • Vertical integration of AI supply chains: companies building internal models, inference infrastructure, and now semiconductor manufacturing
  • Distillation and model commoditization accelerating competitive timelines, pushing labs toward continuous frontier advancement
  • Geopolitical fragmentation of AI development, with US labs collaborating defensively against Chinese competitors
  • Corporate AI tooling (code generation, infrastructure optimization) driving multi-billion-dollar token consumption at scale
  • Domestic semiconductor manufacturing resurgence driven by AI compute demand and supply-chain diversification needs
  • AI safety concerns (model copying, adversarial distillation) creating rare industry-wide collaboration mechanisms
  • Space-based compute infrastructure emerging as a viable use case, with thermal and economic challenges still being solved
Companies
Meta
Used 60.2 trillion tokens in 30 days with Anthropic, sparking debate over actual spending and token-based performance metrics
Anthropic
Reached $30B run rate revenue; providing Claude models to Meta and other enterprises; collaborating on security against model copying
Intel
Joining Terafab partnership with SpaceX, XAI, and Tesla to manufacture custom AI chips domestically in Austin, Texas
Tesla
Partnering with Intel on Terafab to produce chips for robo-taxis and Optimus robots; addressing chip supply constraints
SpaceX
Co-founding Terafab project to manufacture space-optimized chips; planning satellite deployment with onboard computing
xAI
Merged with SpaceX in February; participating in Terafab chip manufacturing partnership
OpenAI
Collaborating with Anthropic and Google through Frontier Model Forum to combat model copying and distillation in China
Google
Partnering with OpenAI and Anthropic on security; providing TPU capacity to Anthropic; competing in frontier AI development
Microsoft
Co-founded Frontier Model Forum with OpenAI, Anthropic, and Google to detect adversarial distillation attempts
TSMC
Current primary chip supplier for AI companies; facing capacity constraints and insufficient CapEx investment for future demand
Nvidia
Dominant GPU supplier; benefiting from AI boom while Intel struggles; mentioned as alternative to TSMC for chip manufacturing
Samsung
Currently manufacturing chips for Tesla's robo-taxis as alternative to TSMC
AMD
Thriving chip competitor while Intel faces production capacity challenges
Plex
Free streaming platform that held disastrous corporate retreat in Honduras with 120 employees
Linux Foundation
Participating in Anthropic's Mythos bug-finding model preview for critical infrastructure security
Amazon
Receiving access to Anthropic's Mythos model for vulnerability detection in critical infrastructure
Apple
Receiving access to Anthropic's Mythos model for vulnerability detection in critical infrastructure
CrowdStrike
Security firm experiencing rise in cyber attacks; mentioned in context of AI-enabled vulnerability exploitation
Broadcom
Partnering with Anthropic and Google for multiple gigawatts of TPU capacity allocation
People
Mark Zuckerberg
Pushing Meta to become AI-native; creating token spending incentives that sparked internal competition
Elon Musk
Leading Terafab partnership to build domestic chip manufacturing; planning space-based compute infrastructure
Jensen Huang
Predicted engineers will command $250,000 annual token budgets; discussed at GTC conference
Lip-Bu Tan
Announced Intel's partnership with Musk on Terafab chip manufacturing project
Sam Altman
Mentioned as AI leader attending DC gatherings; involved in model copying collaboration with competitors
Dario Amodei
Leading Anthropic to $30B run rate; attending AI leader gatherings; collaborating on security initiatives
Tim Cook
Mentioned as attending AI leader gatherings in DC with other tech executives
Sundar Pichai
Mentioned as attending AI leader gatherings in DC with other tech executives
Chuck Robbins
Discussed data centers in space and thermal challenges with heat dissipation
Sean Ha
Planned Plex corporate retreat in Honduras that experienced multiple disasters
Keith
Became severely ill with E. coli during the Honduras corporate retreat; missed the opening Survivor event
Scott
Co-founder of Plex; participated in disastrous Honduras corporate retreat
Sean
Plex executive who experienced military drills and fire ant hill incident during retreat
Will Hershey
Previously held META ticker before giving it to Mark Zuckerberg; known for ticker squatting
Matt Ball
Mentioned as someone involved in ticker acquisition and squatting
Tyler
Conducted back-of-envelope math on Meta's actual token spending costs versus claimed estimates
John
Co-host discussing Meta token spending, Intel partnership, and corporate retreat disaster
Quotes
"When a measure becomes a target, it ceases to be a good measure."
Gary Basin (cited) · Early in episode
"It's all about tokens. What is your token throughput and what token throughput do you command?"
Andrej Karpathy (cited) · Mid-episode
"If you're running trillions and trillions of tokens of a frontier model, Meta is really burning through a lot of tokens. And you have to generate everything."
Tyler · Token spending analysis
"The demand side has always been a big problem for Intel, that they have the capability, they have the plans to build a 2-nanometer, 3-nanometer plant, but every other company has been so tied to TSMC."
John · Terafab discussion
"There is compute happening in space right now, we're going to try and scale it. Orders of magnitude."
John · Space compute discussion
Full Transcript
Well, there is a whole bunch of news to run through. The first story is that Meta employees are apparently tokenmaxxing and competing on an internal leaderboard called Claudeonomics for status as a "token legend." This is from The Information: over a recent 30-day period, total usage on the dashboard topped 60 trillion tokens. And this sparked a huge debate over how much Meta is actually spending with Anthropic. And of course, the other big news is that Anthropic just passed $30 billion in run-rate revenue, with probably one of the steepest revenue growth charts in human history. Absolutely legendary. Yeah, this chasing status as a token legend reminds me of something you were saying maybe a year ago at this point: will tokens ever become like eyeballs, the way eyeballs were during the dotcom era? Just optimize for eyeballs. Obviously not every eyeball visit to a website is created equal, but people were optimizing for eyeballs. And the reaction to this, at least online, has been generally reassuring. A lot of people are quoting Goodhart's law, Gary Basin among them: when a measure becomes a target, it ceases to be a good measure. So who knows what's actually going on internally, but we do know Zuck is pushing the entire company to be as AI-native as possible. And this guy loves spending money too, right? I have a crazy bull case here that I will run through, but let's get through some of the story first. We've got to pull up this comic from XKCD in the comments here: when a metric becomes a target, it ceases to be a good metric. It's right under the leading post. There we go. And the other party says, sounds bad, let's offer a bonus to anyone who identifies a metric that has become a target. It is good. I don't think that's what's going on here. A listener was texting a friend at Meta and sent them the post we just discussed on tokenmaxxing. Yes. And said, true.
And the person said, yes, it's pretty sad. But I mean, imagine: there have been rumors of Meta layoffs for a while now. Unclear how many, if any, have happened. But if you're sitting there, the company is saying we need to get AI-native, the boss is saying we need to get AI-native, and then suddenly there's a token leaderboard. You do not want to be at the bottom of that list, I will say that. Right? Yeah. You don't want to be the guy who's having to explain, no, I'm actually getting the most out of each incremental token, while the other guy just set up an agent that counts one, two, one, over and over and over. Yeah. I mean, you have to measure the actual output, the impact on the business. Fortunately, Meta has been a huge beneficiary and a huge winner of AI. The ad targeting is getting better, they're delivering more ads, and the quarterly earnings have been strong. The headline number that took everyone by surprise is that Meta staff used 60.2 trillion tokens over 30 days, which would pencil out to about a third of Anthropic's ARR, was the number that was thrown out. But both of these claims are pretty questionable. And so Tyler did some back-of-the-envelope math to show that the one-third-of-revenue estimate is way, way too high. Do you want to take us through some of the reasoning there? And then we can talk about the knock-on effects of all this. Yeah. Okay. So 60.2 trillion tokens is the number. We can just assume that's true. I'm going to assume all the employees are basically just using Opus 4.6. Yeah. So there are basically three numbers you need to look at in the API pricing: there's input, there's cached input, and then there's output. For Opus 4.6, it's $5 per million tokens on input.
It's 50 cents per million tokens on cached input, and then it's $25 per million on output. Yeah. So if you multiply that 60.2 trillion tokens at the highest possible rate, $25 per million tokens, then you do get something like a billion dollars a month, which is a crazy number. But that's not what's happening. You have to think about it: if you're using Claude Code or any of these coding agents, the vast, vast majority of the tokens used are input. Imagine you're working on some code file. There are a thousand lines of code in the file, and maybe the model is only changing ten of them at most. So the output tokens are going to be a very small percentage of the total tokens going in. OpenRouter publishes a lot of this data, so you can use those ratios to figure out the actual split of input versus cached versus output. Yep. So just to get market-standard averages, a baseline benchmark. Now, Meta could be using these tools differently. But if we assume the shape of their agentic coding usage is similar to the average, these are the numbers. Yeah. So maybe there is some bad incentive where people are just saying to the model, count up to a billion and then do it again, and then it's totally skewed. But if they're doing it relatively normally: on OpenRouter, about 98.9% of all tokens are input. That's including cached ones. Yeah. Because you're stuffing the context window with your whole codebase or a huge amount of context that's not changing every time, so you can cache it. Yep. So that leaves about 1.1% as output. Yep. So if you blend all the numbers, a million tokens comes out to around $2.26. That gets you to something like $136 million a month for the 60.2 trillion tokens. So that's way less than the $900 million.
So that would be $1.6 billion a year, run rate. Still huge. That's a lot. But that's still at the high end, and it assumes the breakdown of how Meta uses tokens is the same as OpenRouter's, which I think it's not. If we take that number, it's about $4,500 per engineer per month, if there are, I think, 30,000 engineers at Meta. $4,500 on tokens. That's actually in line with what I've heard a lot of other people spending in terms of their token budget. Yeah, that's not absurd, not if you're trying to incentivize people to use it. Not at all. But you can actually see the breakdown on OpenRouter of how people are using tokens. The biggest plurality is OpenClaw, at 17.6%, and then Claude Code is at 16.8%. Sure. If you think about Claude Code, you'd imagine the percentage of cached tokens is going to be higher there than in OpenClaw. Yeah. So I think Meta's usage is actually going to lean more heavily on cached tokens. Sure. So if you do it just based off Claude Code usage, you'd see a higher share of input tokens out of the total, with only about 0.8% being output. Run all those numbers through again and it's only about $55 million a month, which would be $669 million a year, and each engineer would be at about $1,800. Yeah. That's actually pretty low. Which is, I think, very reasonable. John Chu over at Coastless says: plenty of my Meta friends told me folks have been building bots that just run in a loop burning tokens as fast as they can due to this policy. It's an absolutely stupid policy, and it's similar to how Meta uses lines of code to measure engineering output. Managers are supposed to use it as a proxy and dig in to understand where the complexity is, but plenty of managers are lazy and just don't.
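Tyler's back-of-the-envelope math above can be sketched as a short script. The per-million-token prices ($5 input, $0.50 cached input, $25 output) and the output shares (1.1% for the OpenRouter-wide mix, 0.8% for a Claude Code-like mix) come from the episode; the split between cached and uncached input is not stated on air, so the `cached_share_of_input` values below are assumptions chosen to illustrate how the blended rate the hosts land on could arise.

```python
# Back-of-the-envelope check of Meta's token bill, using the episode's numbers.
# Prices are per million tokens as quoted for Opus in the episode.
# NOTE: the cached-vs-uncached input split is NOT given in the episode;
# the cached_share_of_input values below are illustrative assumptions.

TOKENS_PER_MONTH = 60.2e12   # 60.2 trillion tokens over 30 days
PRICE_INPUT = 5.00           # $ per million uncached input tokens
PRICE_CACHED = 0.50          # $ per million cached input tokens
PRICE_OUTPUT = 25.00         # $ per million output tokens
ENGINEERS = 30_000           # rough Meta engineer headcount from the episode

def monthly_bill(output_share: float, cached_share_of_input: float) -> float:
    """Blended monthly cost in dollars for a given output share and cache rate."""
    input_share = 1.0 - output_share
    blended_per_million = (
        input_share * (cached_share_of_input * PRICE_CACHED
                       + (1.0 - cached_share_of_input) * PRICE_INPUT)
        + output_share * PRICE_OUTPUT
    )
    return TOKENS_PER_MONTH / 1e6 * blended_per_million

# Scenario 1: OpenRouter-wide averages (1.1% output), assuming ~2/3 of input cached.
s1 = monthly_bill(output_share=0.011, cached_share_of_input=0.665)
# Scenario 2: Claude Code-like usage (0.8% output), assuming ~95% of input cached.
s2 = monthly_bill(output_share=0.008, cached_share_of_input=0.951)

for label, bill in [("OpenRouter mix", s1), ("Claude Code mix", s2)]:
    print(f"{label}: ${bill/1e6:,.0f}M/month, ${bill*12/1e9:.2f}B/year, "
          f"${bill/ENGINEERS:,.0f}/engineer/month")
```

With those assumed cache rates, the two scenarios land near the $136M/month (about $1.6B/year) and $55M/month figures discussed above, and they show the blended rate is driven far more by the cache hit rate than by the headline $25 output price.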
That was in response to Christine over at Linear saying: ranking engineers by token spend is like me ranking my marketing team by who's spent the most money. We may not have hit our KPIs, but Joe spent $200,000 on a branded blimp that only flies over his own house, so he's getting promoted to VP. I'm pro branded blimps, though. I like that idea. So my take on this is that it ties to what Jensen Huang was talking about at GTC. He was saying that an engineer making $500,000 might soon command something on the order of $250,000 a year in token budget. Andrej Karpathy had a similar line on a podcast last month: it's all about tokens, what is your token throughput and what token throughput do you command? And so Meta actually has two different harnesses internally. They have a version of OpenClaw called MyClaw, and then they also, of course, acquired Manus, but it appears they're running Claude, maybe Opus, under the hood to actually generate the tokens that come through those harnesses. The interesting thing is that a $250,000 AI budget per engineer is about $20,000 a month. So based on Tyler's math, there's maybe another 4x to go to get to Jensen's prediction. I think this makes the strategy with Meta Superintelligence Labs clearer, because it's clear they're spending hundreds of millions of dollars just on internal codegen tooling, just running their business. They are going to spend an inordinate amount of money on frontier inference. And so by training a model there, they can amortize the training cost of the next model they build not just over whether they can get a product out that goes viral and becomes its own standalone chat app that people pay for, or maybe is ad-supported, but over internal usage alone, where they could otherwise be running up a multi-billion-dollar token bill that they'd have to pay another lab.
And so if they develop that internally, it's pure vertical integration. And then you also have everything happening on the actual ad-targeting and content-delivery side. When you add all of that up, the big question has been: is Meta going to be able to launch an entirely new AI product, like Vibes or something like that? And this is a data point that, to me, says they don't need to. Just on the pure vertical-integration story, the investment in MSL can pencil out. I just want you to get to your schizo theory. What's the schizo theory? That this whole tokenmaxxing thing is a mirage while they distill the model. Oh, yeah. Yeah. I mean, there is a world where, if you're generating trillions and trillions of tokens of a frontier model, Meta really is burning through a lot of tokens, and you have to generate everything. It's like, oh, we're just tokenmaxxing. Yeah. There's another story about distillation we'll get to later in the show, but there is a question: if I write an essay and then have a model rewrite it, those tokens are from that model provider. I buy them, they become mine. Can I train on them? That's probably against the terms of service, so you would think no. But you wind up in this ship-of-Theseus world where, if Meta pays Anthropic $100 million or a billion dollars to rewrite every line of code, every email, every Slack chat, every internal message, basically to map the entire organization and rebuild it, they wind up with an incredible training corpus that they could use for their next model. But I would imagine they can't, and I imagine the enterprise contracts go both ways: the lab can't train on the corporate information. That's standard in all of the enterprise contracts.
And I would imagine the opposite is true as well, although it is this fuzzy ship-of-Theseus world where, if you're using coding agents to upgrade your infrastructure and then you want to train some model on that infrastructure, do you have to pull out the tokens that were revised by the AI lab that you don't have the right to train on? It's all very interesting. Apparently startups that have gone out of business are able to sell their corporate histories for something like a million dollars to data brokerage firms and AI labs now. Have you heard about this? Yeah, I've heard about it. I'm skeptical. I mean, certainly there's a market for it: basically all the code that a company built over a few years, but also its usage within different enterprises. Yes. Yeah, all sorts of different stuff. In other news: Intel is joining Terafab. Yes. Let's see. Intel is proud to join the Terafab project with SpaceX, xAI, and Tesla to help refactor silicon fab technology. Intel says its ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab's aim of producing one terawatt a year of compute to power future advances in AI and robotics, and they threw up a post of hanging with Mr. Musk himself. Let's go through the Wall Street Journal's coverage of this. Elon Musk is partnering with Intel on his ambitious Terafab project, which aims to build specially designed chips for SpaceX and xAI as well as for Tesla. In an announcement Tuesday, Intel said it would work with the companies to design, fabricate, and package ultra-high-performance computing chips at scale. The company shared a photo of chief executive Lip-Bu Tan shaking hands with Musk, CEO of SpaceX and Tesla. The partnership is a win for Intel, which has struggled in recent years.
Struggles that led the company to cut production capacity even as demand was surging for data-center chips and competitors like Nvidia and AMD thrived. That was always such a tough pill to swallow when you would talk to the ASIC companies, like Cerebras, and you would say, hey, you're doing something new, you're not doing Nvidia chips, is there any way you could get off of TSMC? And they're like, no. We still need to be in Taiwan. Obviously, there's a huge geopolitical component here; we can get into all that. Last year, the Trump administration reached a deal to acquire an equity stake in Intel for around $9 billion to help secure the American chipmaker's business. The US government held 8.4% of Intel shares outstanding as of March 20th, according to securities filings. That figure doesn't include warrants that could increase the government's equity stake in Intel. Terafab represents a step change in how silicon logic, memory, and packaging will get built in the future. Tesla and SpaceX confirmed the partnership in posts on X. Musk unveiled plans for a single facility in Austin, Texas, to make chips to be used by SpaceX and xAI, which merged in February, as well as by the publicly traded Tesla. He pitched the project as an opportunity to experiment quickly on chip design by designing and manufacturing the chips in one facility. The fab will make chips for use in Tesla's robo-taxis, which they're already fabbing, I believe, at Samsung, although they do have the Dojo chips, I think, that are TSMC. Optimus will also need chips, and they're planning to use Intel for that as well. So these are two areas of priority for the electric vehicle maker as it shifts its focus to artificial-intelligence-enabled products. It will also make chips optimized for use in space, where SpaceX is planning to deploy huge numbers of satellites capable of handling computing tasks. Who else do you think they need to get involved here?
Because just the two of them, Intel and Tesla, coming together... It's good to have more involvement, but still, for the entire project... No, we've seen a few of those AI leader gatherings in DC, where you see Tim Cook and Sundar and Sam Altman and Dario and all the leaders together. And I was always hoping that at one of those dinners they would say, okay, everyone's going to try and say the biggest number, but this time it's going to be how much you're committing to Intel and how much you'll buy from them if they come online with a competitive product. Because the demand side has always been a big problem for Intel: they have the capability, they have the plans to build a 2-nanometer, 3-nanometer plant, a frontier, leading-edge fab, but every other company has been so tied to TSMC. And I think everyone now acknowledges that TSMC is not investing super heavily in CapEx; they're not scaling up as much as the industry would like them to. So lots of folks have signaled toward a chip bottleneck coming in the next few years, and Intel has the opportunity to capitalize on that. This seems like the first step in that chain. So companies, including Tesla, often design their own semiconductors but need a supplier to actually make them in a so-called chip fab. Musk's companies have sourced chips from a wide range of suppliers, including Nvidia, Samsung, and Taiwan Semiconductor. Oh, I've got it. Musk said that Terafab is needed because his companies' demand for chips is slated to far outstrip the supply they get from partners. I was listening to Chuck Robbins from Cisco talk about data centers in space, and the heating issue came up, and he was like, I don't really have a solid answer for that yet. But I do think that if you are bullish on data centers in space, you have to start with the fact that Starlink works in space currently, because it is doing compute. It's not doing... You couldn't possibly put...
Let's be honest, John. We couldn't possibly put a computer up there. Yeah, there are computers up there. They can't inference frontier models, and it's not gigawatts in space yet, but there are, I believe, across the entire Starlink constellation, megawatts of compute in space with solar panels, and they do heat up, because you are running a chip that routes packets across the internet from one satellite to the next to get you your internet via Starlink. So it's not that it's a solved problem, but we are on a path to deploy some level of compute in space, Tyler. Yeah, we've seen from Philip Johnson that there are chips in space right now. There are GPUs. I think Arthur said there were like five or six H100s, right? Yeah. So they do work. I think most people's problem with space data centers is that they don't make economic sense. Well, yes, that is the correct angle, but a lot of people are getting hung up on... No, there's a whole conversation about how it's supposedly impossible, and you need to move past that into the economic equation, which then gets you into timelines and actually thinking about what needs to happen to dissipate that heat. But clearly, yes, you can. I mean, you can put humans in space on the ISS and cool that. We've had ways to move heat around in space for decades. It's obviously a new challenge, but I think you start with the baseline: there is compute happening in space right now, and we're going to try and... I mean, Elon wants to 1,000x it, 100,000x it, million-x it. I don't even know what the scale is, but orders of magnitude. And so there are new engineering challenges. Speaking of space, it looks like Elon is going to use SPCX as the ticker for the SpaceX IPO, which he had to acquire from Matt Tuttle, hence the ETF's ticker change shown below. Eric from Bloomberg says, we predicted this could happen in a December note. Nice catch by Will, who famously gave the META ticker to Zuck. I did not know that Will Hershey had the META ticker previously.
So we know somebody that squats on... Who had the META ticker? A guy named Will Hershey. Oh, interesting. There's a company called Roundhill. But we know somebody who's been... That was Matt Ball. We had somebody come in here outside of show hours and say they were squatting on a bunch of tickers. And the idea seems so... I think the reality is that it needs to be further along than just reserved. If you're a startup today, you can go reserve your ticker today, but I'm not sure that actually gives you enough leverage, when Elon comes knocking ready for an IPO, to actually have priority. All right, we've got to talk about a corporate retreat that went badly wrong. Okay. Technology company Plex took its 120 employees to Honduras for a week-long bonding experience. It was a disaster from the moment they arrived. Senior executives at the tech company Plex were eager to treat their 120 fully remote staffers to a week-long corporate getaway in a tropical paradise. Pop quiz, Tyler, do you know what Plex is? I don't know about Plex. No. Have we seen Plex? I don't know either. So we all failed. But now it's your job to figure it out. I will continue. The plan for the Honduras trip was simple: company meetings and team building by powdery soft beaches during the day and island fun at night, at a cost of roughly half a million dollars to the company. They'd built the trip around a Survivor theme with teams and challenges, but it'd be fun, not too physically grueling. The CEO of Plex, a free streaming platform, would play a role similar to that of Survivor host Jeff Probst. Perhaps the executives should have taken it as a sign that, just as the first bus of staffers pulled up to the resort, the chief executive was already in his hotel bathroom experiencing the initial waves of a violent stomach infection.
What followed was a comedy of errors, including military drills that outpaced anything this group of office workers had in mind, a rogue porcupine, stranded airplanes, and one syringe to the butt of an employee. Corporate retreats are generally assumed to be torture, or at least a semi-stressful chore, what with their forced-fun activities and hybrid work-play environments that leave workers confused about boundaries. Is that the industry standard? That seems wild. I don't know if I've ever been on a corporate retreat. I've been to some Founders Fund events, but those aren't really retreats; those are more just conferences. Corporate retreats are unexplored territory for me. It's no wonder the new season of Jury Duty, a comedy series that tricks an unsuspecting non-actor into believing his off-the-wall fictional circumstances are actually happening, is set at a corporate off-site. But in real life, Plexcon 2017 beats anything on TV. Here's the story of an all-staff company getaway told by six people who were there, a trip where most everything that could go wrong did go wrong. Nearly a decade later, they're still working together and still talking about it. It's crazy that... it was a bonding experience. Yeah, well, it's crazy that this is only now coming out. So, Sean Ha, 42, founder of Monica Partners, an independent corporate retreat agency that planned the trip: About three weeks before we arrived in Honduras, we got an email from the hotel's general manager that said, I will be departing. I wish you the best with your retreat. I knew something was off. Three days later, another email: the head chef was no longer going to be at the hotel. Scott, 52, chief product officer and Plex co-founder: We get there, we've got to take the bus from the airport. Dirt roads, you start getting closer, and there are guard towers around the property. People with machine guns and stuff. A lot of people were like, where are we going? Keith, the CEO of Plex, 54.
We usually go a day early and set up. If there's any little thing, we have to get it right, just so the employees have the best experience possible. Keith woke up the day people were coming in, Sunday morning, sick as a dog. Everyone there is fried. Basically, people are telling me, don't eat the vegetables, don't eat the vegetables. That's like the same thing. No, no, no, because they clean it, they wash it in water, and it's usually not filtered water, right? Because it would just be kind of crazy to... Yeah, here it is. I've got to have a salad, just one salad. So I got E. coli, which may be the worst thing you could possibly ever get. Just as people were arriving on the buses, I had lost eight or ten pounds. They had a doctor come to me, which apparently is pretty standard. They nailed an IV bag to the bedpost. People are arriving for a party that night. The next day is the Survivor-theme kickoff. There's not one person on the planet more excited about Survivor than Keith and his wife. They have watched every single episode. My wife and I met Jeff, the host of Survivor. What I wanted was, when everybody shows up, to do a Jeff-style welcome to the island, here's the theme for the week, but Scott got to do it. The opening Survivor event was a contest where people on their different teams open up a platter, and you have to eat what's on the platter. Sean, who's the Plex head of business development... Are you going to take that call? Yeah, somebody is cold-texting me, pitching me their startup, and they've called me a bunch of times today. Is it actually them or is it their AI agent? I wish I could pick up. It's just a little bit too much. But yeah, cold-texting somebody after getting their number, I don't think that's the new meta. We heard from an executive in tech that they are getting dozens of emails every single day trying to recruit them.
Every email comes from a new, unregistered, brand-new Gmail account, but it's all LLM-written, each one very different. It doesn't really do all the research, but it has a few keywords in there. It's clear that someone is building a next-gen recruiting agency that's basically just a lot of spam. Feels like the end result will be a return to relationship building and not broad top-of-funnel outreach. I should read the cold text from this morning. I have nothing against cold email and just being bold, but I did read this out loud to you, John, so I'll read it to everyone. I got a text from an unknown number today at 7 a.m.: "All right, Jordy, good news or bad news first? This is blank" -- and I'll leave the name out. And then I just get a PDF of a deck and then a text: "All right, Jordy, the bad news is this was an unplanned introduction and, on the surface, probably lukewarm outreach. The good news is that there's zero doubt you're now in touch with the founder with the most grit of anyone you've interacted with in the past 12 months, and likely anyone you'll interact with over the next 12 months. 50,000 seed round passes over the past 10 months; here to make it 50,001." So, you know, you should be coming in being like, I've been passed on 50,000 times, I'm hoping this is the one that gets through. That seems like a rough estimate, though. "Every month's feedback iterations have made it better, so you're seeing more quality presentation than rejection 10,000. Looking forward to your message." The chat wants the builder to pitch. They want you to hear this out. Everyone's in favor of this. The chat wants you to get on the phone with them. Do it live. I mean, they want it done live. I don't know if you should do it live, but you should take the call. I will take the call. Let's go back to the corporate retreat. Okay, so they hire a former Navy SEAL to basically haze the team on the beach, and you can pull up an image here. This is not a super fit group in general.
One of our biggest mistakes was hiring a former Navy SEAL to pump the team up. As I'm in my room dying, I could hear them out there doing all the drills and yelling, and so I'm in here thinking, this is terrible, but it sounds terrible out there, too. We're doing Army crawls on the beach. It was 100 degrees. I bailed out partway through. I went into the ocean just to cool off. I went in probably on all fours because I was tired. It's not a super fit group in general. The ex-Navy SEAL is like, we can tone it down, no problem. We get up there and it's hot and humid, and people are passing out. I don't think he'd ever seen quite such an unfit group. We ended on, I guess, what's probably a golf course. On command, everyone had to hit the grass. Everyone's silent. We're pretending we're Navy SEALs, but I happened to land in the wrong spot. I'm just like, oh, God, what is happening? I was sitting on a fire ant hill. I was wearing shorts. I jumped up and had hives and bumps from the bites. This is ridiculous. Someone saw an alligator on the golf course. Sounds like a ridiculous... There was a porcupine that fell through one of the ceilings. This is like a Fyre Festival for corporate retreats. The Fyre Festival of corporate retreats.

Anthropic is taking steps to arm some of the world's biggest technology companies with tools to find and patch bugs in their hardware and software. The company is making a preview of its new AI model, called Mythos, available to about 50 companies and organizations that maintain critical infrastructure, including Amazon, Microsoft, Apple, Alphabet-owned Google, and the Linux Foundation. Cybersecurity researchers and software makers worry that artificial intelligence is becoming so good at exploiting vulnerabilities that it could cause widespread online disruption.
Security experts have predicted that AI models will discover an avalanche of software bugs, and the effort is set to help companies stay one step ahead of cybercriminals and other threats. This feels like a very good rollout strategy, generally, because we've seen a huge amount of cyberattacks, hacks, and accidental releases. We had a member of the security team from CrowdStrike on the show last week talking about the rise in cyberattacks broadly. Getting the most frontier models into the hands of big companies early is great from that perspective, and also just great as a product demo, which will get the entire organization excited about deploying the technology broadly. So, very good as a B2B go-to-market motion. This makes a ton of sense.

In some other, more positive news, OpenAI, Anthropic, and Google are uniting to combat model copying in China. This is part of a bigger discussion around AI safety. We've talked about this. Look at that. Who knew? Who knew that you could get along? Yes. I mean, I'm sure people in the chat have seen the New Yorker article where there are tons and tons of quotes from various AI leaders all, you know, upset with Sam Altman. And the inter-AI drama has been bubbling up since the dawn of OpenAI. OpenAI was started as a reaction to Google, and then Anthropic leaves and teams up with Google, and then Elon doesn't like Anthropic, and then Ilya Sutskever and Mira leave, but they don't join Anthropic. And so there have been so many personalities and so many disputes. I feel like the takeaway is that this is all extremely high stakes. There's a technological transition happening, a huge amount of money on the table, a huge amount of influence on the table, and so everyone is sort of clamoring for their share, and it's creating a lot of friction.
So, rivals OpenAI, Anthropic PBC, and Alphabet Inc.'s Google have begun working together to try to clamp down on Chinese competitors extracting results from cutting-edge U.S. artificial intelligence models to gain an edge in the global AI race. The firms are sharing information through the Frontier Model Forum, an industry nonprofit that the three tech companies founded with Microsoft in 2023, to detect so-called adversarial distillation attempts that violate their terms of service, according to people familiar with the matter. The rare collaboration underscores the severity of a concern raised by U.S. AI companies that some users, especially in China, are creating imitation versions of their products that could undercut them on price and siphon away customers while posing a national security risk. And so, I was trying to square this question of distillation and model commoditization with the news that Anthropic has reached $30 billion in run rate and has an agreement with Google and Broadcom for multiple gigawatts of TPU capacity. Clearly there is insatiable demand for frontier tokens, frontier models. They're incredibly expensive to train. We saw in the Wall Street Journal that these... The expected training costs from both labs. Yeah, it was training frontier models, but it was hundreds of billions of dollars. And so the hope is that you're able to amortize that over at least a couple of years, you know, a long time, ideally. The shelf life of a model after you train it is pretty limited if you're being commoditized and copied. If you're being distilled, it's even faster. At the same time, staying on the frontier clearly leads to an incredible ramp in revenue. So, is commoditization a real problem? It feels like it's almost more of a problem from an AI safety perspective, because you can't have the geopolitical conversation, like what Bernie Sanders is proposing around different labs working together. Pausing development.
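Mechanically, the distillation being discussed here is simple: an outside party queries an expensive teacher model at scale, logs its outputs, and trains a cheaper student to imitate them. A minimal sketch of that loop, with all names hypothetical and a lookup-table "student" standing in for a real trained model:

```python
def teacher_model(prompt: str) -> str:
    """Stand-in for a frontier model API: expensive to build, cheap to query.
    The toy 'capability' here is just uppercasing text."""
    return prompt.upper()


def collect_distillation_data(prompts, teacher):
    """The distiller's step: harvest (prompt, teacher output) pairs at scale."""
    return [(p, teacher(p)) for p in prompts]


class StudentModel:
    """Toy student that memorizes teacher outputs in a lookup table.
    A real student would be a smaller model trained on these pairs,
    which is why it can undercut the teacher on price."""

    def __init__(self):
        self.table = {}

    def train(self, pairs):
        for prompt, output in pairs:
            self.table[prompt] = output

    def generate(self, prompt):
        return self.table.get(prompt, "")


prompts = ["hello", "frontier", "tokens"]
data = collect_distillation_data(prompts, teacher_model)
student = StudentModel()
student.train(data)
print(student.generate("hello"))  # imitates the teacher without its training cost
```

The detection problem the Frontier Model Forum faces follows directly from this shape: from the teacher's side, a distillation run just looks like a very large, systematic stream of API queries, which is why the labs are sharing query-pattern information rather than trying to catch the student model after the fact.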
Potentially pausing or slowing down, or just even adding more constraints and reviews before models get released. It's harder to do that if you have a different country that's racing ahead, moving much faster, and trying to close that gap. Leave us five stars on Apple Podcasts and Spotify. Sign up for our newsletter at tbpn.com and we will see you tomorrow. Goodbye.