Practical AI

The mythos of Mythos and Allbirds takes flight to the neocloud

45 min
Apr 23, 2026
Summary

Daniel Whitenack and Chris Benson discuss Allbirds' dramatic pivot from footwear to AI infrastructure, the emerging 'neocloud' market for AI-native compute, Anthropic's unreleased Mythos model and Project Glasswing security initiative, and the implications of token maxing in AI development. They also cover a federal court ruling that AI chat logs are not protected by attorney-client privilege.

Insights
  • Distressed traditional businesses are pivoting to GPU infrastructure as a capital reallocation strategy, though $50M is minimal in the multi-billion dollar AI data center market
  • Neocloud (AI-native cloud) represents a specialized alternative to hyperscalers, optimized for GPU workloads rather than general-purpose computing, with companies like CoreWeave gaining traction
  • Frontier AI models with security vulnerability discovery capabilities create governance and control opportunities, but also expand threat actor tooling availability
  • Token maxing as a developer productivity metric is emerging as a business practice, but lacks clear correlation to actual output quality and organizational absorption capacity
  • AI chat logs are legally discoverable and not protected by privilege, creating significant compliance and confidentiality risks for businesses using public AI systems for sensitive work
Trends
  • Neocloud infrastructure specialization gaining market traction as alternative to hyperscaler dominance
  • Distressed companies pivoting to AI infrastructure as capital reallocation strategy
  • Token maxing and AI-assisted development becoming standard practice in tech organizations
  • Simultaneous growth in both centralized data center compute and embedded edge AI across industries
  • AI security capabilities creating dual-use risk (offensive and defensive applications)
  • Legal and compliance frameworks lagging behind AI tool adoption in professional services
  • Gated release of frontier models becoming standard practice for safety-conscious AI labs
  • Enterprise governance and AI control solutions emerging as market opportunity
  • Lack of standardized metrics for measuring AI development productivity and ROI
  • Privacy-preserving AI chat systems as potential market gap similar to Signal/ProtonMail
Topics
  • Allbirds business pivot to AI infrastructure
  • Neocloud and AI-native cloud computing
  • GPU supply chain constraints and centralization
  • Anthropic Mythos model and Project Glasswing
  • AI security vulnerability discovery
  • Token maxing in software development
  • AI-assisted coding and developer productivity
  • Attorney-client privilege and AI chat logs
  • Legal discovery of AI conversations
  • Embedded and edge AI deployment
  • Frontier model capabilities and safety
  • AI governance and control frameworks
  • Hyperscaler vs. specialized cloud providers
  • AI development metrics and measurement
  • Confidentiality and AI tool usage in legal/business contexts
Companies
Allbirds
Exited footwear business in March 2026, pivoted to AI compute infrastructure with $50M capital, stock jumped 700%
Anthropic
Developed unreleased Mythos frontier model with exceptional security vulnerability discovery capabilities; launched Project Glasswing security initiative
CoreWeave
Neocloud provider specializing in GPU-first infrastructure for AI training and inference workloads
Together AI
Neocloud infrastructure company mentioned as example of AI-native cloud provider
Lambda Labs
Neocloud infrastructure provider specializing in GPU workloads for AI applications
OpenAI
Referenced for gated model releases (GPT-3) and marketing approach compared to Anthropic's strategy
Meta
Implementing token maxing culture with developer scoreboards and gamification to maximize AI tool spending
NVIDIA
GPU supplier; CEO Jensen Huang referenced regarding token spending recommendations for engineers
AWS
Traditional hyperscaler cloud platform mentioned as general-purpose alternative to specialized neocloud providers
Prediction Guard
Daniel Whitenack's company; focuses on AI governance and control capabilities
American Exchange Group
Acquired Allbirds' footwear business assets in March 2026
Kama AI
Company developing embedded AI for autonomous vehicles mentioned as example of edge AI deployment
Apple
Mentioned as GPU supplier alongside NVIDIA, TSMC, AMD, Intel, and Qualcomm in chip ecosystem
Microsoft
Hyperscaler mentioned in context of traditional cloud platforms vs. neocloud alternatives
Alphabet
Hyperscaler mentioned in context of traditional cloud platforms vs. neocloud alternatives
People
Daniel Whitenack
Co-host discussing Allbirds pivot, neocloud, Mythos model, and AI governance implications
Chris Benson
Co-host providing insights on neocloud market, frontier models, and edge AI deployment trends
Jensen Huang
Referenced for statement on engineer token spending recommendations ($250K on tokens for $500K salary)
Sam Altman
Referenced for flamboyant marketing approach compared to Anthropic's more cautious strategy
Quotes
"What do you do with a bunch of cash but buy GPUs? Of course."
Daniel Whitenack, early discussion of the Allbirds pivot
"The market is endorsing it. And I think this is like prior to this announcement, you know, this is the kind of thing nobody would have bought into. Like it would have been, it would have been seen as a joke."
Chris Benson, on the Allbirds stock jump reaction
"I think the giant growth area is going to be in what you might call far edge...there's huge, huge, huge growth potential in that across so many different industries."
Chris Benson, on embedded AI vs. centralized compute
"I would be curious if someone was token maxing to take a look at a few of their chat logs to see maybe how they were doing that token maxing, which it appears could actually be discoverable information in a court of law."
Daniel Whitenack, transition to the legal discovery discussion
"If you're putting information into an AI system, you're essentially waiving privilege of confidentiality, right? Because that is explicitly discoverable unless it is explicitly private or you're running a private model locally."
Daniel Whitenack, on attorney-client privilege implications
Full Transcript
Welcome to the Practical AI Podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm. Now, on to the show. Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard. I'm joined as always by my co-host Chris Benson, who is a principal AI and autonomy research engineer. How are you doing, Chris? Hey, I'm doing great today, Daniel. Looking forward to catching up on one of these fully connected episodes where we get to talk about kind of whatever we want to talk about. Whatever we want to talk about. I mean, I guess normally we just talk about what we want to talk about, but at least when we have a guest, we try to center the conversation on maybe a few things they want to talk about. I'm pretty excited, Chris, because I just got a brand new pair of shoes. And I've been wearing my new shoes all week. And I didn't think that that would be a relevant topic to bring up with you on the Practical AI podcast, because I thought shoes really didn't have any overlap with the AI world. Although, I guess this is not the topic we're going to talk about, but I did see a company where, like, you take a picture of your foot and the AI figures out the shape of your foot or whatever, and then could, I guess, advise on shoes or something. Anyway, speaking of shoes, today, and I didn't even see this, but folks in the office here at Prediction Guard were like, hey, did you hear about Allbirds?
And I had not heard about Allbirds, but apparently Allbirds is now an AI company, which is quite interesting. So Chris, do you have a pair of Allbirds? I guess they're AI Allbirds now? I actually don't, but I've got to say, that's a terribly interesting way of retreading your business model, you know? Yes, yeah, yeah. They really, yeah, kicked it to the curb, I guess. So from shoes to AI data centers? Yeah. Well, I guess some background information for people here. I sort of only know this because my wife was really into Allbirds. She had a few pairs. But it seems like from around 2016 to 2021, just after COVID, there was this huge rise of the Allbirds brand, which was a favorite in terms of shoes that you would order online. I think eventually they did have retail locations, that sort of thing. But from 2022 to 2025, so through this last year, they kind of consistently had a decline: growth stalled, margins compressed, and their stock price declined, making the business distressed. And so, as we're recording this, we're in April of 2026. In March of 2026, Allbirds exited the actual footwear shoe part of their business, selling off all of those assets to American Exchange Group. I don't know a whole lot about them. But basically, that sort of ends the shoe operation portion of Allbirds. They still, of course, had the shell of a company, which had a name and an entity and a stock ticker, et cetera. And they had a bunch of cash, right? So what do you do with a bunch of cash? And I guess they did also raise additional cash. What do you do with a bunch of cash but buy GPUs? Of course. What else is there? Apparently, that's what happened. Is that what you do with all your cash? Yes. Rebranding as AI compute infrastructure, which I'm wondering if they'll give me AI compute infrastructure for cheaper. That would be kind of nice. You know, it's amazing.
Like, you know, we're joking about this being kind of the pivot you didn't see coming, and quite a pivot too, but their shares jumped at least 700% based on what I'm looking at here, which is quite a jump, you know, in terms of the market not only accepting, but endorsing, that kind of a decision. You've got to be wondering if there aren't many, many boards out there and CEOs that are kind of going: we're in kind of a tough spot in our business, things have been struggling recently, you know, maybe we go buy GPUs and go into the AI business. I mean, it apparently is a perfectly legit business plan now. Yeah, I guess on the positive side, this could seem to be a rational allocation of capital, right? So I have a bunch of capital. Well, I don't have that much capital. I wish I had that much capital. But a party has that much capital. And, you know, if your core business is dying and you're able to sell that off, you have a company, a stock ticker, then what, you know, what's the hot thing? And obviously compute is a core part of the expansion of AI everywhere, the running of these models at scale. Many might not be self-hosting models, but they're certainly consuming models that are running on infrastructure somewhere, right? And so I guess from that perspective, it could be seen as a very positive and useful kind of pivot. What's your thought? Well, apparently so. I mean, the market is endorsing it. And I think, prior to this announcement, you know, this is the kind of thing nobody would have bought into. Like, it would have been seen as a joke. But the fact that, at least at this point, the market's doing that really does make such a pivot a consideration that companies may be evaluating.
And, you know, in these articles, as they talk about Allbirds, at one point in time, being the next Nike, you know, I've seen that bantered about in some of the articles. And it got me thinking for just a second there, like, what if Nike were to do the same thing? What if Nike were to pivot from shoes? Yeah, just do it. But I'm wondering, would they brand themselves as AI? Sorry. Yeah. AI Jordans. There you go. Yeah. Speaking of terms, I was running across this term. You know, sometimes we try to clear up jargon on the show, keep it pretty practical, and sometimes jargon doesn't make any sense to me. But this term, neocloud, is this something that you've run across, or is this new to you? This is new to me, so you'll have to take us into neoclouds. So apparently, and this is related to the Allbirds thing, because apparently neocloud, sometimes referred to as AI-native cloud, is kind of a shift that we've seen recently, where a neocloud is cloud infrastructure that's built specifically for AI workloads, not general computing. So in that way, Allbirds kind of would be potentially putting together a neocloud. So, like, the old cloud model is: you have your web app infrastructure, you have managed databases, you have managed storage of some type, you have some IT or logging and monitoring services. It's kind of general purpose, flexible, lots of different services. The idea of this neocloud or AI-native cloud, think of companies like CoreWeave or Together AI or Lambda Labs, right, is infrastructure that's built for AI training, inference, or both: massive GPU workloads, kind of GPU first, not CPU first. And this exists partly because GPUs are scarce, including in the hyperscalers, in the general cloud platforms. The workloads are different, right?
Because maybe you are running a lot of things across many nodes, running a large model across many nodes, with lots of movement of data, often, and you're kind of supply chain constrained in terms of what you need to support. So that's the idea and kind of how it intersects here. This was a new one for me as I was looking a little bit at this story. I'm curious, do you have any insight into it? Like, if you're looking at neocloud companies and you're comparing them against kind of the traditional cloud players, you know, which are the Alphabets, the Apples, the Microsofts, the Metas, how is the business model changing, and how much is neocloud eating into that? You know, are we seeing it stay very specialized, or is it making kind of general market traction? Well, I don't have exact numbers right in front of me. If our listeners do have that, let us know, point us to those on social somewhere. But I do know that I'm seeing a lot of CoreWeave and other, I guess, other neocloud types of companies being talked about quite a bit. And I think that is partially because there can be this specialization towards the AI workloads and the specific compute there. And as you know, going into a hyperscaler, if I go into AWS, or if I go into some of these platforms, you can do just about anything. And there isn't that focus, which is good in one sense, because you can kind of support a lot of different types of things. But if you're an AI-native, AI-forward company and, you know, maybe you're quickly spinning up no-code applications, you're not maybe doing a lot of that hosting and management in a traditional way, then maybe it makes sense for you to run a lot of that stuff serverless or otherwise and kind of have this pay-as-you-go in AI-specialized clouds, which is kind of interesting.
I guess that's one of the things about the Allbirds case that you could talk about on maybe the negative side. My hot take on this is: really, what Allbirds is bringing here is a company shell and capital, right? They're not bringing any domain expertise that I'm aware of. Maybe there's some domain expertise around supply chain, and maybe manufacturing or industrial settings like that, that they're bringing, but they're not bringing AI-specific expertise in terms of building this kind of neocloud. That's true. The other thing is, approximately $50 million, although that's much more money than I can imagine generally, is very much a drop in the bucket in terms of the AI data center market. So part of my question is like, okay, create your little data center. It is very much a drop in the bucket, whether you look at what China is spending on data centers, or just companies in the U.S. investing billions of dollars in AI data centers. That maybe is the cynical take on this: okay, you have a little bit of capital, you don't have the domain expertise in AI, and you're going to what, spend $50 million on a little data center? How is that going to make a mark? And maybe part of it is, this is the foothold, and more capital will be infused, and they'll figure it out. And I don't wish them bad or anything. It's just more of a skeptical take. Yeah, I mean, in another industry, that money would seem like quite a starter. But in this industry, the availability of GPUs in the ecosystem is already quite strained based on the demand. And if you look at the fact that globally there's basically half a dozen key players in the GPU ecosystem in terms of supply, as companies may pivot to this kind of business model, which is really NVIDIA, TSMC, AMD, Intel, Apple, and Qualcomm for the most part.
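As a rough back-of-envelope on the $50 million figure discussed above: the per-GPU cost and overhead factor below are ballpark assumptions for illustration, not numbers from the episode.

```python
# Back-of-envelope: how far does $50M go in the AI data center market?
# Unit costs here are rough, assumed ballpark figures, not from the episode.

def gpus_for_budget(budget_usd: float, cost_per_gpu_usd: float,
                    overhead_factor: float = 2.0) -> int:
    """Estimate deployable GPU count for a capital budget.

    overhead_factor roughly accounts for networking, power, cooling,
    and facilities on top of the GPU hardware itself.
    """
    return int(budget_usd / (cost_per_gpu_usd * overhead_factor))

allbirds_budget = 50_000_000       # ~$50M capital, per the episode
per_gpu_ballpark = 30_000          # assumed cost per data center GPU

print(gpus_for_budget(allbirds_budget, per_gpu_ballpark))        # 833

# Compare against a single multi-billion-dollar buildout:
hyperscaler_budget = 10_000_000_000
print(gpus_for_budget(hyperscaler_budget, per_gpu_ballpark))     # 166666
```

Under these assumptions, $50M buys on the order of a thousand GPUs, which is why the hosts call it a drop in the bucket next to billion-dollar buildouts.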
And each of those is cranking out the types of chips they make in this capacity for AI purposes. And so I can't help but wonder, if this becomes a trend where you see a lot of struggling companies pivoting into that, what does that chip supply chain start looking like? It gets even more strained going forward. So it will be really interesting to see if this turns into a trend, and to watch what happens with that. Yeah. And what do you think, Chris? There's kind of two elements happening here. One is the expansion of these very much centralized compute resources in data centers, which will grow. But there's also this push, I don't know if this was one of the trends that we talked about at the beginning of the year for 2026, it's certainly one of the trends that I'm thinking about in terms of the market in general, towards kind of physical or embedded AI, where AI is kind of living everywhere in a bunch of environments, whether that's kiosks in a retail environment or, you know, actually on the manufacturing floor, not in a data center, for a manufacturer, of course, or in phones, or, we just had the conversation with Kama AI, who has AI in these devices that they're putting in cars to make them self-driving. So yeah, what is your take on this, and how could people think about it? Is it that both are increasing simultaneously, like we'll just see more data centers and we'll see more physical AI, or is there a shift more towards that embedded, edge-centric model versus everything being centralized in data centers? I mean, I think it'll be all of the above, in my view, but I think the giant growth area is going to be in what you might call far edge, because people define edge differently. You know, some people would say kind of the edge of the data center, the edge of the cloud, is edge.
But if you're talking about devices that are embedded in physical products, that are not directly cloud connected, or are, but are not relying on the cloud for all of their functionality, then, I mean, there's huge, huge, huge growth potential in that across so many different industries. And that's still in its infancy. But yes, I do think that your notion of neocloud, as you instructed us a few minutes ago, is an opportunity that many, many companies will go at. My gut is that in the long run, that is not as profitable, just because there are already huge players dominating that. And as others fill in the niche, there'll be many, many players there. So it'll be interesting to see if that continues to be an amazing strategic opportunity versus specializing out in various devices that are embedded. Yeah. Well, I guess that is your set of beliefs or assumptions about something, or shall I say your mythos about that, which brings us to an interesting topic: Mythos. What's the right way to say it? I'm actually not 100% sure. I've heard people say it both ways. So either one is fine for today. Okay. Unless folks have been under a rock the last week, they hopefully have heard a bit about this already. Yeah, I'll maybe switch between the two. That way, at least for part of the time, I can seem smart. Yeah, I mean, go ahead. But the Mythos model from Anthropic has been in the news, or I guess the supposed Mythos model that Anthropic has somewhere, not yet seen by people, is in the news, I should say. Correct. So the short of it: there is a new frontier model that I think logically you would say is kind of the next thing past the Opus model, which has been the powerhouse driving Claude Code. We've talked a lot about Opus and Claude Code on the show. And so the next generation, being Mythos from Anthropic, is a powerful model.
But back before any of us had heard of it, I think what's been reported is that Anthropic had it in a sandbox environment. They discovered it was particularly adept at uncovering security vulnerabilities in just about every meaningful software package, arena, whatever you could imagine. They discovered, they claimed, many thousands of vulnerabilities in every operating system and every browser. And they realized that it could have profound effects out there on its own. So instead of releasing it, as I think they had been planning, they kept it close-hold. And they started a new project called Project Glasswing, which is a security project that is kind of closed. And they brought in apparently 40 companies, but only about a dozen of those companies are public; a number of companies are not. And those companies are being invited to use Mythos to make sure that their various systems are not exposed, or to give them time to fix those. So there's not a lot of information, as you would expect, about the specifics of that process. And so that is ongoing right now. And so we'll see what happens. We don't really know what the future of Mythos is, but I think I would finish by saying, if you just kind of look at it in the same way that you and I are often talking among ourselves and with guests about these types of models: the fact that people know it's possible now means that it is probable that other people will be developing models like it, as we've seen in every case ever on frontier models. And so it might bespeak a very interesting future where we are seeing some tremendous capabilities from frontier models that are, once again, a significant step beyond the generation that's public. So who knows? But I think in the months ahead, I suspect this is a topic we will end up revisiting from time to time. Haven't we been here before, Chris?
I mean, you and I have been doing this for a while, and it seems like this is the same conversation we've had with respect to some OpenAI model releases and gated releases of this and that, because it was going to end the world or something. It was GPT-3, I believe. Am I remembering correctly? I believe it was. It was like an earlier one. Yeah, we talked about the whole gated release thing, right? Yes, and they were holding it. And then finally it was out, and they didn't even try that on the 4 side. And now, like, you look at, you talk about GPT, even GPT-4 feels like, oh man, that thing sucks. Yeah. So it's like, at the time it was going to end the world, but it kind of sucks. I was talking to somebody the other day and they were using GPT-4o in their business. And I literally said, why are you using that dinosaur? You know, why would you do that to yourself? And so, yes, this is definitely coming around to the same story. I mean, I'm not saying it's not better. I think my point is just, hey, I don't think it's world-ending. People can probably rest a little bit at night. I do think, and this is, I think, from Reuters and a couple other places, that it's especially strong, maybe, at discovering vulnerabilities and exploiting those vulnerabilities. And so this does expand things. I mean, even already, right, I can use whatever models and agentic coding techniques to create malware, just like I can use them to create great software, and I can exploit systems. So there is definitely this narrowing, which maybe has always been the case in the cybersecurity world, where the threat actors get better and there's better availability of tools and that sort of thing. This is certainly a different level of that. I'm not saying it's equivalent. I'm sure it's a different level of that. Right.
But, you know, when I look at this, regardless of what Mythos' capabilities really are, whether they're really high, low, whatever, I think that, you know, Anthropic has historically been a little bit lower key and kind of safety oriented, and not quite as flamboyant and over the top as the OpenAI folks have been. I mean, Sam Altman's known for the kind of statements he makes all the time, and people over time have kind of learned to take those with a grain of salt. But it's starting to look, you know, with some of the Claude Code stuff and getting into this, like maybe Anthropic has started taking a page from that playbook at OpenAI in terms of the marketing aspect of this. Because regardless of what Mythos' capabilities are, whether they are amazing or less or just whatever, this is still an amazing amount of attention. I mean, you and I are sitting here talking about it. We're contributing to that. They were all over the news. And so it's a fantastic marketing strategy on their part, no matter what the reality is. And who knows how much is also tied to recent problems in terms of interactions with the government. I do think, and I would be lying if I said I did not have a personal bias and hope for this, but I do think that this emphasizes kind of a tailwind for governance and control capabilities within the AI world, which is, of course, an area where I work. But also, letting people know that there is a risk is very different than controlling that risk. And so whether it's, like, an AI SOC that's using AI within security operations, on the offensive side or defensive side, I think that it ushers in a great tailwind for those companies.
But also on the, like, hey, if companies actually want to use a model like this, there are bad things that can happen as well as good things, which emphasizes a push towards governance and control, regardless of what model stack you're using. And, you know, shout out if you're out there, we'd love to have you on the podcast, but there are things like the AI underwriting company and others that have received funding recently, where they are actually trying to establish some of those auditable certifications for companies in terms of how they institute governance and what evidence there is for that. That's a very different thing than saying there is a risk, right? It is, it is. Yeah, but interesting. I look forward to trying it out whenever I get my hands on it. I'm not part of the, I don't have a golden ticket, so I'll have to wait with the other ones, I guess. Yeah, I'm right there with you. Until I go for token maxing my Mythos endpoint. Oh, you threw that out. Now we've got to talk about that. I mean, token maxing is the hottest new term over the last few weeks. First off, I feel really old, because I just don't like the whole whatever-maxing thing. I feel old talking about anything with that sort of term. But yeah, I guess there is the token maxing thing. Oh, okay. So for those who have not heard the term, again, it's been all over the place recently. So, stepping back, because I think this really ties back, ironically, into the Anthropic conversation that we had, but stepping back a little bit to us talking about Opus as the greatest thing since sliced bread, and the fact that, as we have discussed, Opus combined with Claude Code made a substantial difference or change in how people were approaching coding and stuff. I know, you know, me coding last year with AI assistance in various forms versus me coding this year, the workflow is quite different.
And really, it has accelerated in a lot of ways, aside from where the models are and stuff like that. So the tool set has been great, and we've talked about this on the show fairly recently as well. So, acknowledging this process, we've had a number of the big traditional AI companies, especially Meta, you know, can't imagine the Meta culture embracing this. Yeah. Meaning, of course it has. If you look at who Meta's CEO is: they have gamified AI tool use for developers, and they're trying to basically get them to spend as much as they possibly can on Claude Code and other competing development tools, to try to accelerate what any given developer can do, at levels that the rest of us look at and go, that's insane. You know, where it's like: you, a developer, go spend hundreds of thousands of dollars on tokens to accelerate your capability. And I guess this is trying to 10x, if you will, to use another buzzword, what any given developer is able to do in terms of producing work. And they're orchestrating teams around token maxing and stuff like that. And I know at Meta, I don't know if it's still up or not, but they had a scoreboard in one of their main areas where everybody could see who was token maxing the most. And then people were gaming the token maxing system, so they would actually spend tokens on trivial things just to make sure that they were showing up on the scoreboard. So yeah, in those kinds of stories, absolutely doing it to excess. But that's trickled down.
And so there are many other organizations that may not have the budgets of some of these top AI companies, but they're trying to figure out: what can we afford for our developers to do in terms of spending money on tokens, and what will that get us in terms of production capability in our own businesses? So that's now another big thing out there in business right now. Yeah, I would say just anecdotally, I very much think that we, meaning the company that I'm leading, are not spending enough. We're not token maxing. We have no leaderboard. But I also don't think we are spending enough on AI usage. That's certainly one of the things as a founder I think about, and I'm pushing on. It kind of reminds me, well, I don't know, there's all sorts of things, obviously, and parallels you could draw with everything in moderation, right? But certainly people have to push the boundary for us to know where the boundary really is, right? So I think there's probably abuses within that, and there's inefficiencies within that, and things that don't make sense. But also, it makes sense to me that there would be a push towards figuring out where the proper boundary is, because I don't know if we totally know that yet. I know even some folks, I think it was Jensen from NVIDIA, who obviously has a horse in the race in terms of token maxing, right? Maybe. Maybe. As we talk about neocloud and GPUs. But I think he was saying he would be very alarmed if there was an engineer that was making $500K and they weren't spending $250K, half their salary, on tokens. I don't know if that's where I land, but like I say, I think we're not spending enough on tokens. And this is where, in one conversation we're talking about, you always have an infinite engineering roadmap, right?
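The spend heuristic attributed to Jensen Huang above can be sketched numerically. The 0.5 cutoff comes from the $250K-on-$500K figure in the episode; the engineer data and the verdict labels are purely illustrative.

```python
# Sketch of the token-spend heuristic discussed in the episode:
# an engineer on a $500K salary "should" be spending on the order
# of $250K on tokens. Names and numbers below are made up.

def token_spend_ratio(token_spend_usd: float, salary_usd: float) -> float:
    """Token spend as a fraction of salary."""
    return token_spend_usd / salary_usd

engineers = {
    "alice": {"salary": 500_000, "tokens": 250_000},
    "bob":   {"salary": 500_000, "tokens": 20_000},
}

for name, e in engineers.items():
    ratio = token_spend_ratio(e["tokens"], e["salary"])
    # The 0.5 cutoff mirrors the $250K-on-$500K figure; it is not a
    # validated threshold, just the heuristic restated as code.
    verdict = "token maxing" if ratio >= 0.5 else "under-spending, per the heuristic"
    print(f"{name}: {ratio:.0%} -> {verdict}")
```

As the hosts note, ratios like this are closer to vanity metrics than productivity measures; the sketch only makes the arithmetic of the quote concrete.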
So on the negative side, certainly you can just spend tokens on dumb things. I'm willing to say that. But I also think it is some indicator of how effectively you're running an AI-driven engineering team in today's world. I think that's a sensible approach, in the sense that what's unknown right now is what the price-to-productivity translation really is. If you look across many organizations, I suspect you'd find a very significant standard deviation in that. So going forward, as this matures, we're going to see best practices emerge. There will be books written, the kind you'll start seeing in all the developer spaces, about how to do it efficiently. We're just not there yet; there will probably be guidance developing over time about how to do this without just taking Jensen's spend-all-the-money-on-GPUs-you-possibly-can approach, which is what you'd expect from him. Yeah, and maybe it's because we don't totally know the right metrics to optimize and max out right now. Tokens are probably a vanity metric, in the same way that clicks to your website are a vanity metric for your top-of-funnel, go-to-market activities. You can put a lot of money into ads or something like that and get a lot of noisy traffic, or in the case of bot traffic, a lot of visits to your website. That doesn't mean you're doing a great job in terms of your organic discoverability, SEO, et cetera. We have developed other metrics to judge that over time, right? And there have been certain metrics around velocity and so on for engineering teams over time. Now all of those rules are kind of broken, to your point. So what metrics do we use? Sure, token usage is probably a vanity metric, but it's not like it means nothing.
Yeah. Correlation, causation, all of that. If you are doing well, you probably are using a lot of tokens, but it doesn't follow that if you're using a lot of tokens, you're doing well. And one thing I'll throw in as we close out on this: you mentioned a moment ago the kind of infinite roadmap a company might have. But it is possible to outrun what your organization can manage in its other capacities. Even if you're able to produce a lot more productive code that drives the capability of whatever your company does, if you're outpacing what the rest of the organization can absorb, manage, and modify, that's another way of losing efficiency, where pure token maxing may not get you what you want. You can pull it back a little so that your organization doesn't die under the weight of it. Just one last thought to keep in mind: it's a business, not just programming. Well, I would be curious, if someone was token maxing, to take a look at a few of their chat logs to see how they were doing that token maxing, which it appears could actually be discoverable information in a court of law. So, transitioning a little bit: one interesting thing I saw this last week was a ruling in a case where a federal judge actually forced a defendant to hand over chat outputs, I think in particular from Claude in this case, that they had used to prep some legal materials. The overall idea here being that AI systems like this are not lawyers. They are just what they are: tools.
And so conversations with your AI legal assistant, if you want to think of it that way, aren't protected by attorney-client privilege, even if you're talking about legal matters. Essentially, the court treated the AI system like a third party, meaning confidentiality was effectively waived. That's kind of disturbing, and if you're listening out there, you're probably thinking of that one conversation you had with ChatGPT or Claude and going, oh man, I hope I never go to court, because they're going to find out about that chat log. Yeah. I'm not at all surprised about this, and I think there's been a lot of foreshadowing that such things would happen. Most of these companies, probably all of them, have long since said this is not protected. With ChatGPT, once the voice capability came out way back and people started having live conversations, that evolved into people treating it as a confidant, or a friend, or that special someone who understands them. I think these are all different flavors of the same thing, so I don't think it's strictly a legal-field issue. It's also a medical-field thing, a psychiatric-field thing. In this case, I think they got bitten on it when they went to court. But it's important to remember that no matter how these conversations feel to you personally, they don't have any special protection; the courts may ask to see them, and that's an easy thing for them to get. Well, I'm just thinking through all the implications of this. Certainly there's the general public, who might be dealing with divorce cases or whatever else in their own lives, or criminal matters.
But for me as a founder, with Prediction Guard, there's a lot of information that I process. And we have a lawyer, right, like most startups do or should. We're processing agreements, or contracts, or updates to license agreements. It's so tempting to say, well, I got this from my lawyer, let me pop it into X AI... well, I guess I shouldn't use X as a general variable now, because X is not a general variable, it's a social media company run by Elon Musk. But if I pop that into a random AI system, then essentially I've moved something that was confidential into something that is explicitly not confidential. So if those are in draft form, or if we're talking about things that are just contemplated within the company, or dealing with a problematic customer, all of that essentially becomes discoverable. I'm not, hopefully, getting sued for anything, but it does make you think about the ways you're using these systems. And it does seem, in some of the response to this, but even before it, that some law firms are putting language into their contracts, and maybe people should think about this in their own license agreements and other things they're doing for their products, saying: hey, if you're putting information into an AI system, you're essentially waiving any privilege or confidentiality, because that is discoverable unless the system is explicitly private or you're running a private model locally. So there are warnings that need to be sent out to people not to do this; there are implications, and maybe contracts that need to be updated, all of that. Yeah, I'd like to wind up with a question, and maybe some folks in the audience can educate us a little on our social media channels.
And that is: if you look at communication tools like Signal and ProtonMail, which cater to users who explicitly don't want there to be a record, there's literally nothing there that a government or a court could access. I'm wondering if there will be those kinds of systems for AI chat, and what the legalities of those are in various jurisdictions. So you can have your chatbot, but you're literally able to honestly tell the court, sorry, no such log exists. If anyone has any insight into some of that, the juxtaposition of AI chat and the kind of no-record systems we're seeing pop up, I'd love to hear about it. So, Daniel, any thoughts on that yourself? Yeah, I think it's really interesting. It also brings us back to something companies have been forced to deal with through multiple technology transitions: what you could or couldn't send confidentially in a physical letter, then the rules about what you could and couldn't send confidentially in an email. There are different intuitions we've built up around those, and we just don't have the intuition here yet. I think that will develop, but to your point, it also presents maybe some market opportunity, or some process sorts of things that everyone needs to think about. So, it was fun to talk today, Chris. Go ahead and kick your shoes off and relax for the evening. You know, invest in AI data centers. I guess that's what we should do. There we go. That's my pastime going forward, I guess. Yeah. Awesome. See you, Chris. Take care. Find us at practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner, Prediction Guard, for providing operational support for the show.
Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week. Thank you.