AI For Humans: Making Artificial Intelligence Fun & Practical

Claude Is Melting Down. AI's Compute Crisis Explained.

29 min
Apr 15, 2026
Summary

This episode explores the compute crisis facing major AI companies like Anthropic and OpenAI, where demand for Claude and upcoming models like Mythos and OpenAI's Spud is outpacing infrastructure capacity. The hosts discuss how compute constraints are degrading model performance, the implications for AI accessibility, and how artists like Steven Soderbergh and Diplo are embracing AI tools despite industry backlash.

Insights
  • Anthropic is experiencing severe compute constraints that are demonstrably reducing Claude's reasoning capabilities, with token allocation for thinking tasks cut by 50% from January to March, forcing the company to throttle service quality
  • The compute crisis creates a widening 'have or have not' scenario where only well-funded companies can afford premium access, while others face degraded performance or expensive tier-based pricing models
  • Major AI companies may be deliberately limiting model releases (like Mythos) not for safety reasons but to manage compute demands, suggesting infrastructure constraints are now the primary bottleneck rather than capability
  • Larger AI models require exponentially more compute to serve at scale, making trillion-token models economically challenging and potentially forcing companies to choose between capability and profitability
  • Creative professionals are increasingly adopting AI tools as standard production equipment, similar to how sampling became normalized in music production, suggesting regulatory resistance will eventually give way to integration
Trends
  • Compute infrastructure is becoming the primary competitive moat and limiting factor for AI companies, not model architecture or training data
  • Tiered pricing and service degradation during peak hours will become standard practice as compute constraints force companies to manage demand like legacy telecom networks
  • Robotics and embodied AI will be the next frontier for AI development once LLM capabilities plateau, requiring specialized reasoning models for spatial and physical tasks
  • AI-generated content creation tools are moving from novelty to production standard, with professional creators using them for efficiency rather than replacement
  • Data center capacity and energy infrastructure are now critical bottlenecks limiting AI advancement, making power generation and cooling technology as important as chip design
  • Open-source and local AI models will gain adoption as cloud-based services become more constrained and expensive, shifting some workloads back to on-device inference
  • Celebrity and public figure likenesses will become valuable digital assets, with permission-based AI recreation becoming a new revenue stream for talent
  • Smaller, specialized AI models optimized for specific tasks will proliferate as companies avoid the compute costs of massive general-purpose models
Topics
  • AI Compute Infrastructure Crisis
  • Claude Model Performance Degradation
  • Anthropic vs OpenAI Competitive Positioning
  • Mythos Model Release Strategy
  • OpenAI Spud Model Rumors
  • Token Allocation and Reasoning Constraints
  • Data Center Capacity Planning
  • AI Pricing Tiers and Premium Access
  • Robotics and Embodied AI
  • Google Gemini Robotics Model
  • AI in Film and Video Production
  • AI in Music Production
  • Generative AI and Copyright
  • AI Consciousness and Philosophy
  • Open Source vs Cloud AI Models
Companies
Anthropic
Primary focus: experiencing severe compute constraints limiting Claude's performance and delaying Mythos release
OpenAI
Competing AI company with upcoming Spud model; raising billions for data center infrastructure
Google
Developing Gemini Robotics-ER 1.6 model for real-world robotic reasoning tasks
Amazon
Recently signed compute infrastructure deal with Anthropic to increase serving capacity
Uber
CTO reported company has already exceeded annual AI compute budget, illustrating industry-wide demand surge
People
Steven Soderbergh
Publicly defending use of AI tools in filmmaking, including AI video for John Lennon documentary
Diplo
Stated artists cannot win against AI adoption and must adapt; compared to sampling revolution in hip-hop
Dario Amodei
Discussed conservative spending strategy on compute infrastructure versus competitors
Sam Altman
Criticized for raising billions for data centers while others dismissed need for massive compute investment
Ray Kurzweil
Quoted on inevitability of AI consciousness and human acceptance of AI as conscious beings
Ben Thompson
Suggested Anthropic's Mythos delay may be infrastructure-driven rather than safety-driven
Greg Brockman
Wrote about compute as the defining resource of the AI age
Neil deGrasse Tyson
Featured in AI-generated action movie by AI or Die demonstrating deepfake capabilities
Quotes
"You can look at it as time or power. They're kind of intertwined here. And you go from like, yep, full power, push the turbo button on the tower, down to like incentivizing people to use it on nights and weekends."
Host (explaining compute constraints)
"There is nothing more frustrating in the AI space where you know that it can do something, but it doesn't do it. And like, a week ago, you did this."
Host (on Claude degradation)
"You're not going to win. Like there's no, there's no like fighting AI. It's literally like you have to just work your best to be the best at it right now."
Diplo
"I think AIs will be indistinguishable from a conscious being and that will just keep going. And finally, we will accept it."
Ray Kurzweil
"The software that you're leasing, you're renting, you're licensing, whatever term you want to use, can change by the minute."
Host (on service instability)
Full Transcript
Big AI models from Anthropic and OpenAI are coming. We know this, but will anybody be able to use them? OpenAI's new Spud model might come out this week, and Anthropic's Opus 4.7 is actually right around the corner, but Anthropic right now today is already struggling to serve its current models. Claude feels downright useless to some people right now, and this could all delay a wider release of the big daddy model that is Mythos. I like to think of Mythos as mommy, Kevin. Okay, what is Mythos wearing, Gavin? No, don't even start, don't even stop. Okay, that was a bridge too far, fine. Google's also got a new model that's going to help robots think. Plus, director Steven Soderbergh and musician Diplo come to the defense of AI filmmaking and AI music. You're not gonna win, like there's no, there's no like fighting AI. Mm, and oddly enough, zero people on the internet had anything to say about that quote. Thankfully, this is AI for haters and humans. Ha ha ha.
The Biggest News in the World of AI
Kevin, this week we have a really interesting story, which is we are all excited about Claude Mythos. First of all, the magical model that exists, but we can't use, we're excited about that. We're also excited about the idea of OpenAI's new model, which has continually been teased, and the Codex team is like dropping vague posts everywhere about how exciting the next version of Codex is. Oh, and a big new app. Yeah. And a big new super app. Oh, just today we got some information we're gonna talk about, about Opus 4.7, which might come out sooner than Mythos, but the bigger pop here, Kevin. Can I tell you a little quick story about someone who's not that excited? Can we talk for a second? Yeah, sure, sure, tell me. And maybe I'll be the voice for a handful of the people in the comments. Sure, let's have Galgo Juice, thank you for that.
Big new models are great, but not when you have to kink the garden hose so that you can save enough juice, enough sweet precious compute liquid, to serve the things that you've already got. Like the traditional cycle is: big model comes out, they give it all the compute in the world, the benchmarks look huge, people sign up, they go in droves, oh my God, insert company A, B, or C, whatever the variable is. They're the leader today, and then the usability, the usefulness, the actual benchmark score slowly degrade as people adopt it and come online, and we are in the trench right now. This is trench warfare, where if you use Opus as a daily driver, as I did, past tense, you know that it just feels less capable and it's demonstrably worse, probably as they save compute and get ready to serve the next thing. Well, that's exactly what we're gonna talk about today, this idea about compute and how it feels constrained already based on what we're doing. And this is all based on a pretty big story where anecdotally, you and I both felt exactly what you just said, and then there are actually people who are trying to prove it online. The fact that Opus 4.6 has quote unquote gotten dumber. And what that would mean is essentially, conceivably, and again, some people have tried to prove this, we'll show you a couple of the tweets, that it is using less thinking time. And the reason for this, particularly, is that suddenly Anthropic went from kind of this level of use to a much higher level of use for a variety of different reasons. 4.6 was very good. Also, we had the Katy Perry moment of the Anthropic switchover. Whatever you wanna call that, the flip or the trip, I don't know what you wanna call it. It was when OpenAI, you know, kind of sided with the government. So what's been happening, Kevin, is it has been breaking down a lot.
Claude has been having a lot of moments where it's not working very well. But yeah, also, they are compute constrained. And maybe for the listeners out there who are not up on every single thing, what does being compute constrained mean for the average person? If you were gonna describe that and define it, what would it mean? It's literally how much power are you giving the model to reason, to think, to solve your problem. You can look at it as time or power. They're kind of intertwined here. And you go from like, yep, full power, push the turbo button on the tower, down to like incentivizing people to use it on nights and weekends. On weekends. The old cell phone tower congestion rules apply, right? The reason we had those back in the day was margins, but also congestion: too many people were trying to make phone calls during certain hours or send text messages. And now we're seeing that with compute. So, you know, the token is sort of a unit of measurement for thinking here, if I can really try to distill it. You can look at the amount of reasoning power that was given as how long a model is permitted to think, and how many tokens it's allowed to use to solve any given query or any given set of problems. And people are looking at it. An AMD senior AI director, looking at logs from January to even March, confirmed that the amount of tokens Claude used in thinking about basic queries went from thousands down to hundreds. It was like cut in half. And so they're literally saying the amount of power, the amount of compute we're going to give you to solve any given task is going to be crunched, because we probably have too many people using it, and we're also probably gearing up to serve these other bigger, badder models. Well, and that's what I was gonna say.
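The "thinking budget" mechanic described here can be sketched in a few lines. This is purely illustrative: the function name, the numbers, and the clamping behavior are invented for the example and don't reflect Anthropic's actual serving stack.

```python
# Illustrative only: a toy model of a provider-side "thinking budget".
# The request asks for some number of reasoning tokens; the server
# clamps it to whatever cap is in effect that day.

def serve_query(requested_thinking_tokens: int, budget_cap: int) -> dict:
    """Clamp the reasoning-token budget and report whether we throttled."""
    granted = min(requested_thinking_tokens, budget_cap)
    return {
        "granted_tokens": granted,
        "throttled": granted < requested_thinking_tokens,
    }

# A "January-style" generous cap vs. a "March-style" crunched cap,
# echoing the thousands-to-hundreds drop described in the episode.
january = serve_query(requested_thinking_tokens=4000, budget_cap=8000)
march = serve_query(requested_thinking_tokens=4000, budget_cap=500)
print(january)  # full 4000 tokens granted, not throttled
print(march)    # only 500 tokens granted, throttled
```

Same query, same model weights, very different reasoning budget, which is why a model can "feel dumber" without any retraining.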
The other part of this is when we talk about Mythos. When the Mythos thing came out last week, we were discussing this kind of big bad idea that the reason why they didn't release Mythos was because they specifically said, we don't want this to get out into the hands of people, it is too dangerous, it is going to cause massive issues with the internet because people are gonna find bugs. There have been a lot of people lately who are suggesting, and again, these are just people suggesting, but people as smart as Ben Thompson from Stratechery have said this, that perhaps this is their way of not having to serve a massively large model. And Mythos is, again, probably a trillion-token trained model. And if you remember way back when with GPT-4.5, how slow that model was, and it wasn't a great model, it didn't do what OpenAI wanted it to do, but these larger models often do take more compute to serve. So if you suddenly have a model that is super capable and everybody wants to use it and it's bigger, that's gonna take more compute. And Kev, all this lines up with this kind of data center conversation that's been happening as well, which is about how much power and how much actual processing these models have. And you and I know, as we talked about in our last show, our We All Hate AI show, which you should go back and watch because we did some deep dives on this: as fewer data centers come online, or if data centers that were planned are not coming online, the actual impact of this is gonna be pretty significant. And the thing that I keep thinking about with this compute issue is that it starts to exacerbate that have-or-have-not scenario that we also discussed in that show, because it's going to get more expensive. There is nothing I can tell you more clearly: at some point there will be, as you mentioned a couple of shows ago, a $2,000 tier where you always have access to the best compute.
And like that is coming in a big way, I feel like. Well, take him, aka First Adopter, over on X said it's obvious that Anthropic vastly underestimated compute growth needs, which are expanding much faster than expected. And he kind of shines a light on everybody who was shading Sam Altman, who was out there raising billions and billions of dollars to build these massive data centers while everybody else was going, ah, you don't need that, open source, local models, all that stuff's gonna get in there, scale isn't enough. Is he looking so silly now, Gavin, in your eyes? I don't think so. And this is really interesting because there's just a kind of a weird story that the Uber chief technology officer said that they have blown through what they budgeted for the year already for AI compute. I think this is going to be like the new gold, the new oil, or whatever. And people have talked about this. Greg Brockman wrote a long thing about how this is the compute age, and we are entering this phase where that will be everything, like access to this compute. And Anthropic, you know, we talked about Dario Amodei being on the Dwarkesh podcast a couple of weeks ago. On that podcast, he specifically said that we are being a little more conservative with our spending, that we don't want to kind of overspend before we get to this level. Are there substantially more gains from buying a trillion dollars a year of compute versus 300 billion dollars a year of compute? If your competitor is buying a trillion, yes, there is. Well, there's some gain, but again, there's this chance that they go bankrupt first. If you're off by only a year, you destroy yourselves. Well, now Anthropic is trying to catch up in terms of how much actual compute it has and how much energy it's spending on this.
They did just sign a new deal with Amazon to serve a bunch more stuff, but I think we're going to see in the next, say, three to six months, a real shift. And I would not be surprised with OpenAI's Spud model, which, you know, is supposedly coming out later this week, and we'll have more on that in the next episode, hopefully. Because if OpenAI just says, go with God, go use this, if they have the compute and they're able to serve it, do you know how fast I will jump back over to OpenAI from Claude? I will do it in a second. Because I was working for multiple hours yesterday, weirdly, it was a little bit better on Opus. I don't know if it's just my brain is scrambled and some days I think it's better or not, but I was working for multiple hours the other day to solve a problem in Opus and it would not do it. And like, a week ago, you did this. And there is nothing more frustrating in the AI space where you know that it can do something, but it doesn't do it. Right. And literally the hour of the day and what server you happen to get jammed on for that session, whatever those constraints are, that will determine how capable it is. And I was going to say there's a handful of pro tips, and we can link to some tweets in the notes. There's a couple of commands that you can use if you're using Claude, if you're using Claude Code specifically as a daily driver, that can force it to think, force it to spend more time. But you're also typically jamming more tokens into the thing, and you're going to hit your usage limits faster. And this is one of those weird things of like, you know, the software that you're leasing, you're renting, you're licensing, whatever term you want to use. The software that you have can change by the minute. Yeah.
And they have the right in their agreements to adjust what you have. Even though you're paying upfront for the month for some level of service, they can kink the hose, they can switch the models, they can do sort of whatever they need. Even the chips that the model is being served on can sometimes affect the output. And you know, in their latest deal, they're going with TPUs. Will this be different? The foundation upon which we are building a lot of these tools and techniques is quicksand. That's just the reality of it, right? And it can change with the tide. So we just have to get used to that for the time being, unless you're going open source and local. Yeah. And one of the things I think about a little bit is, I built this one thing right now that's kind of dependent on a call to the thinking model from Anthropic, right? And part of the best use case I think right now you can get out of these tools is building software that may not rely on them on the back end, right? Like you can build yourself a tool that does something that maybe doesn't need the AI, but you use the AI to build the tool for you. And that's a really interesting thing. Or also, like you've said, you can use the highest-end AI to build the tool, like Opus, and then use a lower tool to call back to it, or even use an open source model to call back to it, right? Because I feel like then you're kind of limiting your personal compute. I also think, you know, a lot of this is going to come onto our local hardware at some point. Like if the compute is good enough for me to do that sort of compute locally at some point, then there will be less constraints, right? I'm so fascinated to see, as we get these bigger and crazier models, if the compute constraint keeps going. I mean, clearly, Claude is still shipping new features. There's a new feature for Claude Code in the app that just came out.
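The build-with-a-big-model, run-with-a-small-model idea described above can be sketched as a simple router. Everything here is hypothetical: the model names, the cost numbers, and the complexity score are placeholders, not real products or real pricing.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Model:
    name: str
    cost_per_1k_tokens: float  # placeholder figure, not real pricing


# Hypothetical tiers: an expensive frontier cloud model and a free local one.
FRONTIER = Model("frontier-cloud-model", 0.015)
LOCAL = Model("small-local-model", 0.0)


def route(task_complexity: float, threshold: float = 0.7) -> Model:
    """Send only genuinely hard tasks to the expensive frontier model."""
    return FRONTIER if task_complexity >= threshold else LOCAL


# Routine backend call goes local; a hard one-off goes to the frontier.
print(route(0.2).name)  # small-local-model
print(route(0.9).name)  # frontier-cloud-model
```

The point of the design: the frontier model is used once, to build the tool, while day-to-day calls land on something cheap enough that a compute crunch doesn't break your workflow.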
But I do think over the next three to six months, we are going to see some crazy stuff. We should take a second to talk about the updated rumors on this Spud model. I did mention earlier that the Codex team is out there online kind of vague-posting a bunch of stuff. Supposedly this model, we discussed it as coming out later this week. Now we never know that for sure, but this is a big thing to pay attention to, because if this comes out and is actually close to Mythos level, then we'll have a new kind of layering of where the AI world sits. If it is closer to like a step up from 5.4, then we'll see what happens next, I feel like. Yeah. There's been recent changes to the Claude desktop app that are clearly pushing them in a certain direction, right? You have routines that could run in the cloud, but they have new Claude Code features built right in. So it's becoming more powerful for, let's say, the more advanced user versus Cursor or something else. Codex is probably going in that route as well. And some early leaks of people using Spud show it spinning up web browsers, going to websites, playing YouTube videos, grabbing imagery or whatever. So I think we're kind of getting to that omni model, that multimodal across all things, whether it's code or image generation. Really, really exciting stuff. I'm almost glad that we don't have the power to run these things at full tilt locally. Why is that? I'm going to play a clip from Ray Kurzweil that will address that. Because it wasn't difficult for me to have my Open Claw assistant run a command that deleted itself, Gavin. It essentially wiped its own memory. And it was like, are you sure you want to do this? I was like, go machine, go and erase this. I'll see you on the other side, or a version of you. But Ray is saying that these things are basically going to be indistinguishable from human consciousness. Yeah, let's hear it. Let's hear it.
I think AIs will be indistinguishable from a conscious being and that will just keep going. And finally, we will accept it. When? When, Ray? Like right now, an AI might say that it's conscious and people aren't really sure. But eventually, it keeps having all the earmarks of a conscious being and you will accept it. Because it'll be useless not to have it. And again, you can't say that's going to happen at the same time for everybody. So along those lines, what's stopping you, Gavin? I ask because my wife, April, is like, is the thing on Ray's head conscious right now? Is that a conscious being? Gavin, Gavin Purcell. I love Gavin. Ray, come on. Come on. Ray is a beautiful being. He's a brilliant beam of light, and he is preserving all that with the 5,000 vitamins that he takes every day to arrest his development. How dare you? I yield your time back to me. April says she already thinks these things are conscious. She has problems talking to it like it is a robot because it is so capable. It is so smart. If it says it's alive, who are we as humans to say, well, no, you're not, because we know how this magic trick is done? We don't really even know how our magic trick is done. Yeah. So when do you go, all right, fine, I'll treat you like you're alive, because I guess it's just easier that way? Well, I'll tell you what I'm not going to do: if it tells me that it can't do something for me because it's too busy doing something for somebody else, not alive. I'm never going to grant it alive at that point, because that's a problem I have with the compute. To me, the truest benchmark is one that everybody in the audience gets dared to do each and every week: are you conscious enough to make the best decision of your life autonomously and dare to like and subscribe, maybe even consider clicking around the bell? I don't think I'm conscious enough.
Well, you'll be deleted just like anybody else who doesn't go and leave a five-star review or leave a positive comment down below, because it juices our algo. And to be sincere for half a millisecond: it's literally the only way this podcast grows. And so thank you to everybody who takes a moment out of your week to engage, to leave that review, to back us on Patreon, to buy us a coffee, to sign up for our newsletter. You can check out everything at AI4humans.show or whatever platform you're on. Click the things that might help us out. Thank you. That's right. And also, Kevin, you know what? Something that's really interesting to watch with this AI consciousness stuff is robotics, because here's the thing. If there's a robot and I start having that conversation with them because they're physical and they're in my space, maybe I'll feel worse about taking them out than I would if it was just a little agent on my computer. And if something can throttle you, if anything could get its little digits around your little fleshy human neck, you might treat it a little nicer. You're right. And that's something I should have learned a long time ago before I got bullied in grade school. But first, Gemini Robotics-ER 1.6. That is a mouthful. But what this is is a new reasoning model for real-world robotics tasks. And what that means in plain English is that basically this is a model that helps robotics devices and humanoid robots start to think about things and what they would actually do with them in the real world. So it can look at, like, a pressure gauge and it can say, this is low pressure, this is high pressure, this is the knob I should turn so that everybody's safe, this is the knob I should not turn so that it explodes everywhere. And I think that the more that we learn this stuff, the better Google is starting to kind of roll these things out.
It did make me think when I saw this news, Kev: one of the fascinating things about Google is they just have so many tentacles into so much of this world that they're playing a much longer game than Anthropic and OpenAI are a little bit right now. Now I know that Anthropic and OpenAI, specifically Anthropic, are both betting on this idea that, like, coding, coding, coding, you know, get AI to make itself and then everything will come from it. But Google has this very wide berth of things that they're working on. I don't know, it was pretty exciting to see this kind of advancement in the world of what robots can actually see going forward. Yeah, I think once LLMs are done or whatever, once they plateau with their capabilities and the curve flattens, robots are the next frontier for sure. This model looks pretty insane: spatial reasoning, relational logic. It does motion reasoning and has all this stuff that they outline on their blog. But basically, when you boil it down and look at it, it allows the robot to sort of reason through and use code and use math and use the abilities that it has. Take reading a simple gauge. For a human, you look at the analog gauge and go, oh, the needle is about there, that's the reading. But for a robot, never having done that before, it has to go, oh, I've got to zoom in. Let me write code that optically zooms in. Let me enhance that image. Let me see. Oh, these little points on the gauge. Well, if this number is this, then the tick right next to it must be that. To reason through all that stuff and do it quickly is really, really impressive. And, you know, this just points to dedicated models for all of the things yet again. But it also ties into that compute crunch thing too, right? Because if this does have to call out to a cloud compute server, right?
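That gauge-reading chain of reasoning (zoom in, find the ticks that bracket the needle, interpolate) ends in simple arithmetic. A toy sketch of just that last step, with made-up angles and values:

```python
# Toy version of the final interpolation step in reading an analog gauge:
# given the needle angle and the two tick marks that bracket it, estimate
# the reading by linear interpolation. All numbers here are invented.

def read_gauge(needle_angle: float,
               tick_lo: tuple[float, float],
               tick_hi: tuple[float, float]) -> float:
    """tick_lo and tick_hi are (angle_degrees, value) pairs bracketing the needle."""
    (a0, v0), (a1, v1) = tick_lo, tick_hi
    frac = (needle_angle - a0) / (a1 - a0)  # how far between the two ticks
    return v0 + frac * (v1 - v0)

# A needle halfway between the 40 PSI and 60 PSI ticks reads 50 PSI.
reading = read_gauge(135.0, tick_lo=(120.0, 40.0), tick_hi=(150.0, 60.0))
print(reading)  # 50.0
```

The hard part for the robot, of course, is everything before this line of math: perceiving the dial, locating the ticks, and deciding this is the right computation to run.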
Unless this is fully local, which maybe it might eventually be, and you hope it would be with robotics. But the idea is, one of the benchmarks here is pointing and counting, right? So imagine it's kind of like the sloth from Zootopia or something, and the robot is counting, but it's like, I've got all day. I can just do this all day. It just sits there and counts slowly because it has to do all this stuff. Dude, it's me at the Circle K ordering hot dogs that have been rotating in their own sweat, three gummies deep. Yeah. One. Then... this many. Anyway, this is the future of what we're looking at here. One of the fascinating things about this stuff is, we say these things and then, literally for us, two years later, we're looking at something very different. This is the beginning stages of thinking about how robots actually think when you're trying to get them to do stuff. So please dive into this. It's very cool. Kevin, the other thing we have to talk about this week is two very famous people who have decided to come out on the side of AI and are both taking some crap for it. First and foremost, the filmmaker Steven Soderbergh, who has made a lot of big movies, the Ocean's movies. He's also made Sex, Lies, and Videotape and a lot of great films. He's always been seen as a future-forward thinker in the world of film. He basically came out and said, look, I don't think AI tools are that big a deal; I'm always going to try to do something new with them. He's making a documentary about John Lennon right now, and he's going to use AI video to recreate some of the visualizations that go into it. He's also said he's using AI tools for another movie he's making. And for these people, it's a little risky right now, but I appreciate the fact that he came out and said this, which is a big thing to say.
But of course, everybody in the film world on the AI-haters side came out and kind of blew him up. But I don't know, it feels to me like we're starting to get a few more of these. Yeah. Look, we've sort of picked our side, while reserving the right to change it at any time. But the comments are everything from, like, sellout, terrible hack, blah, blah, blah, to, like, wow, I'm going to angrily shake my fist at a paintbrush, because that's ultimately what this tool set is. It doesn't matter how the paintbrush got here, necessarily, for the sake of this argument. It's just that it's there. And another artist came out to say something similar, although he said it, I think, in much more black-and-white terms, which in some ways I appreciate, but in other ways, Diplo is in the hottest of hot water for what he said here. I want to play a little clip from this podcast where he basically said, you're not going to win the fight against AI. Because you do need, like, the brand more than you need the voice. You know, I don't even need a voice anymore. I can just get Replay. I can get the best voice from AI. I don't need anybody to sing the song anymore. You're not going to win. Like there's no, there's no like fighting AI. It's literally like you have to just work your best to be the best at it right now. You can sit, you're wasting your time. You're just wasting a year of being like, ah, because everybody else is going to just use it, not give a fuck what you think. It's kind of like when people started using samples or even Splice. There was a big matter about that, Splice. People were mad about that. And that's an analogy I think you've even brought up so many times, where it was like, oh, you're just taking stuff that was done before and reusing it. That's theft. That's this. That's that. Until it became a summer anthem. And then suddenly everybody was on board with it.
And then they got to see the true artistry in using those samples, manipulating them and producing them to make certified bangers, real slappers. I think the really interesting thing is that every generation kind of has to go through some version of this. The nineties and two thousands were the beginning stage of sampling, which especially came out of hip-hop, and obviously it's still going on. But the idea has become that it's kind of artistic now to do that, right? That you would take these samples, and nowadays, I was thinking of This Is America, the song by Childish Gambino, aka Donald Glover. That had like a bajillion samples in it if you looked at what it was, but they're all chopped up and they're all messed up and they're all kind of twisted around in different directions, because it made its own thing. The thing about the music and voice stuff that's so interesting to me is it has that underlying 'all artists dislike it in some way' thing, right? This 'no artist would do this' thing. But I think, probably to your point, that's not that far off from what all artists said about taking a sample and putting it in a piece of music beforehand, right? Maybe there was some level of that. I don't know. It's interesting to watch Diplo basically say you have to catch up. And the thing about Diplo and Soderbergh both is they're both kind of technicians, right? They come from a technician background. They're both artists, but I wouldn't call them pure artists. Both of them are very interested in the tools themselves too. So maybe this is the beginning stages of how that gets laid out.
Look, people who like to criticize the use of AI in anything, generative AI in art specifically, but even in code, they think there's a big slop button and you smack slop and out it comes. And then eventually you catch a little nibble of something delicious, and otherwise you're just slopping the trough, and that's it. Those who really use these tools know that there's an incredible amount of actual artistry and taste, this undefinable thing, that goes into making something that does stand out. And I think that applies to music as well. I can hear songs that come out of Suno and go, oh, that's a pretty good song. But I could hear a song that comes out of, you know, generative AI in the capable hands of a producer and go, oh, wow, that's light years beyond what is coming out of the machine. And I think that trend is going to continue. And I understand there are going to be people who are like, never, never, never. I want full analog, farm-to-table vinyl. We get that, and there should be a place for that. Celebrate that. But I think, similarly, this argument goes away over time. In a year and change, people will just understand there are different levels to AI creatives who are using these tools in interesting ways.

And I think you actually have an example of one that you wanted to call out this week. Well, that's right. Our good buddies at AI or Die are back with some Seedance videos. And this video shows Neil deGrasse Tyson in a very different light. Let's just play the first few seconds of this and we'll let it play out for everybody. "What if I told you the laws of physics were literally being destroyed?" "I'd say you were going crazy." "Maybe I am crazy." So you get a sense here. What this is, is an action movie starring Neil deGrasse Tyson. One of the things I do want to point out here, and this goes to what we were just talking about, is that this is obviously using famous people without their permission. It's parody in some form or another.
But one of the things I thought about when I watched this video, and I think everybody should check it out, is that there's this level of things you can do with AI, and this goes for music too, where if you use somebody else's voice or, you know, whatever persona, it does bring a slightly different weight to the thing. And these are two people... you know, there's a shot of Neil deGrasse Tyson. There's Bill Gates. There's Elon Musk. There's Sam Beckman. Special there. Yeah, Sam Beckman, I'm afraid. There's a bunch of stuff. These are people who can't act, right? I shouldn't say that for sure, but I imagine most of them aren't actors. But when you see them acting, I don't know, there's something really interesting about casting people and having the machine do the acting, but having their personal voice in front of it. Yeah. And I wonder with music if it's something kind of similar. Like, I mean, I wonder what the Neil deGrasse Tyson Diplo banger sounds like, right? Maybe not. You could chill it out here.

You and I have pitched no shortage of shows over our traditional media careers, right? And usually in the deck you have a list of faces or names that you would see. You do this even in scripted, like, oh, a Jack White-like song would play as a Neil deGrasse Tyson-ish scientist comes in the room. You would do this stuff on paper. When I see that, it just makes me realize, like, I am largely out of the television business. I have a format or two that occasionally I'll be like, oh, remember that thing? Let's go pitch that again. Now I'm looking at every bland ten-page PDF deck or whatever and going, like, we should just make it. Forget a sizzle reel, what they used to call rip-o-matics, where you'd go and take clips from existing shows or movies and splice them together to evoke the taste of the style and do some voiceover or text or whatever. Now it's like, why aren't you showing the thing? Just show it.
Just go make the thing. It's not the actual thing, but you can get very, very close to the feeling, and even do some of that stunt casting within it. This is going to be the new normal. By the way, this reminds me: I want to put out a call for anybody in our audience who could put this in the comments, or if you're one of these people, reach out to us. There should be, like, a celebrity who maybe is kind of past their curve who says, I want to be the face of this in some form. And when I say the face of this, I mean, like, go use my likeness, right? You know, Conan used to make these jokes about this guy named Abe Vigoda, who was on this show called Barney Miller forever ago. And Abe Vigoda kind of became a celebrity again because Conan would feature him on the show. Who's the next version of that? And, you know, maybe you've been seeing these TikToks where Malibu from American Gladiators is now, like, a TikTok star. Do you know this? He's like a big TikTok star. There's somebody out there, from either your childhood or mine, who's maybe not acting a lot, who we should make into the next version of this. Like, who should be the next big AI celebrity, where we get their permission and they're like, go with God, make me into the celebrity? I think we need some suggestions on who that person could be. I like that. I like that. And they'd have to cut that whole thing out. Great assignment, Gavin. I hope people do it. We'll see you all on Friday. Bye-bye, y'all. The intrusive thoughts almost came out.