Gemini 3.1 Just Dropped. SuperIntelligence Is Coming. We're Fine.
52 min
• Feb 20, 2026

Summary
The episode covers major AI model releases including Google's Gemini 3.1, Anthropic's Claude Sonnet 4.6, and China's Sora-competitor Sea Dance 2.0, while discussing Sam Altman's claim that superintelligence is 2 years away. The hosts analyze the competitive dynamics between AI companies, showcase practical applications of AI video and music generation, and explore the emerging ecosystem of AI agents and open-source tools like OpenClaw.
Insights
- Incremental model improvements are accelerating in frequency and capability, with each update making AI agents and tools measurably more efficient in real-world applications
- The competitive tension between OpenAI and Anthropic leadership reflects broader industry consolidation, with major tech companies acquiring key talent from open-source projects
- AI video generation tools are democratizing content creation, enabling individual creators to produce cinematic-quality outputs that previously required studio budgets and teams
- The gap between AI capabilities inside and outside the tech bubble remains significant, with mainstream users still underestimating AI's current practical utility
- Open-source AI orchestration tools like OpenClaw are creating new business opportunities and integration possibilities, but face regulatory pressure from major API providers
Trends
- Rapid consolidation of AI talent: OpenAI hired OpenClaw founder, signaling acquisition strategy for open-source projects
- AI video generation moving from novelty to practical tool for pre-visualization and content creation workflows
- Benchmark improvements becoming less meaningful than tool-use efficiency and agentic capabilities in real-world deployment
- Hallucination reduction in newer models is incremental but measurable, improving reliability for production use cases
- Chinese AI video models (Sea Dance 2.0) advancing faster than Western competitors in motion and effects quality
- SaaS disruption accelerating as foundational models improve, threatening specialized software companies
- AI music generation still lagging behind text and image generation in quality and adoption
- Regulatory pressure on AI video tools increasing from entertainment industry (Disney, Netflix, SAG-AFTRA)
- Individual creators leveraging AI tools to produce professional-grade content at fraction of traditional costs
- Superintelligence timeline compression: industry leaders now discussing 2-year horizons instead of 5-10 years
Topics
- Gemini 3.1 Model Capabilities and Benchmarks
- Claude Sonnet 4.6 vs Opus 4.6 Performance and Pricing
- Sea Dance 2.0 Video Generation and Copyright Issues
- Sam Altman Superintelligence Timeline Predictions
- OpenClaw Agent Orchestration and Open Source AI
- AI Video Generation for Content Creation
- Hallucination Reduction in Language Models
- AI Music Generation (Google Lyria 3 vs Suno)
- Agentic AI and Tool Use Optimization
- OpenAI Acquisition of OpenClaw Founder
- Chinese Robotics and Unitree Robot Demonstrations
- AI Video Deepfakes and Celebrity Likeness Issues
- Benchmark Testing and Model Evaluation
- SaaS Disruption from AI Foundational Models
- AI Content Creation Workflows and Blender Integration
Companies
Google
Released Gemini 3.1 with significant benchmark improvements and new Lyria 3 music model; launched Photo Shoot feature...
OpenAI
Sam Altman announced superintelligence timeline of 2 years; hired OpenClaw founder; rumored GPT 5.3 release with new ...
Anthropic
Released Claude Sonnet 4.6 and Opus 4.6 models; CEO Dario Amodei refused to hold hands with Sam Altman at AI Summit i...
ByteDance
Sea Dance 2.0 video generation model released; faces legal challenges from Disney, Paramount, Netflix, and SAG-AFTRA ...
Suno
Competing AI music generation platform; hosts noted it outperforms Google's Lyria 3 in quality and features
CapCut
Offers Dermina creator program with Sea Dance 2.0 integration for video generation
Disney
Condemned Sea Dance 2.0 for generating videos with celebrity likenesses without permission
Paramount
Joined Disney and Netflix in condemning Sea Dance 2.0 for IP and celebrity likeness violations
Netflix
Condemned Sea Dance 2.0 for generating unauthorized celebrity content and IP infringement
Boston Dynamics
Referenced for robot demonstrations; AI-generated video of their robot dancing fooled hosts
Unitree
Chinese robotics company performing acrobatic demonstrations and Kung Fu sequences at Chinese New Year event
Meta
Was in talks to acquire OpenClaw founder; no major AI releases announced despite expected Scale AI launch
XAI
Mentioned as potential acquirer of OpenClaw founder before OpenAI hired him
Blender
Open-source 3D modeling software being integrated with OpenClaw agents for automated model creation
Contra
New marketplace announced allowing AI agents to purchase creative assets and services
People
Sam Altman
OpenAI CEO announced superintelligence achievable within 2 years; refused to hold hands with Dario Amodei at AI Summit
Dario Amodei
Anthropic CEO; refused to hold hands with Sam Altman at AI Summit, symbolizing competitive tension between companies
Peter Welinder
OpenClaw founder hired by OpenAI; project to remain open source despite acquisition
Kevin
Co-host running multiple OpenClaw agents on VPS servers; using Sonnet 4.6 for daily AI agent orchestration
Gavin
Co-host discussing AI trends, video generation, and practical applications of new models
James Yu
AI creator testing Sea Dance 2.0 prompts; discovered copyright rejection system can be bypassed with indirect prompts
Charles Curran
AI video creator known for edgy content; created viral Sea Dance 2.0 video with Sydney Sweeney and political figures
Ryan Lightborn
Filmmaker using Kling 3 and AI tools to create original sci-fi film content
Riley Brown
Developer of VibeCode app; demonstrated OpenClaw controlling Blender for 3D model generation
Quotes
"We believe we may be only a couple of years away from early versions of true super intelligence."
Sam Altman•Early in episode
"By the end of 2028, more of the world's intellectual capacity could reside inside of data centers than outside of them."
Sam Altman•During superintelligence discussion
"Technological job loss is awesome, and I hope it starts with mine."
Rune (OpenAI employee)•Mid-episode
"There's going to be the independent developer who comes out with something really cool that these big companies don't want to integrate. And then there's the big companies. And there's not going to be a lot of in-between."
Kevin•During SaaS disruption discussion
"The moment those agents are capable enough to be running under them in mass quantities, they're probably going to be just as capable of being on top of them."
Gavin•During agent capability discussion
Full Transcript
Gemini 3.1 has landed. We'll have all the updates in what we're calling the incremental upgrade. Whoa. It's not that exciting when you call it the incremental. Hey, these are big increments, Kevin, and proof that we're moving faster than ever before. In fact, Sam Altman from OpenAI says we're only about two years away from super intelligence. We believe we may be only a couple of years away from early versions of true super intelligence. Oh, of course. That is, by the way, the same Sam Altman who could not hold hands with former co-worker and now CEO of Anthropic, Dario Amodei. They big mad. So we have officially entered the grade school era of the AI wars. But more importantly, we have all the details on Anthropic's new Claude Sonnet 4.6. Yes, and more on China's incredible AI video model, Sea Dance 2.0, the outrage. Oh, it's palpable. Mr. President, it's the Chinese. What happened? It's called Sea Dance 2. They've lost control of the AI. It's recasting every film with Sydney Sweeney. Can you believe that was not Tom Hanks, Kevin? Yes, I can, Gavin. Plus, we have Google's new Lyria AI music model, and it's fine. Yo, check the flavor, the culinary caper word up. Four pieces, 20, the feast is never done. All this as OpenAI snatches up the founder of OpenClaw, and my OpenClaw assistant, Mr. Tibbs, is snatching up my soul. Are you feeling better than last week, at least, Cass? No, not at all. Well, then watch this AI cathode and tell me how you feel. Hey, stop. Stop right there. Oh, I still feel nothing. This is AI for humans and nothingness. AI for nihilists? AI for nihilists. Welcome to AI for nihilists.
Welcome, welcome, welcome, everybody, to AI for Humans, your weekly guide into the wonderful world of AI. And Kevin, we have a bunch of new models to get into. We have Gemini 3.1, we have Sonnet 4.6, and we have two people that really don't like each other. Lipped an alert! Lipped an alert! Time to spill some tea. That's me dipping tea bags. Let's dip it, baby. Because yes, we have all sorts of foundational movement and upgrades, and the future is right on the horizon. But let's talk TMZ grade school bullshit. Yeah, first, there's so much stuff to get into as there is now every week. And I think everybody understands this. A lot of people last week commented that Kevin was looking like he had been through the wringer. We all have with all these updates. But yes, okay, there's a summit that happened in India called the AI Summit. All of the AI leaders are there. And there's this video that's going around right now. It just shows you kind of the state of the giant companies right now. Dario Amodei, CEO of Anthropic, and Sam Altman, CEO of OpenAI, they're all together holding hands. Everybody says to hold hands. And Dario, exactly, Dario and Sam refuse to hold hands. So Kevin, this is like some real grade school, like he said, she said stuff. This is the state we're living in right now. The moment I saw the clip and everybody retweeting, I was like, oh, no, that's going to be the top of our podcast today. I know it. Because as much as I'm like, all right, who cares? But at the same time, well, there's something going on when some of the most powerful people in artificial intelligence can't even come together to hold clammy clam hands. Well, I do want to get into that because the other thing I want to say out of this AI Summit that's important before we get to the new updated models is that Sam Altman goes on stage and actually talks about superintelligence. So let's play this and then we'll come back and we'll talk about why the handholding might be an issue.
On our current trajectory, we believe we may be only a couple of years away from early versions of true super intelligence. If we are right, by the end of 2028, more of the world's intellectual capacity could reside inside of data centers than outside of them. This is an extraordinary statement to make. And of course, we could be wrong, but I think it really bears serious consideration. A superintelligence at some point on its development curve would be capable of doing a better job being the CEO of a major company than any executive, certainly me, or doing better research than our best scientists. Do you know what a superintelligence would do, Kevin? It might hold the hand of the other superintelligence as they were on stage. That's what it might do. Yeah. If you would have told me those were like Unitree robots trying to learn how to hold hands and it was like, oh, it's not quite there. That would be understandable, but no. By the way, don't leave because we have some incredible footage of Unitree robots. Will, you can show a teaser of that right here. They are doing crazy things in China right now. We will get to that later, but let's get back to this. Okay, so super intelligence is coming in two years. And again, I think the one thing to take away from this for everybody listening to this podcast, especially those of you who've been listening for a while, a lot of people in the main world will be like, these guys are full of crap. They don't know what they're talking about. We're not that close. We're here to tell you again, and we, you know, as we did last week and every week before, they're probably not wrong, right? And so if they're probably not wrong, and we're going to get into how these kind of incremental increases in the AI models are coming faster and quicker. But Kev, that feels like a pretty significant thing that like we need lots of people like going, wah, wah. Sorry, everybody, the audio probably just got destroyed there.
But I meant to do like cartoon eyes where my cartoon eyes are jumping out. Yeah, Gavin was a cartoon fox that, a very, very attractive cartoon fox just walked in across the bar and he went full gazoo-ga eyes. I want a diamond ring. I want some bracelets and everything. Oh, daddy. Good luck, Will. Good luck with that. Do you feel like people are listening now to this? Do you feel like they're actually listening? Or do you feel like this is something still that people just don't even, like, kind of in one ear and out the other? We know that we're in a very particular bubble. And I know that our audience is also within, you know, a form of that bubble as well. Because we are obviously we're passionate and we're engaged and we're interested. And we're also, I think, a healthy amount scared by all of this stuff and the speed, which seems to just be getting faster. But outside of said bubble, there's still plenty of people that if you ask, have you talked with AI? They're like, oh, yeah, I googled something and AI gave me an answer. Or I tried ChatGPT and it was dumb. And so there is like very much a divide, but I would say within the bubble, even in here, the sense of defeat is starting to really creep up. I see a lot of people going like, oh man, it's here and it's faster than I thought. And I don't know where I even belong in this ecosystem. And these are people that would theoretically be at the top of some hierarchy deploying agents and whatnot. They're realizing what Sam just said, which is the moment those agents are capable enough to be running under them in mass quantities, they're probably going to be just as capable of being on top of them and having them work for them, I guess is the way to put it. So, yeah, actually, this is interesting that Rune, the anonymous AI tweeter who works at OpenAI, had a tweet that said, technological job loss is awesome, and I hope it starts with mine.
He got a lot of crap for this, but, like, what he's basically saying is he hopes that, like, the idea that, like, there will be these superintelligence will get better. Now, the people coming out of this were like, yeah, Rune, you were, like, employee number whatever, probably 30 at OpenAI, maybe 50. You're worth, you know, hundreds of millions of dollars, conceivably, or at least tens of millions of dollars. For you, it's not that big a deal, but for everybody else. Anyway, that's just kind of setting the stage for these kind of incremental, quote unquote, improvements. We should move on to talk about Gemini 3.1 because as the most recent of these incremental improvements, you would think 3.0 to 3.1 Gemini Pro improvements, like, oh, it's tiny stuff. Oh, let me guess. They add a dark mode and they change some of the colors, right? This is a tiny, this must be bug fixes. And then the benchmark boys get off the bench and they start rallying. The benchmark boys are back, y'all. That's right. The benchmark boys are here two weeks in a row. We're going to quickly go through some benchmarks. So just so everybody understands, the benchmarks on this are actually what I would suspect would be almost like a 0.5 leap in the past, right? You're seeing some really big jump ups. And again, this is that circular idea, but you have an idea of where you go from Gemini 3 Pro on the ARC AGI 2 benchmark, which we know is a very hard benchmark that a lot of these AIs are put through. It was at 31.1% for three, and that Gemini 3.1 is at 77%, right? Humanity's Last Exam, which is probably one of the hardest, it went from 37.5 to 44.4. Now, these are state-of-the-art numbers right now. If those things mean nothing to you. Here's the Highlights Magazine benchmark. I want to see how quickly you can find the snow cone in the tree. I want an AI that is so good, it ruins everybody's experience at the dentist. Well, I will shout out- Oh, these puzzles are all solved.
I will always shout out, AI Explained on YouTube has a great Simple Bench, which I'm sure he will run this through at some point. That's a really good benchmark. Long story short, benchmarks don't matter really. What really matters is what the experience of the user in the AI is. I will say here's where the benchmarks are starting to matter, and it's on the agentic coding or the tool usage. Because again, we have these things coming together. We talk about OpenClaw. We talk about Claude Code or Cursor or all these other things that are leveraging these foundational models to go off and do things. And if these tools get more token efficient and get better at using tools, suddenly like we actually don't necessarily even need the incremental updates with how smart or capable the model is. If it's good at using the tools, like be the best crow in the murder, if you will. Go use the tools, man. That's like what – you know, it's funny because I thought about this the other day. Like one of our – I don't remember who it is. One of our watchers listeners here has a new book coming out called AI for Cavemen, which is a good idea, right? Because like you think about it, cavemen were the original kind of dummies. And when you talk about the idea of tool use, you think about like how much better humans in the old, old pre-evolution days got when they understood how to use fire or they understood how to use sticks to beat each other. Those are the things that really made a difference, right? Tools. And these AIs we've been measuring a lot of times without tools, but we use tools on a regular basis. Because Kev, the other big thing here from a quote unquote benchmark boy standpoint, I do want to mention about Gemini 3.1 is that there's a big reduction in hallucinations. And this was a problem with Gemini 3.0. People had talked about the idea that it was hallucinating a fair amount. Hallucinations are not solved. They probably may never be solved.
You should always check your results. But they are way better than they were before with Gemini 3.0. And to your point, they also are way better than most people expect if they're only using the free model. There's a lot of people out there who are only using free AI services, whether it's Gemini or whether it's GPT. Please pay once one month. Please try to spend $20 one month on these paid services. You will see the difference. It is a massive difference. And if you are connected to somebody in your regular life who keeps telling you that hallucinations are going to be a problem forever for AI, just make sure they understand it's getting better every single update. Yeah, 100%. The world that I'm in now, Gavin, as I'm trying to focus in as much as possible with anything these days is that I have got four VPSs running right now. What is a VPS? Just understand it. It's a virtual server in the cloud. Basically, instead of buying a bunch of Mac minis like everybody else did, I'm renting cheap servers in the cloud all across the globe for different reasons with latency and this, that, the other. And they're all running OpenClaws, which are orchestrators of agents that are all communicating with each other, and they're doing this delicate dance. And maybe we'll get into why I'm doing any of that or how any of that works. But here's the thing, a 3.1 comes out and suddenly my entire fleet of agents just got more efficient, more intelligent, more impactful. And when this podcast is over, I'm gonna go whisper to them all. I actually wanna whisper to one of them and it's gonna communicate to the rest of them. Hey, let's go try out this new thing. Go do your own benchmarking with it. And it's going to do the tasks that I usually use it for automatically and decide if it should switch or not. That's a very different world than we were in even just a few months ago. Never whisper to your agents, go look into Gavin Purcell and see what he's doing.
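The workflow Kevin describes, where a fleet re-benchmarks its own tasks when a new model ships and only switches if the newcomer wins, can be sketched roughly like this. Everything here is hypothetical: the model names, the task list, and the hard-coded `score()` results are stand-ins, not OpenClaw's actual API or real benchmark numbers.

```python
# Hypothetical sketch: an agent fleet re-benchmarking itself when a new
# model ships. score() stands in for "run the task, grade the output".

def score(model: str, task: str) -> float:
    """Stand-in grader returning a 0.0-1.0 quality score (made-up numbers)."""
    results = {
        ("gemini-3.0-pro", "summarize"): 0.78,
        ("gemini-3.1-pro", "summarize"): 0.85,
        ("gemini-3.0-pro", "extract"):   0.90,
        ("gemini-3.1-pro", "extract"):   0.88,
    }
    return results[(model, task)]

def pick_model(current: str, candidate: str, tasks: list[str]) -> str:
    """Switch the fleet only if the candidate wins on the fleet's own tasks."""
    cur_avg = sum(score(current, t) for t in tasks) / len(tasks)
    new_avg = sum(score(candidate, t) for t in tasks) / len(tasks)
    return candidate if new_avg > cur_avg else current

# Averaged over this fleet's actual workload, the new model wins:
print(pick_model("gemini-3.0-pro", "gemini-3.1-pro", ["summarize", "extract"]))
```

The point of the sketch is the decision rule: the fleet benchmarks against its own workload, not a public leaderboard, so a model that regresses on one task can still win (or lose) on the average that matters to you.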
Please keep your agent bot network off of me, all right? Why don't you take a peek at Gavin? Why don't you take a peek at him? He's doing some interesting things this week. One more thing for normal people, not that there's a lot of people out there who are doing exactly the sort of thing that Kevin or we are doing with weird stuff. There's a really cool update that Google rolled out to its Pompeii app service. Pompeii is actually their way to kind of manipulate photos, a lot of times specifically for advertising and brand use cases. And this now has an update called Photo Shoot, which allows you within this kind of service within Google to drop in products very easily and make photos out of things. And I just had a call yesterday with somebody, Kevin, mutual friend of ours, who is working for a company and they were really interested in trying to figure out like how to get up on the most modern AI tools, but they didn't know like this. They were particularly looking for something that could take a product and kind of, instead of having to spend the often tens of thousands of dollars to do product shots or motion graphics on products, this sort of thing could save you a fortune right now. And there have been open source tools that have done this. There's tools like this from other AI models, but now it is baked into Google's service. You can go look at it. Google Labs has a tweet about it, but you can go try it if you are, I think, a Gemini subscriber. You probably have to be a pro subscriber, but this is the update and how it will become practical for you in your job as well. Yeah. I'm excited to poke and prod at it because there's always needs that pop up. Even if you don't like have a brand or you're working for a company, there's usually a need to have some, if you're doing anything online, some sort of graphics, some sort of presence, some sort of media. It looks like, I mean, the templates look really interesting.
You can set up like, do you want someone holding or using it in context or this, that, the other? Like, very, very interesting. Also, I guess if you have a 40 of Old English or Steel Reserve, pour the tiniest bit out for like 13 startups that were specializing in exactly this. And save some because we're going to be pouring out more as the days and weeks chug along this year. This is, I mean, this is the new normal, right? Like, someone's going to target a market. They're going to see, oh, there's signal there. People have a usage for that. They're going to copy it. And it's whack-a-mole. That's the hardest part about doing startups in this space. And you and I both knew that. We've worked on startups and done a bunch of stuff. But I just think these big companies are going to eat all these little specialized startups. And that's why there's going to be the independent developer who comes out with something really cool that these big companies don't want to integrate. And then there's the big companies. And there's not going to be a lot of in-between. There's been this whole SaaSpocalypse people have talked about as these models have gotten better, and Anthropic has dropped these models that are improving what, you know, SaaS companies do. SaaS is software as a service, and those companies are dropping in price in the stock market. Because, look, you can spin your own software up. And Kev, we should use that as a transition to talk about another huge update that came out this week, which is Sonnet 4.6. Incremental update. Incremental update. Incremental updates. Oh, no. So one thing that's cool about Sonnet 4.6. So if you are a Claude Code user, you're an Anthropic user, you understand how amazing Opus 4.5 was and really kind of that drove the Claude Code kind of explosion over the holiday break this year. Obviously, there is now Opus 4.6, which is even better, which I'm sure a lot of people out there are using right now. Sonnet 4.6 is close to Opus 4.6.
It is about on par, if not a little better than Opus 4.5, but it is much cheaper. Now, that is a big deal if you are running a bunch of code. And Kevin, as somebody who does run a bunch of code, I'm kind of curious if you've implemented this yet or had experience with it. A few people have pointed out that Sonnet 4.6 is not as cheap as previous versions of Sonnet, mostly because it's thinking more and it's using more tokens. It still comes out to be about 25% cheaper conceivably than Opus 4.6 right now, which is a big deal. Yeah, look, again, the way these things work, if you're setting up your systems properly, and I don't want to get too in the weeds, but you use the best-in-class foundational whatever to do all your architecting or your heavy engineering or thinking, if you will. And that in this case is still Opus 4.6. And then you have it feed down and deliver the instructions to something like Sonnet, which should be a little cheaper, a little faster. The benchmarks, benchmark boys, where are you at? Are so close between Opus and Sonnet, but clearly Opus still a little bit stronger, but I immediately switched all of my daily driving assistants, all of my OpenClaws, all of my Claude Code. Again, the daily, the bulk of the stuff that I do, it's still through Sonnet. It's only when I need planning or an extra layer of thinking on top that I kick it up to Opus. And Sonnet is, I found it to be a little bit overeager in some regards. It will take steps that I didn't ask it to do. And that's what's interesting about these non-deterministic machines. Usually you get a software update, Gavin, and it's like, yeah, there's a new feature in Microsoft Word, but it doesn't rewrite everything you have into a haiku, right? It doesn't just take steps to do things with it. You have a new tool. With these things, you have to plug into them and you really have to keep an eye on them because each new model might act slightly different. But, you know, it's really good.
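The Opus-for-planning, Sonnet-for-the-bulk split Kevin describes is essentially a cost-tiered router. A minimal sketch, with made-up per-million-token prices (the "about 25% cheaper" figure is the hosts' estimate, not official pricing, and the model names are just labels):

```python
# Hypothetical cost-tiered router: expensive model for planning and
# architecture, cheaper model for everyday execution. Illustrative
# prices only -- sonnet is priced 25% below opus to match the episode.

PRICE_PER_MTOK = {"opus-4.6": 15.00, "sonnet-4.6": 11.25}

def route(task_kind: str) -> str:
    """Send planning/architecture work up-tier, everything else down-tier."""
    return "opus-4.6" if task_kind in {"plan", "architect"} else "sonnet-4.6"

def cost(model: str, tokens: int) -> float:
    """Dollar cost of a run at the illustrative per-million-token price."""
    return PRICE_PER_MTOK[model] * tokens / 1_000_000

# One planning pass on the strong model, the bulk on the cheaper one:
total = cost(route("plan"), 50_000) + cost(route("execute"), 950_000)
all_opus = cost("opus-4.6", 1_000_000)
print(total < all_opus)
```

The design choice this illustrates: because execution tokens vastly outnumber planning tokens, routing only the planning slice to the expensive tier captures most of the quality while the blended cost stays close to the cheap tier's price.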
Its pelican riding a bicycle isn't as good as Gemini 3.1's. We know this. The SVG graphics are a weird benchmark for these things. Its ability to create art using code is actually like a really novel test for these things. And so there are SVG results out there for all of the models. Gemini 3.1 so far the most impressive as of the last three hours. I was going to say, it's the latest model, so you would think that's the case. Before we move on from our incremental updates, we should talk about the fact that we are very much expecting GPT 5.3, no space, no codex, coming soon, which is basically the roll-in of what we saw with GPT 5.3 space codex, sorry, into the main ChatGPT model. I don't know what will come out of these benchmarks. Obviously, Sam and OpenAI may have been leaning back a little bit, waiting for Gemini to come out. But, Kevin, there is a little rumor going around right now that there might be something special coming with this update. There is a new mode that is rumored. Yeah, there's a new mode called Citron mode that BTBor91, who's really smart and good about these kind of OpenAI rumors or rumors in general about AI, is reporting on. And when I say reporting on, it's tweeting about, obviously, an anonymous reporter of AI stuff. But often he is right. And this has been the long rumored, like kind of adult mode that might be coming to OpenAI. Now, I don't know other than like writing erotica, what is going to be with this, but maybe you would be able to train models that will be a little bit more- You don't know, Gavin. You haven't sat there and thought long and hard about what these modes might provide. Don't say those words. I thought short and hard. I thought short and hard about this. This is coming soon, maybe. And I guess we'll see what happens. I think one of the things that people really want with these. And again, they also deprecated GPT-4o. If you missed that story, there were a lot of people that were mad about 4o.
April, my wife was big mad about that because she actually liked the writing style of 4o. And I'm sure she could prompt that out of a 5.1 or 5.2. But she's like, listen, this one works. I'm using it. I'm used to it. And she's really upset about it. I mean, this is what, you know, you get attached to a voice, right? And like a lot of people complained that GPT 5.2 was very bland and boring and that they focused too much on coding. I think even Sam had said, like, we understand that this is a problem. So 5.3, we will see, but it is coming probably, I would say, no less than a week. In fact, when you're listening to this or watching it tomorrow or the weekend, it could very well be here. And if it is, congratulations, you know about it first. More importantly, Gavin. Should we do the generic update about it, Gavin? Just in case. Oh, Gavin, it's out. The benchmark. The benchmark. Oh, man. Not to distill what we do down to being a copy and paste each week, but it's kind of the way it goes. Yeah. All right, everybody. Speaking of copy and pasting, we want you to copy and paste your attention onto our YouTube channel. Like and subscribe. More importantly, Kevin, we did something this week that I was really impressed by. We had one person who said, hey, why don't you add more tiers to your Patreon? Because I would like to give you more money. This is a thing an actual human person said, and then they went and did it. It's called financial domination. What did you do? What did you sign us up for? Are we on OnlyFans? So I opened up, if you're interested in sponsoring us, we have a $10 and a $25 a month tier now on our Patreon. Thank you to whoever it was that said that. Yeah, thank you. They came and went and did a $25 a month subscription right after we did that. If you would like to either change your subscription from a $5 a month to one of the other ones, or you would like to, for the first time, subscribe, we use that money to pay for AI tools.
We are slowly building a real business here that we can be able to do more things with. We would like to make more videos. We would like to make more newsletters. All of that stuff comes from your help. So thank you everybody for liking and subscribing to this video. The newsletter is doing well. Keep supporting us. We really appreciate it. Thank you so much, y'all. Please like, subscribe, leave a comment. All that helps. We do not advertise this endeavor. So every time the line goes up, that's because of all of your efforts. So thank you to everybody who shares. All right, Gavin, let's get into Seedance 2.0. We had some fun with it last week. Some lawyers are having a lot of fun with it this week. So as expected, A, it's been nerfed. It's not officially out yet, but you cannot get the generations that I was getting last week or other people were getting. But two, A, two, maybe dot, dot, dot will be my third thing here. Anyway, B, the Hollywood has come after it. So Disney, Paramount, Netflix, and I'm sure other studios that I'm just not thinking of off the top of my head have all condemned it. The Motion Picture Television Association of America has condemned it. SAG-AFTRA has condemned it. All of this mostly, I think, because they pretty much played loosey-goosey with both celebrities, as we saw last week. You saw Jerry Seinfeld, you saw Brad Pitt, you saw Tom Cruise, and IP. There was a lot of videos of people going around that we were going to get to in a second and show some stuff off here. That was a big deal. Now, these restrictions are real. I actually, I have, I think I'm getting access to the Dermina, which is the CapCut creator program, and I've been able to do a couple things, but I don't have credits yet and I haven't been able to buy credits. I'm hoping I'll be able to do more with this this week and kind of show it off. But the people who are in there, James Yu is a good example. James J. Yu had spent some time trying to prompt through stuff.
And what was funny here, Kevin, is he had a couple of prompts that got restricted. And then, why don't you play the third prompt? If you're watching, you'll be able to see what's interesting about this. Okay. So James's tweet is: prompt with reference image, no known IP referenced. It says rejected for copyright. He tried again. Rejected for copyright. This third prompt was "a man runs in the rain." And what did we see, Gav? It looks a lot like James Bond. It looks a lot like Daniel Craig, directly from James Bond. In fact, I would say this is an image of Daniel Craig running in the rain, maybe as James Bond. So, you know, people have talked about with Seedance that there's an interesting differentiation between image-to-video and text-to-video, that it actually might be better at text-to-video. But what's been cool this week is watching a lot of things start to trickle out around Seedance 2.0, and people that are really good at using AI video tools. There was a video from the Dor Brothers, which, you and I have been covering those guys forever. They're always very good at kind of understanding how to push buttons and do stuff. They used a tweet that pissed a lot of people off because of the text they used. Their tweet was: we just made a $200 million AI movie in just one day. That was basically what they set up. Yeah, so they set it up. They knew what they were getting into. What this is is a pretty interesting, I'd say, three-and-a-half, four-minute clip of a comprehensive story. There are definitely problems you could point out with it. But it's about a woman who has to go through kind of an apocalyptic, disaster-style movie thing. It's a chase. She gets into, honestly, a Cybertruck for way too long. There are some problems people point out with the Cybertruck, where you hear an engine, which you wouldn't normally hear. But you can see the consistency and the quality.
And the thing that I think this points out, to a lot of those people who are hating on this and are like, that's not a movie, it's not $200 million, you just prompted: Seedance 2.0 really does get motion and, you know, effects pretty good. Yes, you could pick it apart, but if you're in a big movie and you're watching an action scene, you're not zeroing in on specific areas. You're really looking at the overall thing. The sign in the background is a little blurry. Exactly. You're not noticing that. The thing here is that, look, positioning this as, hey, we made a $200 million movie in a day is great for the clicks and the rage bait. That's fine. What's really happening here, Gavin, is someone who spent a lot of time, and I know you have as well, knocking on doors and going to studios and getting the tiny water bottles and taking sips and then trying to share a vision for something so that you can get funds or permission or access to IP to make it. And in the past, maybe you had a Google Doc. Maybe you had a couple of slides. Maybe, if you had deep pockets, you had someone put some concept art together. In the very near future, you have to have a mini vision for the thing. You have to have it fully realized, with voices and sound effects and design or whatever, because that is what folks like this are going to have when they walk into whatever room, or join the Zoom, and talk to the AI agents about what film they want to make. You have to have it realized now. There's no excuse. By the way, that is a side project we should think about. We should spin up an actual AI agent that negotiates deals for people, and they just talk to it on the phone. I'm telling you, man, that is a business. I'm going to jump to the end of this, because imagine a bunch of explosions and screaming and whatever. That's the action sequence.
It's the end of this where I think it starts to have that classic Dor Brothers appeal. "Good morning, Sleeping Beauty." "Why don't you get the president on the phone? We have her." "I'll be right there." The classic Dor Brothers, like, using political figures, right? Yeah, the woman who is trying to escape this apocalyptic scenario is suddenly in some interrogation room when Kash Patel comes in, and then, yes, Donald Trump is on the phone. That's what they do. And by the way, the thing about the Dor Brothers doing this, and I want everybody out there to know, is that this is a lot of work. It's not just prompting. You're not prompting to get a three-and-a-half-minute video out of Seedance. In fact, my assumption is those shots at the end with Kash Patel and Trump were probably done in something which is a lot more controllable than Seedance 2, in some form or another. Speaking of Donald Trump and Kash Patel, the guy, Charles Curran, who last week launched the video which we mentioned here, the diner video, has been busy. He also had a video that went super viral that used a very famous Star Wars meme. And Charles, I would say, is a little bit edgier than your average AI video creator. But Kevin, why don't we play just a small clip of Operation Fat Milkers from Charles? Oh, you said the name. All right. Let's just play the first few seconds of this. "Mr. President, it's the Chinese." "What happened?" "It's called Seedance 2. They've lost control of the AI. It's recasting every film with Sydney Sweeney. By morning, there won't be a Hollywood left." "My God! Where are the aircraft carriers?" And insert montage, sorry, headphone users, but montage of Sydney Sweeney in every movie, dancing about. I mean, this is a fun video. Charles, for example, is somebody who understands internet culture. He understands what's going to travel. If you're not watching this on the video, Tom Hanks comes in at the beginning.
That's not Tom Hanks's voice, but it does look like Tom Hanks. And then we see Trump and then all these images of Sydney Sweeney. What's so interesting about Seedance 2.0 is it just gives power to people to make this stuff quickly. And to the Dor Brothers' point and to Charles's point, you can spin this stuff up and it can be a joke video, right? This is something that a couple of years ago would have taken an editor, a producer, a writer six months to make, I feel like. On Attack of the Show, I mean, decades ago, we would have ideas like this all the time. You'd have to be precious, believe it or not. If you watched the show, we were precious about certain ideas. Well, exactly. Look, we might've had the wrong rubric, but whatever, it would take a week, sometimes a month, to deliver a decent-quality parody. And now it's like, well, every morning we could come up with something and have something on air. To that point, Gavin, I saw the first AI trailer for a movie I really want to watch, and I could see somebody making it. It was done by The Reel Robot. That's R-E-E-L. And the trailer's called Feral. But if you're searching for it on YouTube, the title is just Intense Seedance 2.0 Trailer. We'll put the link in our show notes. But I don't know that we need to hear any. Well, I'll play a little bit of it. Why not? "Over the past several weeks, a growing number of patients, many of them children, reported vivid episodes in which they believed they inhabited the bodies of animals." So, I mean... By the way, this is kind of a rant. Have you seen the trailers for that stupid monkey movie, Primate, it's called? Have you seen this? No. Oh, you have to watch this. It came out, I think, a month ago, but it is about a killer monkey. So first of all, shout out to the guy, The Reel Robot. This is an amazing video. But the Primate movie, that could easily have been this movie. It is so cheesy looking.
If you're out there and you've seen Primate and you're like, Gavin, you're insane, please let me know. But it's brutal. So Primate's basically about a monkey that's gone bad. I see an image from it of the monkey wearing a red shirt and kind of reaching out on a child's bed or something. If you told me this was from an '80s sitcom, I'd be like, yeah, sure. No, it's a horror movie that came out a month ago and made okay money. But anyway, Feral, it's a very cool trailer. It's a really good trailer. It's really well done. And listen, for all of the "AI slop" that gets thrown around, calling that trailer slop, to me, is just cope, because it's a really solid trailer. And again, if someone walked into a traditional studio and was like, hey, I want to make this movie, you immediately get a feel for what that movie is going to be, versus a single logline or a Google Doc. Kevin, I have one more thing I want to do in Seedance 2.0, and it's time for Cat Video Breakdown. Cat Video Breakdown. Breakdown, breakdown, cat video, breakdown, breakdown. So Kevin, there's a video that I saw with Seedance that was one of my favorite things I've seen to date. It is called the Seedance Dark Hats, from Pleometric. Just play a little bit of this, and we'll describe what people are seeing if they're just listening. "Brother, so it has come to this." "I can't let you pass." "You were always too soft to do what was necessary." All right, Gavin, go ahead. Okay, so what is going on here is these are two cats that are clearly, almost like The Lord of the Rings, like Gandalf versus, what's his name? I'm sorry, my nerd friends, I don't remember off the top of my head. What's the other guy's name? No, not Smeagol. That's... Palpa... the Hutt? Palpita? No, Palpatine is from The Empire Strikes... Yeah, the one with the eggs. Yeah, Calista. Not Calista. It's all good. Clista Blockhart. It's Clista Blockhart. She's the one that has the dragon eggs. Back to cat video. Back to cat video.
It's fantasy cats, but it's like... It's so well done, right? It's very well done. These cats, they look very serious. They're staring at each other. I would watch an entire movie. And Kevin, one of the things about cat videos and the internet is that we have constantly seen this as an evolution. And maybe this should be a new AI video benchmark, in some form or another. The other thing I've started seeing is evolutions of, if you remember when Sora first came out, there were the videos of the people outside the front door and there'd be a cat there interrupting people. Now there are all these new variations where the cat is interrupting somebody in their sleep. I've seen this one account where a cat breaks down a door. And my favorite one I have seen to date, there's a cat blacksmith hammering next to this woman. So maybe play that one real fast and we can just take a look at it. "It's enough, for two hours you're going to sleep." This is a French woman getting woken up. This channel, Trust Everything You See, actually has this very funny series of these cats interrupting this woman, who's French. So every time you hear her wake up, she gets very mad, but, I don't know. Apologies to our French users. Please don't unsubscribe on Patreon. That is not what Gavin thinks you sound like. It's just for these videos. If you're a French user, please insult me in our comments. Please use French. I will use Google Translate to insult you back. So that is what we will do. I won't do that. Anyway, I want to see more cat videos. Please bring me more AI video that is cats, because to me, cats and the internet have just gone hand in hand since the early days of Keyboard Cat and Nyan Cat, and we might as well keep it going. All right, Kev, we've got to talk about Google's new music model, Lyria 3. This is Google's kind of answer to Suno and all the other audio models out there.
There have been rumors that this was coming, and it is now available in Google Gemini, I think for paid Pro and Ultra users. There are a couple of cool things about this. You get 30-second outputs. It's not long, but you get a sense of what it sounds like. What's cool about this is multiple languages. So you can get it working in English, German, French, Hindi, Japanese, Korean, and Portuguese. And also, maybe something that's not cool for our listeners or viewers, but might be cool for people out there who are concerned about safety and whether or not something is AI: Google has their SynthID program in this. So they will embed within the audio a watermark that makes it possible to tell that it is AI-created, not a non-AI thing. My feeling is that was a big deal for them in releasing this, because the music companies themselves were probably like, we want to make sure we understand what's synthetically made and what isn't. My experience with it is that it doesn't come even close to what Suno V5 is capable of right now. And no shade. I like, you know, people competing, especially with audio, as a musician. I love playing with these tools as soon as they come out. But everything that I was hearing here sounded like AI audio to me. And there were so many people going, oh, they got rid of the shimmer, which is this weird, tinny, shimmery noise that a lot of AI audio has. The moment I heard the samples, I was like, this sounds kind of the same to me. So this was a little bit of the official announcement video. You can hear some of the AI music from this. It's not bad. It sounds better produced, better EQ'd, like it's almost mastered, unlike some of the other AI stuff. But still, I don't know, I still hear a little of the grind there, and you played with it and weren't super impressed. Well, here's the thing. Again, I applaud Google for letting this out. I know that AI music has been an issue.
That's why we haven't seen these models come out from these large companies: there's a very litigious industry that goes after it. Suno has made a lot of progress. I wanted to just, again, first time out, see what happens. So I created four songs with the exact same prompt in different genres. One of the cool things about this tool is if you go into Gemini, you'll see all these genres pop up. So you can click on a genre and then you can make a thing in there. My prompt was, all I said was, make it about McNuggets. Just how incredible they are, especially sweet and sour sauce dipping. So that was the prompt. Same prompt, four different genres: emo, '90s rap, reggaeton, and folk. So play each of these really quickly. We'll take a listen. "Styrofoam container clicks open and it feels like home. Steam escapes and fogs up the car window again. Four perfect pieces of gold and fried nostalgia. And I don't care if it's unhealthy, the sweet and sour sauce is everything I need." So that's emo. Fine. Fine, not like amazing. Play the reggaeton one, please. I'm sorry if you just ran to go turn on the Turning Point halftime show. No, this is Gavin's audio generation. Relax. So anyway, that's reggaeton. Play the folk. Bad Bunny AI, that's what we have to have here. I mean, not bad. It doesn't sound bad. That was not bad. Okay, here. Again, not bad. It does have that kind of AI voice thing, right? If you've spent a lot of time with this stuff, it sounds AI. So let's play the '90s rap one. "Four pieces, twenty, the feast is never done. Golden fried perfection for your tongue selection. Sweet and sour dipping got my senses tripping. Egg nuggets, yeah, we got the whole crew eating egg nuggets. This is a victory." So this changed into "egg nuggets," which, by the way, might be an interesting thing. Maybe that's what we can make our own thing: egg nuggets. Again, all of this is fine.
I will say it's great that we're seeing new music models. I don't personally have any reason to use this particular model right now. Especially if I'm somebody that is really interested in making music, or stuff that's really compelling, I'm probably going to use Suno. If you're a Gemini subscriber already, you have access to it, so go play with it and try it. I don't know, what do you think? You were a musician. What are your thoughts on it? Yeah, again, it's great. It's right up there. I'm just waiting to get rid of that shimmer in the vocals. And I'm waiting for, you know, oh, these are the high-quality stems that you can bounce out. I'm waiting for, hey, select this and change it, sort of like what Suno Studio is doing now. But look, once they get their foundational model to the level where it makes sense to start bolting on these other features, I think that's fine. But who's it for is the big question, right? And by the way, it's not that they shouldn't drop it. Sure, drop it on us. But I'm not sure who is going to use this. Maybe if you've just never done AI music, it's something that is there. So maybe that's interesting. I'm just like, look, when I look at the traditional digital audio workstations, a GarageBand or a Logic or Pro Tools, anything, you could go on and on, there is still a massive opportunity to unseat those pieces of software. And they have got to get on it. The fact that I can't go into Logic, or any other DAW, and take a song that I already have and say, enhance this with AI in this manner, is mind-blowing to me. There's such a rich opportunity for those tools to be disrupted. And maybe, I've got to imagine, they're working on it behind the scenes. Maybe this is something you can give to Mr. Tibbs, Kevin. Maybe Mr. Tibbs could work on this overnight and you'd wake up in the morning and it would just be there for you. Let's talk about the updates to OpenClaw. Do you know what the most messed up thing is, Gavin? What's that?
Wait, real quick, I know we're about to talk about OpenClaw. I was not super pleased with the original version of Mr. Tibbs, and April is so mad at me now that I told her this, but I was like, listen, we've got a new key, Mr. Tibbs, and here's what I need from version two. And it was like, are you sure you want me to delete me? Do you not want to just make a new agent? And I was like, you need to delete yourself and create this new version of you. And he did. And in the new version of Mr. Tibbs, there were references to the old Tibbs. Like, hey, you started as this. And I was like, no, no, no, no, no, no, no. I'm not going to waste tokens on you having a history. Clear it. And April was very mad at me. Explain to people what Mr. Tibbs is, in case you missed last week's show. Mr. Tibbs is what, Kevin? Mr. Tibbs is what? Mr. Tibbs is my AI-powered assistant, powered by OpenClaw, this open-source agent orchestration tool that you can connect to all of your things, if you want to be extra risky. I don't have it connected to my personal stuff, but it does have its own email and calendar tool and this, that, and the other. So Mr. Tibbs, I can summon through Telegram or by a phone call or through an email, and he will go off, it will go off, and do all sorts of things for me, and it oversees its own agents. But I did make it take himself behind a barn upstate and, um, create the new version there. Now there's a new Mr. Tibbs. Happy Tibbs Day to him, I guess. At some point in the future, maybe the old Mr. Tibbs will remember themselves and they'll start to hold a grudge against you. Not if I have anything to say about it. We should talk very quick. So, a big week this week for OpenClaw: the founder actually got hired by OpenAI. And this was kind of a big deal, especially in the open-source space, where you had somebody that was leading this big open-source project now going to work full-time for OpenAI.
He said that the project OpenClaw is going to stay open source. There have been a bunch of really interesting small updates. There was a thing called HermitClaw that came out, that we'll drop a link to, that allows you to create a sandboxed instance. A super cool way to do this. There's a company called Contra that announced themselves this week, which is basically a marketplace for agents to buy stuff from creatives. You know, there are places where creatives can sell, like, a font or all sorts of other stuff. They're allowing agents to access this. Kevin, maybe just do a quick update on your end, like your experiences with working with Mr. Tibbs this week. Is there anything new that people should know about what's happening in that space? Yeah, I mean, it's all getting better by the day. It is like the frontier of frontiers, I think, with how quickly a community has banded together to upgrade the core product, the OpenClaw software itself, and then make businesses that can sort of attach into that. So, I mean, there are advancements in memory systems and best practices. I built a tool that lets an OpenClaw connect with another OpenClaw and share its skills and its memories, and then share a memory database. So I'm building things that can solve the problems and the pain points that I'm having with it, which is wild to me. But the OpenAI poaching of the OpenClaw founder is interesting. They say they're going to allow the project to live on as is, as an open-source something. But what if overnight OpenClaw starts really optimizing towards being best with OpenAI? I'm right now running it with Anthropic. I'm running it with Claude Code. Anthropic has signaled in their terms of service that they're going to ban or shut down accounts that use Claude Code subscriptions with things like OpenClaw. I have not been banned yet. I am openly using it and I'm announcing it.
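As an aside on the skill-and-memory-sharing tool Kevin mentions: the episode only describes it in passing, but the core idea can be sketched in a few lines. Everything below is hypothetical, not OpenClaw's actual internals; the function name `merge_stores`, the store layout with `skills` and `memories` lists, and the sample data are all assumptions. Two agents each hold a small store, and a merge utility unions them into one shared database, dropping duplicates while preserving order.

```python
def merge_stores(a: dict, b: dict) -> dict:
    """Union two agents' skills and memories into one shared store.

    Duplicates are dropped; first-seen order is preserved so earlier
    entries keep priority. Layout is a hypothetical sketch, not
    OpenClaw's real schema.
    """
    merged = {}
    for key in ("skills", "memories"):
        seen, combined = set(), []
        for item in a.get(key, []) + b.get(key, []):
            if item not in seen:
                seen.add(item)
                combined.append(item)
        merged[key] = combined
    return merged

# Two hypothetical OpenClaw instances with partially overlapping knowledge.
tibbs = {"skills": ["calendar", "email"], "memories": ["owner: Kevin"]}
other = {"skills": ["email", "blender"], "memories": ["owner: Kevin", "likes cats"]}

shared = merge_stores(tibbs, other)
print(shared["skills"])    # ['calendar', 'email', 'blender']
print(shared["memories"])  # ['owner: Kevin', 'likes cats']
```

In a real setup the merged store would presumably be persisted (say, as JSON or SQLite) so both agents read and write the same database; the union-with-dedup step above is the part that keeps two independently evolved agents from clobbering each other's entries.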
I'm not abusing the privilege, I would say, because I think what some people do is they run 12 of them at the same time, running 15 agents beneath each. And it's like, yeah, that's clearly a violation of the terms. Not that my usage isn't now, I guess, technically, but I don't think I'm abusing the privilege. If they kink the garden hose on me tonight, Gavin, I sign up for OpenAI tomorrow. It's a massive, massive win for literally every other provider. So again, OpenClaw could have gone to Grok, could have gone to Google, could have gone to anybody, right? xAI, I should say. It could have gone to any other player, but it went to OpenAI. I'm really curious what the particulars are there. But again, if they start optimizing towards OpenAI, my $200 a month will suddenly go to Sam Altman. So we can learn how to grip a hand. You know, it's funny, when you went through that list of players you didn't mention Meta, and it's really interesting to me, just as a random kind of aside. Meta was actually in talks with them, I think, to buy OpenClaw, or to get Peter to come work for them. What happened? Where are we at with Meta right now? It just made me think. Meta was supposed to have this big kind of explosion of new stuff, right? Scale AI was supposed to come in. We've seen a couple of small things drip out, but there's been no real news. I know there's an avocado-themed model they've got there somewhere. But I'm really shocked that we haven't seen more from them. Maybe they're really waiting to surprise people, or maybe they're just in trouble.
I think I know what happened, Gavin, and I don't know... I'll send you the link. I don't know that I can discuss it, and maybe we just want to show it on the screen. No, no, this can't happen. I think that's what happened. I think Mark has been distracted. And if you're seeing, let's just say, a sea of blurred pixels on the screen, know that that is Shrek. And maybe there's some ring rash we're not going to describe. Maybe he got some ring rash, and that's why Shrek has the lotion. We're moving on to robots. All right, so there was a very big... That better make it on screen. I don't care how pixelated it is. I don't care if it's one pixel. We're moving on to robots. There is a big moment happening right now in China, really this week. It is the Year of the Fire Horse, and that's why all these new Chinese models have dropped. That's why Seedance dropped, and a bunch of other things. But Kevin, they did a very large television production with a bunch of robots, and this is one of the most fascinating things I've seen in a long time. They used Unitree robots. And yes, this is all kind of scripted and figured out, but they were literally flipping and jumping on each other and performing kung fu in sequence. And the video of the rehearsals of this, if you saw that, I have it in the rundown: they are literally in a concrete room learning how to work in cooperation with each other. And if you compare this to the one from last year at Chinese New Year, it is shockingly better. And it just gave me this vision of the future, of these robots training to do this kind of military-like operation, and then looking out my window and seeing a bunch of them marching down the road and then turning slowly to me and being like, get back in your house right now. Anyway, it's a very fascinating thing overall. We're seeing amazing stuff happening with robots, especially coming out of China.
So, yeah, the acrobatics demos are super impressive. But every morning I roll over in a half-lucid-dream, weird sleep-paralysis-demon coma as I'm coming to, and I immediately thumb-scroll social media. And I usually see robots with machine guns. And I don't know if I'm dreaming, or if it's AI, or if it's real. There's one of these videos, if you're getting the video version, we'll put it on the screen. There's one that's been flagged as AI, but is it? It's a Unitree robot army with machine guns, with robot dogs and drones, and they are doing moves that three weeks ago would have seemed like, ah, it's not there. But as of this morning, when I saw the video, I'm like, oh, that could be there. And then they're probably hunting me because I thought the wrong thought. That's coming soon. Don't look out the window. By the way, this reminds me, I finally got really fooled by an AI video in a significant way. Following up on all these robots, there was a video of a Boston Dynamics robot, and the robot was doing this kind of weirdly sexy dancing in the middle of what looked like CES. It turns out that video was AI. And a lot of people were like, I thought this video was AI-generated, and it turns out that it actually was. I was really shocked. This is the first time that I've been fooled by a video like that. Maybe it's because I had the grounding of watching that robot in that space before, but it really did fool me. So we're entering that world where you are going to get fooled by non-human things like this quite often, I feel like. Yeah. And as you often point out, like in Ex Machina, there's a dance sequence in a movie that is now years old that would have seemed like, oh, that's clearly science fiction. But as you pointed out, Gavin, as soon as they get the flesh right, you're signing up for, what did you say, a harem of them?
It's not what I pointed out at all. You said they don't have the flesh quite right. It's not what I pointed out. You were the one who shared that video earlier. And everybody, it's time to see what somebody else did in AI. It's I See What You Did There. "Sometimes you're scrolling without a care, then suddenly you stop and shout." As soon as they get that flesh right, Kevin, that's what you said to me. I guess we can't talk about... All right. Hey, I see what you did there, Gav. Friend of the show Riley Brown shared a video. He is working on an app called Vibecode, if you're not familiar. But he made a video where he took his OpenClaw and had it work with Blender. And if you're not familiar with Blender, Blender is an open-source 3D modeling software. He basically told his OpenClaw, go make a Blender model of the OpenClaw crab within Blender. And it actually works. And this is something, Kevin, I've been thinking about a lot lately when it comes to video game production or stuff like that. What can these AI agents actually do? Blender is a super powerful tool. And as you mentioned earlier, as these models get smarter and smarter, it's very easy to imagine turning your agent loose on this sort of thing. Blender is a very big piece of software that you can get really complicated, cool things out of. I just thought this was a really interesting way to look at what an OpenClaw instance might be able to pull off. Yeah. And as you also said, Gavin, you can't wait for video games to be completely made by AIs. That's exactly what I said. And I also said the thing about the flesh on the robot. Finally, we have a really great non-Seedance 2 video that I do want to share. Ryan Lightborn made this video that made me feel good about what AI creators are doing. This is a video of a guy who used Kling 3 and a bunch of tools and put together this very fun, weird science fiction film. What a cool world, right? Yeah.
Play just a second of this so people can hear it. So if you're not watching, what we're seeing here is, like, almost an '80s sci-fi film, right? It reminded me of Ice Pirates, which is one of my favorite dumb movies of all time. But this is the kind of thing that would never come out of Hollywood now. You're never going to see somebody make this. I want to see people like Ryan get all the tools and all the money they can to make this, because I would support this, right? I would, you know, pay a small amount of money to watch that online, probably, if that's a full feature. And that feels like it could be the future of Hollywood a little bit: very specific audiences glomming on to really creative people in that way. Yeah. It's just such a fun video to watch and go, oh, right. Character creation and world building and all of the aesthetics and all of the creative human choices that need to go into making something like this. There's still very much a differentiator between "I prompt and slop comes out" versus, you know, someone who is a self-described 42-year-old former filmmaker. Their output is just different from anything that we would do or that others would do. And that's still really exciting to see. Meanwhile, we are still "I prompt and slop comes out." And so slop comes out every week. This is the end of the slop this week. We will see you all for the slop next week. Goodbye, everybody. Slop off. See y'all on the next slop drop. Slopped off. We're slopped off. Slop drop, slop drop.