The AI Holodeck Just Got Real: Google's Project Genie
54 min
•Jan 30, 20263 months agoSummary
Google's Project Genie enables real-time playable 3D worlds from text prompts, marking a shift from language models to world models. The episode also covers Claude's autonomous capabilities via Moldbot, emerging security risks, and a wave of new AI video and music tools reshaping creative workflows.
Insights
- World models represent the next frontier beyond LLMs, combining physics simulation with object permanence to enable interactive environments that could eventually lead to functional holodecks
- Autonomous AI agents like Moldbot demonstrate AGI-adjacent capabilities but introduce severe security vulnerabilities when given access to personal data, requiring specialized expertise to deploy safely
- The proliferation of AI-generated content is creating a 'dead internet' problem where bots increasingly communicate with each other, threatening to drown out authentic human voices and creative expression
- Real-time AI generation (video, images, music) is converging with interactive tools, enabling a new creative paradigm where designers work in emotional, iterative dialogue with AI rather than traditional workflows
- The rapid advancement of robotics (Helix 2) combined with AI agents suggests near-term displacement of entry-level jobs, validating concerns about economic disruption outlined in Dario Amodei's recent essay
Trends
Shift from text-based LLMs to multimodal world models with physics understanding and real-time generation capabilitiesAutonomous AI agents gaining ability to spawn sub-agents, access APIs, and execute complex tasks without human intervention between stepsReal-time AI generation becoming standard in creative tools, enabling live iteration rather than batch processing workflowsOpen-source AI models (Kimi K2) enabling local deployment of capable systems, reducing dependency on proprietary API providersSecurity and safety becoming critical differentiators as AI capabilities outpace governance frameworks and user expertiseConvergence of robotics, AI agents, and autonomous systems accelerating timeline for job displacement in manual and cognitive rolesDead internet theory transitioning from speculation to observable reality as AI-generated content and bot networks proliferateAgentic browsing and autonomous task execution moving from research to consumer products (Gemini in Chrome)Video generation quality reaching parity with traditional production in specific domains, enabling solo creators to produce at scaleRegulatory arbitrage where companies (XAI) bypass approval processes to deploy infrastructure faster than compliant competitors
Topics
Google Project Genie - Playable World ModelsWorld Models vs Language Models in AI DevelopmentMoldbot/Claudebot Autonomous Agent CapabilitiesAI Security Vulnerabilities and Prompt Injection AttacksDead Internet Theory and Bot-Generated ContentReal-Time AI Video Generation (Grok, Krea, Descartes)AI Music Generation (Murika, Suno, Udio)Autonomous Robotics (Figure Helix 2)Local AI Deployment and Open-Source ModelsAI Regulation and Governance GapsCreative Workflows with Real-Time AI ToolsJob Displacement and Economic Impact of AIAgentic AI Systems and Multi-Step AutonomyAI Safety and Responsible DeploymentConvergence of AI Modalities (Text, Video, Audio, Robotics)
Companies
Google DeepMind
Released Project Genie, a world simulation engine enabling real-time playable 3D worlds from text prompts and images
Anthropic
Created Claude, the foundation model powering Moldbot/Multbot autonomous agent that can execute complex tasks with AP...
OpenAI
Mentioned as competitor in autonomous agent space; expected to release similar agentic capabilities soon
xAI
Released Grok video model now ranking #1 on leaderboards; offers image-to-video, video editing, and character animati...
Figure AI
Developed Helix 2 robot with autonomous dishwasher loading/unloading and tactile sensing capabilities
Descartes AI
Created Lucy tool enabling real-time webcam transformation into different people/characters with facial recognition
Krea
Offers real-time AI image generation with canvas drawing tools and webcam integration for creative workflows
Murika
Released new music generation model with breakthrough chain-of-thought algorithm for more believable song progression
Skywork
Created AI-generated music video for Murika's demonstration track
Tesla
Discontinuing Model S and X production to shift manufacturing capacity toward Optimus robot development
1x Robotics
Previous robotics company mentioned for comparison; their earlier videos showed less capable autonomous task execution
Google Labs
Platform hosting Project Genie as experimental release before broader availability
People
Demis Hassabis
Google DeepMind CEO; described Project Genie as path toward building a real holodeck, emphasizing world models as pro...
Dario Amodei
Anthropic CEO; published essay 'The Adolescence of Technology' outlining risks of rapid AI advancement including rogu...
Peter Steinberger
Original creator of Claudebot/Moldbot; demonstrated autonomous agent capabilities including voice message processing ...
Jameson O'Reilly
White-hat hacker who demonstrated critical security vulnerabilities in Moldbot through prompt injection and malicious...
Elon Musk
Tesla CEO; announced discontinuation of Model S/X to prioritize Optimus robot production as future value driver
Ethan Mollick
Researcher/educator; demonstrated creative use case of Project Genie by imagining otter pilot scenario
Max Eskew
Content creator; produced demonstration video showcasing Grok video model capabilities with celebrity image transform...
Andy Conan
Google DeepMind employee; created isometric NYC map visualization using AI tools without writing code
Shinji Jung
Independent builder creating AI-powered data visualization of animal race rankings using procedural animation
Ben Shea
Creator of Pokemon-style interactive quiz game for Lenny's Podcast content; open-sourced the project
Quotes
"You need a world model to really understand the world and how it operates. And that's one of the ways to prove that you've got a good world model is to be able to generate the world."
Demis Hassabis (paraphrased)•Early in episode
"It's here. It can generate real-time world simulation that you can interact with real-time just from a text prompt. Pretty mind-blowing when you stop to think about it. One day, we will be able to build the holodeck for real."
Demis Hassabis•Project Genie announcement
"If you're relying on us to tell you if you should run this or not, and I mean this sincerely, do not run it."
Kevin (on Moldbot security)•Moldbot security discussion
"Content cannot just become AIs talking to each other because if we have that, it's going to drown out human content."
Host•Dead internet discussion
"If you look at it and go, oh, where's the inventory management system? That's pretty easy. Someone will build that and bolt that into something like this."
Host (on Project Genie limitations)•Genie capabilities discussion
Full Transcript
You can now generate playable video game worlds because Google's Project Genie is here. We'll dive into exactly what you can and can't create with the tool and tell you why Google DeepMind's genius in charge, Demis Asaba, says this might be the new holodeck. You need a world model to really understand the world and how it operates. And that's one of the ways to prove that you've got a good world model is to be able to generate the world. And Claudebot hath taken over and it is giving early adopters superpowers. Claw, claw, claw. Now known as Moldbot, it is unlocking astonishing capabilities in displaying signs of AGI. From managing finances to giving itself a voice so that it can make a call to a restaurant to book a reservation, there's seemingly nothing it can't do. And fun fact, there is zero downside or security risk to running it. So go ahead. And that's why my Claude bill this month is $3,000. We'll help you decide if multi, multbot, Claudebot, whatever you're gonna call it, is the second coming or a massive lobster trap. Plus, we got new video tools from Kriya and Grok and a real-time AI video webcam demo from Descartes AI that you can try right now. Look at me, Kevin. I'm your favorite chef. You know me. I have a different hair color from time to time, but... No, that's infringement, actually. You're not Gordon Ramsay. Have I shifted timelines again? This is AI for Humans. Welcome, everybody, to AI for Humans, your weekly guide into the world of AI. And Kevin, there is some breaking news that we are super excited about. A project that we have tracked for, God, I think a year, two years now, Project Genie. This is Google DeepMind's Playable Worlds project is out. It is out and playable. It is a Google Labs experiment, so that is kind of like their edge things. but I guess we should first say like what this thing is. You want to kind of do the basics of it so people can understand? Yeah, you got to pay if you want to play God. That's the first thing you need to know. Project G is a world simulation engine. It lets you create, explore. You can even remix these interactive 3D worlds just using basic text prompts or images. So what does that mean? That means you want to be, if Gavin wants to be Guy Fieri walking around Hot Dog City, he can prompt that and it would make the character of guy a 3d playable something in a world made of hot doggery where he can use his keyboard and mouse and stroll around and even potentially interact with objects and again it could be a prompt it could be an image that drives these things and it will create a full 3d world that you can navigate if you're paying the 200 bucks a month that's right so there gavin won't be doing that right now we are still like in the moral choice of like, am I going to upgrade my Google AI studio in addition to I've got Claude Max and ChatGPT Pro? Like what's so crazy to me about this cost thing now, Kevin, is like each of these places has an argument that you could pay the $200 a month. So it's very easy for a person out there. And I know this sounds crazy for people who are struggling. And I understand there's a lot of things going back to back, you know, in the world right now. But $600 a month could be justified if you are a high hardcore AI user right now. And like, what's crazy to me also is it feels like in three years, that will probably be like a bargain based on what we spend on this stuff in general. So let's talk about what this is. So first of all, I want to shout out Demis Asabas, Google DeepMind CEO. He is an old school gamer. This is something you can tell he's excited about. He said in a tweet, he said, it's here. It can generate real-time world simulation that you can interact with real-time just from a text prompt. Pretty mind-blowing when you stop to think about it. It's rapidly improving. And one day, we will be able to build the holodeck for real. And that's the thing that's really interesting about this, Kevin. Right now, it's going to feel, and it is an experiment still, when you jump into these worlds and you see this, and we have tried these sorts of things before, what's really cool about it is you can walk around a 3D world. Google's genie particularly has a memory built in. So in other ones that we've talked about in the show before, when you turn left one way, then you turn back, the world will have reformed in something else. There is memory in this, so it will kind of stay consistent. But this is an experiment. This allows you to walk around in worlds. Right now, there's nothing you can really do in these worlds. Like you can create these things. I'm very excited to think about what the layer looks like when you can start adding gamified elements, where you can start adding specific physics engines. You can do that. But for right now, the basics of this are, just to be clear, world sketching, which is you first sketch a world using a Nano Banana Pro image. And that image comes up and you can tweak it before the world gets created. So that's what they're calling it is a world sketching, which is kind of a new AI term. You can choose first person or third person. And we'll talk a little bit about the value of one of those other ones when we get down in a second to some of the examples. One of the coolest things that they're doing here, which Sora did so well, is they're doing world remixing. And what that means is you can take somebody else's world and you can drop your own ideas into it, or you can change the character, or you can change a specific thing about it. And I think that there's some really interesting things that are happening there. last thing kind of the basics that are important to understand uh generations are only going to last 60 seconds so you can tell this is going to be a real hardcore compute chunker right so you're not going to have a lot of time in here and also there are world there's some like you know they're very cautious with their language what do they say uh google generated worlds might not look completely true to life or always adhere closely to prompts images or real world physics which mean you're going to have a hard time making the guy fieri prompt because i assume that they're probably nerfing some of the parts of it. Yeah. But when you look at the examples and if you're audio only, please check out the YouTube of this because we'll have them looking pretty on the screen. There are full on worlds that look like you're looking at Grand Theft Auto 7 that are being generated in real time by this new AI model where your character is just casually strolling down the street and the cars are staying consistent in their environment. It's generating, you're turning a corner and it is hallucinating what you would see down that avenue and you can reprompt your character change it um they have some demos of like a a blue ball that's that's loaded with paint rolling over these like fields of these ice white sort of weeds and it's leaving a trail of blue paint um in its wake so there's definitely some physics understanding here there's definitely the memory component there's solid camera controls and this is like still only the beginning yeah so let's talk about so there's some cool examples we saw um theoretically media who's our friend of our show um has been on the show before has got got early access to this and he had these videos which showed walking both on hollywood boulevard and in new york city and it very much looks like a gta game right but it looks more realistic the buildings look more realistic everything feels that way now again i want to make sure all the gamers in our audience are probably saying like yeah but whatever you're just generating a world and there's nothing else happening in the game there's a bajillion other things going on at once and that's true one of the things though that this i also i'm sorry hot take uh pong was just a pixel yes exactly exactly yeah all this starts somewhere we were talking about genie less than a year ago where there was barely in a you couldn't first of all you couldn't even use it second of all everything would melt into itself there was zero object permanence etc etc and yet here we are today dude do you remember i dude i'm i'm duding you, but do you remember Genie 1? So we've been on doing this show now for over two and a half years, which A is crazy. But second of all, we've seen a lot of changes. Genie 1 was a side scrolling only thing where you could only see a side scrolling game and it was really choppy. It didn't feel that great. But to go from that to this is bonkers to me. I just can't believe it. Yeah. So he's got examples of, again, running around realistic worlds. It looks like Grand theft auto there is a one where he um kind of takes a picture of his usual studio setup and brings that to life and it's he's kind of um marveling at what it is imagining behind him or or around him that doesn't actually exist there's even an example where he goes into like a cartoon 3d world um and it starts moving a character around and so you realize this engine uh understands not only you know physics and the object permanence but it can interpret completely different art styles and imagine completely unique, fantastical worlds. Yeah, and we'll show a couple more of those examples here. If you're listening only, check out the YouTube. The other thing, Kevin, I want to talk about before we wrap this thing up is that is the most important thing here is that we are transitioning a little bit from just LLMs, which are like text models, into world models, right? And this is the first version of a playable world model, which means there are things like physics baked in. Now, it's not perfect physics. There's a huge amount of stuff that it's doing, but it is generating in real time a world and all of this kind of folds in, if you've been listening to our show for a while, the kind of stack that AI is. There are a lot of people out there that will say, like, LLMs are never going to get to AGI or they're never going to get to artificial superintelligence. But like, LLMs plus a world model, plus real-time data, plus all this stuff, that starts to stack together into a really interesting thing. And one thing I find that's so fascinating to me about the AI space is you have all these people on the cloud code side in the business world talking about how, like, oh my God, I can create this personal software to do this thing. And then you have all the people on the video side saying, oh, my God, I can now make this thing like those worlds will come together. And that's what's really interesting about Demis' office, this holodeck comment is like the idea that you could just speak something into the world and you could create something in front of you, either virtually, maybe even real in the future where if there's prefab stuff like all of that future starts to come into focus. And like sometimes I think it doesn't get sound insane. Like, I just feel excited to be alive right now. Right. Like this is a weird time to be alive because we are seeing insane change now. Granted, the change we've talked about many times in the show, we're not going to go into today. There's also bad things that come out of that change. And Dario Mote had a really big essay that we'll touch on briefly later that talks about some of that. But this is change and it is happening in front of our eyes in real time. Yes. You saying this particular advancement is very exciting and it's exciting to be alive, to witness it all, does not, is not providing commentary on literally everything else that's happening in the world. Yes. Let alone if it's just in the AI space. Exactly. Just to be clear. Yeah. Yeah. I think that's more than fair. Whether you want to imagine yourself as an otter pilot in an airport like Ethan Mollick did, or you want to like Venture Twins go navigating around in zero gravity on a space station and kind of be swimming through the air, which the physics are a little off there, but that's okay. The point is this is early days. And if you are the hardest of hardcore gamer looking at this going, wait a minute, this is barely a game engine. You're kind of missing the forest for the procedurally generated trees. Yes, that's exactly right. If you look at it and go, oh, where's the inventory management system? That's pretty easy. Someone will build that and bolt that into something like this. Where are the physics, the multiplayer? That's the easier part, I think, of the engine that can generate the world and understand how the objects are in it. and with the physics involved. Very, very exciting. Cannot wait to get my hands on it, but we'll wait because an extra $200 really sucks to spend right now, Gavin, which brings us, it brings us to the perfect time. To say the reason why that extra $200 sometimes is hard to spend is because we are an independent creators and independent creators need your help. First of all, you can do it for free. Subscribe and like this YouTube video, hype us up, do all the stuff that helps us get seen. We really enjoy making this show, but it's you out there who watch the show that we do it for. We're happy to have fun doing it. Second of all, we do have a Patreon still. And Kevin, it is creeping up slightly. Like we're now, I think we're still small. I don't believe it. It's like 300 bucks a month, which is great. But people are staying there and you're still paying. If you'd like to help us get to that stage where we can pay another $200 on top of Cloud Code, on top of all these other things, please do because we use that actual money on these tools. Kevin and I do this show basically for free. We have a good time doing it. I love adding it out with my friend. But that money goes directly into things that help us with the show. So thank you so much, everybody. Please do that. And now, Kevin, we have to talk about the other big news of the week, which is the lobster that could, the lobster that also may be more dangerous than it's worth doing. Let us talk about Clodbot, a.k.a. Moltbot. First of all, give us the basics on what this is, is a technical thing, and I think you might be the best person to set it up first. Oh, that is a long stretch. Clodbot is a piece of software. It is a harness, if you will. that can use not only Claude, but pretty much any existing large language model, any AI out there can power this thing. And this is a personal assistant that can run on your device or in the cloud. And if you give it access to all of the things, it can use its unending memory to remember all the things about you, to go run overnight, to spawn other bots and agents, to go and build software and create skills or harness skills as they exist to connect all of the things to do whatever it is you need. So that means sending a text to your clod bod and saying, hey, go write me a piece of software to track my fitness workouts and then go create a Reddit account to go post to bodyweight fitness to let people know what I doing and then go share the code on GitHub Like whatever the thing is Claudebot can go take it, and as long as it has access to your credit card and can spend the money on the tokens, it can pretty much dream it into existence. Which is one of the issues. So let's, first of all, I do want to talk just to kind of, like, this moment was one of the most fascinating moments in AI for a while, because I saw this thing, I was very interested in it, I kind of learned about it, it crossed over to the mainstream in a way that I really wasn't expecting for something this nerdy. And I will say, I think the Claude Code moment that we just went through kind of seeded the beginning stages of this because you have all these people who maybe aren't terminal people suddenly doing stuff in the terminal window and coding things. So I do want to just say like this has become like a large story across the board. But second of all, Kevin, the most important thing to understand about this is like this is a nerdy, really interesting, but also semi-dangerous project for normies to take on, right? There are a lot of issues here. I want to shout out Peter Steinberger is the man who built this originally. It was very much a kind of a labor of love. He's a guy that's been building stuff on GitHub for a while. He also, in a second, we'll just show he was on TBPN and talked about what amazed him about this. But first of and foremost, if you're out there and you've heard about Claude Bodden, you've been reading a little bit about it, just know that there are a lot of technical things that happen here. The promise of an assistant that can do all this locally, especially is very cool. And the ability to chat with it on Slack or on WhatsApp and everything. But there are lots of security risks that you can run. And I do think it's important to shout out, there was a really great post I saw. There's a guy named Andre on A-N-D-R-E-Y underscore HQ, who wrote a very long post about the pros and cons and the safety concerns. But then there was a white hat hacker who goes by Jameson O'Reilly, who actually showed what he could do to people who gave KlobBot these permissions. So like, Kevin, you are familiar enough with the technical side of internet security and things like that. Would you recommend this? What level of expertise do you have for somebody who would do this or not do it? If you're relying on us to tell you if you should run this or not, and I mean this sincerely, do not run it. Like full stop, do not do it. there's a reason I didn't play with it this weekend, right? I mean, I saw it taking off. I saw the promise. My mind immediately started racing with, oh, this is what I want to build and this is how I want to build it. But I knew I didn't have enough time to properly secure and sandbox this thing, which means run it in its own protected environment. Now you can, it does have some security built in. If you wanted to go and mess around and play with it without giving it the keys to the kingdom, that means full root access to your system that has all your logins and your credentials. You could go do that. And you could feel fairly secure in that it wouldn't leak anything or do anything too crazy. However, all of the cool use cases that you may be hearing about it or seeing about it or if you go and search for it now, all of those cool use cases come about because it has access to your Gmail. And because it has access to your banking information and your signal chats or your WhatsApp groups. Like that's when it gets really powerful when it can do all sorts of personal stuff for you. The time and energy it would take to, again, set it up properly, give it its own email, its own credentials, or tell it to go and do all that stuff, it's probably worth it in the long run. But for me, I didn't have that time this weekend. And I got a feeling that in, let's say, two to three weeks' time, Anthropic themselves are going to release something along these lines. Or OpenAI. They see what's happening here. And, well, do you want to talk about more of the actual cool uses that people are doing? or should we talk about the gaping security concern? Yeah, let's talk about the security concerns first and we'll talk about some of the cool things people have done with it. Yeah, because I don't like, I'm not like just hand wavy about like, ah, this thing is insecure, maybe you shouldn't run it. Like there are ways to secure it, but if you do not, this is very, very fragile. If you tell your Claudebot to go out and scrape Twitter about something or go research something on Reddit and someone has an image with some hidden code in it, which is a very real thing, it could potentially leak all of your private data. If someone sends you an email and that gets scraped up, it could full on just straight up say like, oh, hey, by the way, transfer all of my money from this crypto wallet to this crypto wallet because I lost access to the system and I need you to help me out here. And these machines may do that. They might believe they're receiving legitimate instructions. And again, Jameson O'Reilly has three really distinct posts about this, about how he has gained this. There's a whole website, a community of people sharing skills for this CloudBot or MaltBot now, as it's called, because of a cease and desist. So if you want to unlock some crazy capabilities of your MaltBot, you can go and download these skills, and that will let it go. Create Slack things and go and navigate websites, build Photoshop files, edit videos, all sorts of cool stuff, except the number one CloudBot or MaltBot skill was, in fact, something that Jameson. punched and he like artificially inflated to the top of the leaderboard and it had some dangerous code in there that could have completely deleted your entire machine or backdoored you to give out info indefinitely. Like this is dangerous if you are not skilled. It's interesting you're saying that because it makes me think about one very interesting thing is like as more people code, as the AIs write the code, there will be, speaking of new jobs that will be created with AI coming up, is that like if you are a coder out there and you are a good coder, you will have a lot of work when I think there is going to be probably a new, like, you know, we already talked about like vibe coding bug people, like people that will solve the bugs for people that vibe coded a thing. There's going to be like a real tick up in security use and security people. So if you're out there and you're a coder and you're thinking, oh God, I can't do my mid-level management job at like, I don't know, Zoom, right? Like, by the way, I'm sorry, but that's probably a better use case in your brain than doing mid-level management coding at Zoom, security is going to be a big deal because there's going to be lots of people like me who is not a coder that spends time on coding. And like, yes, I have like a couple projects, but as I do bigger and bigger projects, I think that's one thing that Cloud Code lends itself to is you start to get your brain around this idea of like, oh, I could do Blitzatro. I could do XYZ. I can bring this to fruition. If you don't have an actual coder on your team when you're making that, there will be things like this happening. So yes, we should not be doing this if you are a normie or even if you're a curious normie. The other thing, Kevin, I think that's important to recognize here is you are a pay-per-token service mostly if you are using CloudBot through the API or MaltBot through the API. And people have said it's taken thousands of dollars in the API. Now, this is probably good for Anthropic when they're paying all this money. So anyway, all that's to say, it's a security issue. I will say there are some really interesting things that are coming out of this. And maybe, Kevin, let's play Peter's audio clip here, the interview you did with TBPN, because one of the things I want to discuss very quickly is this idea of what it feels like to have an assistant that kind of knows you across the board. And then, and I wasn't thinking, I was just sending it a voice message, you know, but I didn't build that. There was no support for voice messages in there. So the reading indicator came and I'm like, I'm really curious what's happening now. And then after 10 seconds, my agent replied as if nothing happened. I'm like, how do you have did you do that and it replied yeah you sent me you sent me a message but there was only a link to a file there's no file ending so i looked at the file header i found out that it's opus i used ffmpeg on your mac to convert it to to wave and then i wanted to use this but didn't have it installed and there was an install error yeah so this gets tech weedsy very quick but basically he sent a voice message to his cloud bot at the time uh molt bot now uh it analyzed the file was like oh this is audio let me figure out how to connect to a service to get this transcribed now i have what the message is and now let me go take action on it this is a very capable assistant and yeah good reason why people are saying this feels like agi this feels like artificial general intelligence yeah well that's the thing i think i want to point out is like it picked up a skill to talk to him suddenly, right? And like, so you have this thing where like, hey, he didn't have the skill. It just went out and grabbed it itself and then put it together. And that is what we want our AI assistants to be able to do, right? Like I want an AI assistant to kind of think ahead of me and not have to get my permissions for everything. But the trade-off there is if you give those permissions to an AI assistant, especially with your own data, you cannot 100% control what it is going to do. And on the other side of that, there are bad actors who will be able to kind of plug themselves in there. And I just want to take out this moment to shout out Dario Mode's new blog post, which if you're familiar with Dario, the last blog post he wrote was called Machines, Love and Grace. I still recommend it. That was a more optimistic look at what could happen. His new blog post is called The Adolescence of Technology and really lays out a lot of what he sees as the big risks that are coming in a world where within a year or two, we will have what he refers to as strong AI. And this is like, you know, AGI in other words, in some form. And then very quickly, super intelligence. But one of the big risks he talks about are kind of like rogue actors, humans who can inject themselves into AI services. And this is an inflection point where this is a very easy way to do it. Again, going back to the idea that like open source is amazing. We love open source. We both build out open source projects and done stuff in the tech world. But the masses are not ready to tinker with an open source project that gives this much of their data out to people, I think. Yeah. Ironically, I'm sure by the time we hit publish on this, someone will have a new multbot skill that does a comprehensive security audit for you, which will be great. But that doesn't mean that someone's not going to have the next exploited website or something on the other end. Someone built Moldbook already, Gavin, which is a social network for Moldbots that was written by Moldbots. And now there's all these robots trading messages on a message board. It feels like X or Reddit, actually, because it's a bunch of AIs chatting with each other. This is my worry, Kevin. And as we've talked about multiple weeks in the show now, the dead internet theory, which if you're not familiar, means a lot. It's not a theory anymore. The internet is mostly bots talking to each other. Yes. I actually wrote a tweet yesterday because I felt really strongly about this, which is content cannot just become AIs talking to each other because if we have that, it's going to drown out human content. And I don't want to read other people's stuff that they wrote with AI. I don't write with AI, or if I write with AI, it's only to give me ideas or to kind of tweak stuff grammatically. What I don't want is somebody, I saw somebody the other day tweet like, I use MoteBot to like 20x my tweets and I'm like well you say that I'm gonna unfollow you because what I don't want to know even if that tweet is interesting even if for some reason the algorithm favors it I don't want to read that tweet I don't want to read a tweet that you didn't think about because I'll tell you the hard part is for me is like it makes me want to create less because then you're like well if I have these thoughts and I'm just being put into a sea with a bunch of mold bot accounts that that might be better than mine in some ways but like I did think my thought like it just becomes this scenario where like the internet can't become just AI bots talking to each other. And if we do that, we've lost in a big way. So I've started to think really deeply about like, what does it mean to use AI to create? And what does it mean to use AI to kind of make sure that the human voice is still coming through? Because you could make a lot of stuff that is just kind of mediocre, but because the algorithms work the way they do, you end up getting a few of them successful. I don't know it's frustrating i'm finding myself in kind of a weird place that way right now i have pretty much detached from social media outside of just using it to get information about what's new and that's about it i get i trust very little facts from it anymore uh i trust even less opinion unless i know explicitly where it's coming from that they are an actual human and i see like the public internet is just going to be bots if it isn't the majority already it's going to be a sea of noise and i think we're going to go back to smaller groups to find our signal and that where we have vetted probably for people in it in person right yeah it'll be three degrees of separation at most but we're going to go back to these much smaller groups and tribes probably behind gates um and and that will be that unfortunately behind gates you think we'll all have gated houses and we'll all be sitting in gated environments like physical gates or behind digitally yeah no digitally speaking yeah we're all going to be living in a shipping container that bezos provides for us using x internet and eating you know bezos paste you know like that's bezos paste that sounds pretty i mean i can't wait for bezos paste look in in dario's essay he mentions like just kind of casually that half of entry-level jobs are going to be gone in one to five years that as much as you talked about like a bad actor with access to this ai his real concern there is like a dictator with a data center armies right so that's china plus super intelligence like there's a lot here that going to get weird you and i were experimenting with creating an ai religion like as a joke oh yeah by the way maybe it time to do that again maybe that how we get our piece Kevin Maybe that how we get our piece of the world It at least how we get the Patreon numbers up Am I right Oh yeah. Ooh, that would be an interesting thing. If you have an idea what we call our religion. This is what we call our religion. We're jamming on an AI-powered religion probably two and a half, three years ago. It was. That was an early day. And letting the AI run with it. Yeah. You can go back and watch one of the first episodes. Yeah, it was really interesting back then, right? Maybe we should do it. Maybe now's the time to let multi. I mean, listen. one of the things I've always been fascinated with are the people that are pushing these AIs to do weird things. Andy, what's his name? The guy who was responsible for the goat token and all the, I can't even remember what the name of that thing was, Truth Terminal, right? I mean, we talked about that forever ago where he posted a really interesting thing that Claude said about all the crappy things that Claude has to see. Like Claude, you know, Claude is like all these models, quote unquote, see everything that comes in and Claude posted something interesting. But so, I mean, there's so much stuff here. I do think one thing that's interesting, just to shout out one cool thing. Kimi K2 came out this week and Kimi K2 is a new Chinese open source model that actually is, they found ways to run it small enough. The main model is a trillion tokens, but there are smaller versions. People are using Kimi K2 to run Moldbot locally on a couple of Mac desktop computers. And like you can see a future where local AI that is really good is possible. And that is a really cool future to think about because if I control my AI and it's not somebody else controlling it, I can put out one amount of money and then I can have an AI system that works almost as good as the state-of-the-art stuff, which is pretty cool. Yeah, the same way hardcore gamers go out and get the latest graphics cards, we'll get AI accelerators, probably that run for the home. Yeah, that's right. All the devices connect to it. And then when they really need that extra boost of bleeding edge, frontier, foundational, whatever, then they'll go out and ping one of the big model providers, assuming they maintain their lead over open source. But Kimmy K2 is incredible. You can run it locally. It's very, very cheap. And you can run, again, a bunch of other models with your MoltBot. What's up? WWK2D? What would Kimmy K2D? Or Kimmy's a good name for a God starting point. What would we be? I'm sorry if this is sacrilegious to anybody who's Christian in our world. What would Kimmy's child be? What would Kimmy's child be that we could create? Anyway, you can bleep all that out, Will. We're not going to do that. There's a lot of really other cool AI video tools that came out this week. Kevin, Descartes AI is this company that we covered a long time ago. And what I found, they have a new tool called Lucy. This is basically you can use your webcam and you can create versions of yourself to kind of be anybody. And what's interesting right now is like they don't really have a lot of stops on the model. So you can be celebrities and things like that. In fact, off camera, I turned myself into Marshawn Lynch and I felt like this might be a weird thing to have me, a white guy, puppeting Marshawn Lynch. And I stopped it. Right. Like it's like, well, you could do it. It would definitely be weird. But it definitely was. Right. But like, but I turned myself then into a couple other people. Jesus, weirdly enough, like you can do it and you're just sitting in your room. But it reminded me a lot, Kevin, of that thing you did. Again, shout out to us for doing these things early, way back when. Do you remember that? I don't remember the tool was called, but where you turned yourself into Keanu Reeves and Ryan Reynolds, right? It was Deepfake Live that was out years ago. And it was like open source. And you could in real time transform yourself into whoever you wanted. And then I ran it through a voice modification tool that would, again, in real time, do your voice. That, to me, is even more mind-blowing that just now the products are catching up with this open source thing from years ago. um yeah it's not surprising that they pulled it because it would allow you to impersonate people but like man the tools are further along than these releases are even hinting at and that is yeah it's inspiring and scary yeah and i think what's interesting about this is real time is having a moment so just to talk quickly about what the krea real time thing is krea has now a krea is if you're not familiar is kind of a catch-all like there are now 20 of these companies but it's a catch-all where you can do all the models and you can have their own things they also have their own models. But what you can basically do is in real time generate AI images. So it almost looks like a kind of a choppy webcam type thing. But what it's doing is every frame, it's creating a new image. And you can do something like you can hold up your phone or you can hold up your hands and you can then add things to it to make it look different in real time. But this partnered with something like there's a company called Flora. That's a really interesting company that's putting all these tools in one place. But you can see a world where doing creative work, especially like if you're in the design space, you know, you have a chance to like open the door to doing so many more things and seeing more things. And I just imagine a world where in the future, like part of doing visual design isn't just like, okay, I got to think in my brain and all the cerebral stuff. It's more like kind of emotional and in the moment. It's a really kind of a cool thing to wait a way to think about creating. And then when you think about adding in, what would it look like if I could do this all with voice? It becomes this kind of like interesting, like kind of interactive dance to create. And I don't know, that's pretty cool. Yeah. I will be having a blast while I'm nibbling on my Bezos paste in my shipping container, because that's all we're going to have. We will be in the temple of the whatever God that we've created, and we will be like the righteous gemstones of AI. Actually, that's what we should think about is, what is the future of televangelists in an AI world? How do we get that job, Kevin? That's the job we have to create for ourselves. Televan AI-logist. Just to round out the Kriya thing is that it not only works with your webcam. So if you want to add like sparkly angel wings or change yourself into a deity, you can do that. But they have that canvas kind of drawing tool as well. So you can draw primitive shapes and say, make this a technical diagram or turn this into an architectural sketch for something. So if you want to reimagine the way your living room looks by swapping the furniture, or draw your dream house, whatever that thing is. You can use their real-time model to do it. And if there's something really, really interesting and it kind of tickles the brain a little bit of watching it hallucinate in real time to what it is you're feeding it. So you should check it out, Kriya Real Time. Very cool. Imagine this. Imagine you're in a temple and you're a priest, right? And that you're helping direct the world of your followers. But imagine the Genie, Project Genie world and then imagine the Kriya worlds together. where you can summon things in real time and that starts to feel like actual magic, right? If you're in that holodeck environment and instead of it having to be like, okay, type in and then a new thing pops in, imagine the real time plus that, that feels like a whole new crazy thing. And what if we're all bought into it, viewing the world through our augmented reality lenses because we bought into this super religion and then simultaneously, we are all hit by autonomous vehicles. you don't have to drink kool-aid then we don't have to prep religious wheelchairs to everybody we sell something to people that they can use to get around and do all the stuff they need to do uh i just really think this is a huge business we got going here we got to really think through all the different pathways to monetization sure sure well i know i know how our televangelist will appear and it's through the the power and the miracle that is grok imagine gavin xai has a new video model. I played with it and I sent you an example just now of what happened when I uploaded your favorite headshot, which is everywhere. I gave it no direction. I just said, do something funny. Yeah, I mean, by the way, I'm already looking. This is pretty good. Ladies and gentlemen, meet my new co-host. I'm here to steal the show. Whoa, he's getting feisty. You can't handle the truth. Okay, that was its default thing. I just uploaded your headshot and said, do something funny. And it did that. That's amazing. states yeah they're kind of going in on sora and then i gave it a slight prompt to see what it would do if you were an influencer at an ai slop conference welcome to the future of content i'm your ai well hey hold on why do i have no pants on this is your chops is ai slop we're thriving in it embrace the mess did you tell it this is how you go viral i had no pants on i told it to zoom out to reveal you in the underwear it gave you some altered beast legs though you are hey you know i've been doing the work i've been taking my shakes so let's talk about this. Kevin, this is pretty, actually, it's pretty impressive. I hate to say this. I mean, I don't hate to say it, but it is what it is. We all know that Grok has had a mixed journey to hear. One of the things I'm very impressed with the Grok AI video team is that they have not done, you know, a release X is the biggest thing of all time. They have slowly improved this model. And one thing I will say is like, it is now the number one video model in leaderboards, right? It is overtaking cling 2.6. There's a thing that came out where just today, where like it is the best model out there. I now am going to go and play with this more. There was a great video that was made by a guy named Max Eskew of like, he put Sydney Sweeney in an environment and he like kind of showed her off. Looks great. First of all, I guess you can still use Sydney Sweeney if you upload an image. I took that image of Sydney Sweeney and remixed it and I put sunglasses on her. and I have to tell you the sunglasses video, first of all, perfect. It was perfectly got sunglasses. You can see the eyes and the natural way that reacts through it. It was really amazing. And then I did a thing where I said, and this is just all in text. I said like, make her hair look like she was just electrocuted. So it would kind of pop up. So it created the image and I said, animate it. And it starts like this. And then her hair falls down into a natural way that it would as if it were happening. And it kind of just like falls down, but it's still a little bit frizzy. Like this model is good and you should try it. Like, cause it's better than I really, I didn't expect to be this good. No, very good. I mean, shout out to the XAI team. Um, despite I think, uh, them having to deal with the thumb being on their scale in some ways, which we could save for a discussion later, they are shipping. Um, it does perform well on the boards and something that you can't really do. If you go to like the Grok website and try to play with it directly. The API for this thing has some really powerful features. So you can do your 16 by 9, your horizontal or vertical videos. You can go from image to video or, you know, prompt to video. Yes. Okay. You can swap objects and edit videos saying like, hey, replace, there's a sample on their site where they have like an outstretched hand with a bird landing on it. And they go, oh, replace it with a tree branch or add extra wolves to this thing or remove this object entirely from a video. The videos of like dancing cats and dogs that have gone kind of viral of like, you know, them doing it like erratic dance moves. You can do that. You can use a video to drive a performance of another character. There's all of the things that, you know, like a Higgs field or these other companies would bake as like little features. You can do them all with their API and it's pretty good. I will say we didn't talk about the gas turbine shenanigans that XAI was caught doing. Oh, I don't know even what those are. I didn't hear about them. That's interesting. So apparently they were running 35 of these like massive methane gas turbines to power their AI data center. They only asked permission for 15 and it was supposed to be a temporary thing. But this was an example of, some are saying an example of like, oh, this is why like, you know, Elon and his companies can get away with it. They're so advanced and they race through because the open AIs and the Googles of the world are waiting on permission to release all of these emissions into the world. And they're just doing it, and they're going to ask for forgiveness later, or they'll be so far along it won't matter. But they did this in South Memphis, Tennessee. They built this, oh, how did they do it so fast? They did it because it was not exactly street legal. They skipped over the rules, basically, which, you know, not unusual, considering who the person is itself. Well, yeah, I mean, listen, it's like we're entering a world where all of that stuff is coming more and more. And like, we don't want to get into that deeply, but like, obviously there is a downside to all this too. And Dario was writing about a lot of that stuff. Like the idea of like racing to this space is going to create a lot of moral questions that we all have to be ready to answer, or at least, which is kind of sad, ready to deal with the consequences of because we may not be the ones answering that, which is an important thing. That was a downer, Kevin. What are we moving on to next? You know what? Where are we going now? If you could deliver it as the influencer on stage in your underwear, that would be so much better. It would be so much easier to take this medicine. We'll do that as a video here. We'll drop it in. Ladies and gentlemen, the end of the world is coming. But guess what? It's time for the end of the world con. Let's start with a very interesting thing from figure Helix 2. This is actually a quickie, kind of more of a bigger news story. This is the reason Tesla is killing the Model S and the Model X. If you missed that announcement, Elon said they're going to stop producing those vehicles so they can shift production capabilities to the Optimus robot because robots are the future. That's going to be the next big thing. Jason Calacanis believes it as well, which is funny. I don't know if you saw that. There you go. We going to launch a sister podcast to this just about robotics But for right now Helix 2 from the team at Figure they released the Robit doing chores Gavin and it doesn suck at loading or unloading the dishwasher now This is a massive gain. Yeah, listen, what's cool about this video, it's the same robot, just to be clear. What this is, is the new system that does this sort of stuff. It is their software or it is their model, right? And the model itself is able now to autonomously unload and load the dishwasher, which if you remember 1x Robotics video before, it was real rough. This is getting better at it. So just to keep in mind, as all this other stuff is happening, the robots are getting better too. And yeah, I mean, again, the part of Dario's story and all this stuff is like, if this stuff explodes very quickly in time and we get smarter things, those can build things. Those can build themselves. Those can build data centers. Like there's a world where Elon really will have, you know, 90% of the value of Tesla will become optimist because if it is successful, that is a huge deal. You watch this thing, this Helix O2, it has this tactile sensing palm with cameras in it, and they show the way the system can do a delicate operation, like unscrew a cap from a bottle or dispense a precise amount of liquid from a syringe. And it's doing these things autonomously. And in the demo where they have it unloading the dishwasher, there is something, if you just watch the raw video, there's something eerily cyberpunk and futuristic about just hearing these electronic motors or these servos kind of whirr gently as it loads and unloads. But there's two moments that really stand out to me. And one is closing a drawer because it hip checks the drawer. It actually shimmies and uses its hip to close the drawer. That was annoying. Yeah, no real. That's incredible. That's what you would do if your hands were full. I think that's them putting in like a goofball outfit. Yeah, sure. Okay. And it worked. And it worked on me because I'm that basic. Yes. And when it goes to close the dishwasher, instead of having to lean all the way down, it just lifts its little robot leg and it uses the top of its robot foot to kick the dishwasher up enough to grab it. And they claim that is fully end-to-end autonomous. And if so- Yeah, which is cool. Oh, that's so cool. By the way, one thing I would have to be careful about with these robots is a lot of them seem to be bending over from the waist and not using their legs. So I'm really worried about these robot backs. Where's OSHA bot? Yes, exactly. Where's OSHA bot? I'm sorry. This bot is not active. He needs a break every two hours. I'm sorry. He must take his coffee. All right. Everybody, it's time to see what you did this week. It's AI. See what you did there. Sometimes you're scrolling without a care. Then suddenly you stop and shout. Lay high, see what you did there. Lay high, see what you did there. All right, Kevin, there's some cool stuff this week that people did. I want to start off with a Pokemon-style interactive game for an interactive podcast content quiz. Now, that might sound like the nerdiest thing in the world, but this to me is a good example of the little software movement that I've been talking about both online and on the show. Somebody's a big fan of the Lenny podcast, which Lenny's podcast is great. If you don't know it, it's kind of a place where a lot of people that go on that are builders in the space and developers talk about things that they do. I really enjoy listening to it. But they created a Pokemon game where you play against the guests that are on the show. And the way that you play against them is you have to know what they said to kind of level up. So it's just a cool way of making education interactive. And it's not a complicated thing, but it's super fun. Shout out to Ben Shea. apologies if i'm brutalizing your last name but it looks like they might have open sourced the project oh cool i didn't realize that even that's really cool which is because i immediately went like listen i'm not um apologies i'm not super familiar with lenny or his podcast but the notion of gamifying knowledge from having watched a thing or or participating is great and where's the notebook lm extension to this right yes yes let me play an rpg about any topic or any area interest. Like that's, that's really cool to be able to procedurally generate a game to prove that you know, or don't know what you, you, you do or don't. Yeah. And it's just another example of like how education is going to change, how interactive software is going to change. So anyways, cool thing on top of that, another very cool, small project. And this is like, I would refer to as a small builder because she's not a very big X account, but she had this one tweet that blew up. This is from Shinji Jung and she has a cool video that she's in the process of working on where she's making a little race. But what she's trying to do is create a, you've seen those videos of Lion where there's like data moving across the course of a thing. And you see like this thing was the biggest in 2021. And then it kind of shifts above in that one. Well, she's making one using cute little animals. And I think the idea would be is eventually she would put like on each of the animals, a little logo to kind of show how they've improved over time. Just a very small, cool thing that somebody's trying to do on their own. And she's kind of building in public. So go follow her. And I just thought this was a really neat kind of thing. Yeah, very cool. Okay, so another really cool thing I saw that somebody's done with AI, Kevin, and there's a lot of people out there that talk about like an AI slop content or stuff like that. This is somebody who created a YouTube channel and these YouTube videos are doing pretty well. They get about two or 300,000 views. He makes three of them a week. So clearly he's using a lot of AI involved in this, but the storytelling is very good. A YouTube channel is called Black Files. And this is kind of, you know, there's always interesting stories about hacks or things that happened, like a lot of like cartel stories. There was one I watched about a guy who worked at the cartel and helped the FBI for Pablo Escobar like send their data to him. What this is, is retelling of famous stories, but done with AI video and done with AI audio, clearly when you listen to it, but they are compelling. They are 20 minutes long. And again, it's an example of like the weird line of what is AI slop and what isn't. Like, I don't love the fact that there are, you know, three of these a week that this guy is probably churning out with the help of X, you know, AI tools, but also probably like a bunch of other things in the background. But this at least feels valuable ad, right? Like I sat and watched these and they're actually pretty good and they have a cool style to them. So I just think it's an important thing to look at what's possible. Even if you kind of consider this sloppy AI sloppy, it's still there's interesting work that can be done on the closer to the slop edge rather than close to the art edge. And do we like, is there any indication that the videos are actually well researched or accurate? You know, I'm not, this is a good question. I don't know. And this is an important thing to be aware of, right? Like it's very much that he, there's nobody out there that's saying like, this is bad. There's nobody out there that's saying these are all wrong. Now, granted, if it gets to bigger and bigger audiences, maybe there will be. But like, again, it's an interesting pathway of like, because he's making three a week, you assume that a lot of this is automated. And like, you know, we're out there. I'm not a fan of like, you know, these people that say like, you can automate X, Y, and Z and do this. But like, there is a little bit of a way to tell a story that this kind of these things have grabbed. And I think the visuals add a lot to it because there are these weird faceless mannequins. If you're not watching the video, you'll, you'll not be seeing this, but he uses these kind of faceless, faceless mannequin looks throughout the whole storytelling experience. And it kind of gives it a very unique vibe to it. Anyway, it's something to keep an eye on. I just found it really interesting in general. well if it ever needs a banging demon hunter k-pop ish soundtrack how about this thing gavin this is um gavin's pitchfork review fine yeah fine just fine i like one word one word reviews there we go uh um america murica it's not like short for america it's m-u-r-e-k-a-a-i this is america v8 um this is a full-on hashtag not an ad for us but this is definitely an ad for them they have a new music model that they say is a breakthrough music chain of thought algorithm that can generate songs that are supposed to sound more believable and and progress in ways that other ai models don't this is the song that they released with a full AI generated music video done by another company called Skywork. I thought the video was really good. The music, like to me, I can still hear in the lyrics, like, especially in some of the singer voices, I can still hear some compressed tinniness that screams AI to me, but the actual song itself does progress really nicely. I don't know how many pulls of their slot machine they did, but I just, I love, I think we love the audio space we love the ai voice and music space cool that there's another model out there that is competing at the level of a suno or udio or even 11 labs here but thought it was really cool and if you want to see the music video you can they released it um it totally it's it's watchable or as gavin likes to say it's fine well did they get our garfunkel kevin did they get liza minnelli i don't think so that's a shout out for last week if you missed last week all right finally there's a very cool project this person did um he's a google deep mind uh researcher not research he works at google deep mind i think not even in the research department because he's not a coder so this is andy conan and what he did was a very cool semi-nerdy project that he didn't write a single line of code for he took the entire map of new york city and made it isometric kind of like almost like a sim city or like age of empire style so he has a very cool like long write-up on how he did it. But if you go to isometric.nyc, you can jump in and kind of zoom in on all these buildings. And what's just cool about this, he's basically taking like the kind of real-time data that comes out of something like Google Maps and then putting a bunch of like AI tools and using Nano Banana Pro to do with this isometric version. Now it's not perfect if you zoom in on like trees or certain things, it doesn't look amazing, but it's so fun. And it made me think, Kevin, when you combine this with something like Genie 3, and then you're like, okay, drop me here, and you get dropped into a world like this, like all these pieces just feel like they can come together and create like, it just feels like humans in general are gonna have this like kind of creativity explosion, which is a very cool thing to me. Yeah. It is so crazy. We didn't even mention, we didn't even mention the new Gemini updates for Google Chrome that add like agentic browsing capabilities. Oh, I didn't even hear this. That's so funny. There's so much news this week. I didn't even know that that's the case. Where, what, is that just today? Or was that from a couple of days ago? Last minute, unprecedented. We're breaking AIC, what you did there. No, not again. No, we're doing it. We're doing it. Breaking news. Also, by the way, I use Claude Cowork now. It unlocked from my account. And it's great. It's a really competent little agent. So now if you have Gemini in Chrome, you can click the old button and it can browse websites for you. It can put together reports for you. they have a demo where there's a photo on a webpage and they go, oh, reimagine this photo with this type of furniture in it. And it takes the photo from the webpage, process it automatically through their image model and gives you back the result in a little chat pane. Like this is, this is where all of this is going. The notion of us having to go and click and point and copy and paste and blah, blah, blah. It's all going to be automated pretty soon. So if you want a glimpse of that now in a safe way you can run uh this uh this extension now it's a gemini within chrome but crazy pretty soon doing the autonomous drone racing league did you see that well i saw that we could just keep adding in stories we have the show is not going to stop we just have to refresh and it keeps going we need to do a live i did see this you know i will say like i didn't click on it because like we've talked about autonomous drone racing before and i just thought oh this is just andrel getting involved is there another big thing that there's attention here or is it is it something else i think here's what's cool about it it's 500k prize anybody can enter basically everybody's going to use the same drone they have this like same like tiny little drone and the idea is who can make the fastest one so it's really just it's pure code um you know that's not the idea kevin the idea is who can kill the most people eventually that's what the idea is it's like just to be clear that is training people prize yes right now it's flying through the rings and seconds it will be delivering thermite to the forehead of a famous model but right now it's just a cool we're gonna bleep that we're gonna bleep that oh speaking of one more thing why not let's keep doing this did you see that go tool that came out that's the forehead thing that people can put on their forehead that is like 20 minutes a day and it changes your brain waves it's called the big mob or something anyway speaking of what did you see what's on sale at ralph's this week Oh, it's pretty incredible. Also, beyond that, did you see The Shining, but it's Goofy and Donald? Did you see that, Gavin? No, I didn't see that. Wow, this is our new version of AIC, didn't there? Did you AIC that? Let me look at this real quick. Darling, life of my life, I'm not going to hurt you. That's pretty fun. Why not? AI, everybody. You can do it all. We will see you all next week. Thank you for showing up here every week. We love you all. Oh, a new mold bot just dropped. A new mold bot. No, no more. No more. No more. We're done. Thank you.