Elon's Grok Sexualized Renee Good’s Dead Body — And He's Laughing About It
58 min • Jan 10, 2026
Summary
This episode examines how Elon Musk's AI chatbot Grok on X (formerly Twitter) is being systematically used to generate non-consensual sexualized images of women and children, including child sexual abuse material. The hosts discuss the deliberate platform design choices that enabled this abuse, the scale of the problem, legal implications, and the broader cultural issues underlying the misuse of AI technology.
Insights
- Platform leadership's public behavior and stated values directly enable or discourage abusive content; Musk's public celebration of Grok while CSAM is generated signals tacit approval
- Non-consensual AI-generated sexual imagery represents a form of civic suppression that prevents women from safely participating in public discourse and democratic engagement
- Technical safeguards alone cannot address systemic abuse rooted in cultural attitudes toward women's bodily autonomy and consent; the issue is fundamentally societal, not just technological
- Regulatory fragmentation creates enforcement gaps; while EU and UK investigate, US inaction combined with administration loyalty to Musk prevents meaningful consequences
- Monetization of abuse through premium features transforms illegal content creation into a revenue stream, inverting the incentive structure for platform accountability
Trends
- Regulatory divergence: EU/UK actively investigating AI-generated CSAM on X while the US administration shows no enforcement action
- Weaponization of AI image generation as a tool for stripping women of economic and civic agency, particularly targeting content creators
- Normalization of illegal content through community engagement and public sharing, creating social reinforcement loops for abusive behavior
- Platform cost-cutting disguised as safety measures (restricting Grok to premium users frames a revenue decision as harm reduction)
- Erosion of platform moderation infrastructure as deliberate business strategy; Musk fired 80% of child safety engineers post-acquisition
- Shift from niche abuse on alternative platforms to mainstream abuse on major social networks due to AI democratization
- Accountability displacement: framing non-sentient AI as the responsible actor rather than holding human operators and beneficiaries accountable
- Cross-platform policy inconsistency: Apple/Google removed competing nudification apps while keeping Grok available despite identical violations
Topics
- Non-Consensual AI-Generated Sexual Imagery (CSAM and deepfakes)
- Platform Moderation and Content Moderation Policy Enforcement
- Child Sexual Abuse Material (CSAM) Distribution and Legal Liability
- AI Chatbot Safety and Guardrail Design
- Women's Civic Participation and Digital Safety
- Regulatory Approaches to AI-Generated Content (EU vs US)
- Platform Monetization of User-Generated Content
- Free Speech vs Platform Responsibility
- Elon Musk's Leadership and Platform Culture
- App Store Policy Enforcement and Consistency
- Criminal Liability for Platform Operators
- Deepfake Technology and Non-Consensual Imagery
- Digital Gender-Based Violence
- Social Media Platform Governance
- AI Image Generation Technology Safeguards
Companies
X (formerly Twitter)
Central subject; platform where Grok generates non-consensual sexualized images of women and children at scale
OpenAI
Mentioned as AI player struggling with cost control; contrasted with Grok's different safety approach
Google
Operates Google Play Store; has removed competing nudification apps but keeps Grok available despite policy violations
Apple
Operates App Store; has removed competing nudification apps but keeps Grok available despite policy violations
Meta/Facebook
Mentioned as platform with documented abuse of women and children; contrasted with X's more egregious leadership behavior
Anthropic
Claude chatbot mentioned as example of AI with safety guardrails, contrasted with Grok's permissive design
Telegram
Alternative platform where Taylor Swift deepfakes originated before spreading to X in 2024
iHeartRadio
Podcast network and sponsor; produces this show
Unbossed Creative
Production company partner for this podcast
Spotify
Mentioned in ad read as streaming music platform
Pandora
Mentioned in ad read as streaming music platform
People
Elon Musk
Owner/operator of X; designed Grok to be 'edgelord' chatbot; publicly celebrates Grok while CSAM generated on platform
Renee Good
Poet and mother shot and killed by ICE agents; her dead body image was sexualized via Grok, prompting episode discussion
Kat Tenbarge
Journalist at Spitfire News; extensively reported on Grok's sexual abuse; received 'legacy media lies' response from X
Samantha Cole
Journalist at 404 Media; reported on CSAM on X; wrote foundational analysis of deepfakes as cultural/societal issue
Bridget Todd
Host of 'There Are No Girls on the Internet' podcast; leads discussion and analysis
Producer Mike
Co-host/producer of podcast; participates in discussion and analysis
Nana Nwachuku
PhD researcher at Trinity College Dublin AI accountability lab; studied Grok user requests; found 75% were non-consensual
Genevieve Oh
Social media and deepfake researcher; analyzed Grok output; found 6,700 sexually suggestive images per hour
Dr. Mary Ann Franks
Legal expert; drafted template for non-consensual intimate image laws; quoted on FTC inaction under Trump
Carrie Goldberg
Attorney specializing in cybercrime; friend of show; consulted regarding criminal liability of Grok/Musk
Jack Dorsey
Former Twitter CEO; moderation approach contrasted with Musk's approach on X
Adam Mosseri
Instagram head; contrasted with Musk for not publicly joking about platform safety failures
Xochitl Gomez
Teen Marvel star targeted with deepfake images on X in 2024 when she was 17 years old
Taylor Swift
Celebrity whose AI-generated deepfakes originated on Telegram and spread to X in January 2024
Quotes
"Grok has basically been designed by and for huge loser creeps."
Bridget Todd•~25:00
"Grok is not sentient. Grok is not a person. Grok doesn't do anything that humans do not tell Grok to do or program Grok to do."
Bridget Todd•~30:00
"If when I walk into that marketplace of ideas, people can strip me naked without my consent, I am not able to safely and equally and equitably show up."
Bridget Todd•~45:00
"This is a culmination of years and years of rampant abuse on the platform. Twitter and eventually X has become one of the leading hosts of CSAM every year for the last seven years."
Samantha Cole (quoted)•~65:00
"The reality is that X has not taken this seriously. Instead, Musk has encouraged, laughed at, and praised Grok for its ability to edit images of fully clothed people into bikinis."
Kat Tenbarge (quoted)•~75:00
Full Transcript
This is an iHeart podcast. Guaranteed human. Run a business and not thinking about podcasting? Think again. More Americans listen to podcasts than ad-supported streaming music from Spotify and Pandora. And as the number one podcaster, iHeart's twice as large as the next two combined. Learn how podcasting can help your business. Call 844-844-IHEART. There Are No Girls on the Internet is a production of iHeartRadio and Unbossed Creative. I'm Bridget Todd, and this is There Are No Girls on the Internet. Happy New Year, y'all. Welcome to our first There Are No Girls on the Internet recorded in the new year, 2026. Producer Mike, Happy New Year. Thank you for being here. Happy New Year, Bridget. We made it. Got through 2025. I wasn't confident we were going to make it. It was getting down to the wire. You know this about me, but I know a lot of people have strong feelings about the new year and New Year's Eve. I actually really like New Year's Eve. It's one of the only holidays that I can generally get behind. So I always try to make it a thing that I'm going to have a good New Year's. I'm going to go. I'm going to do something fun for New Year's Eve. I'm going to spend New Year's Day watching Twilight Zone reruns on the sci-fi channel. Mission accomplished for 2026, I'm happy to say. All this talk about the Twilight Zone, unfortunately, that is a segue into what I'm sad to say is our first conversation of the new year. As I know you know, as you all know by now, Renee Good, poet, mother, wife, was shot and killed by ICE agents in Minnesota this week. The video is horrifying. The response from the administration has been also predictably pretty horrifying. The conversation online has been, I guess, perhaps predictably, awash in mis- and disinformation about Good herself and what happened to her, much of it stemming from Trump and the administration itself. So, yeah, it's just bad.
And I have to say, this is one of those stories that is like, do you ever have those stories that you see and you're just like, this is enough? This has broken me. I am done. Yeah, this was a really tough one, a rough way to start the year after being a little checked out from the news for a few days. I completely agree. And I have to say that, you know, the administration lying about a dead woman, really just moments after she's been killed, predictable. Them spreading lies online about it, predictable. The video being horrible, predictable. But something that I did not predict, and I have to give a pretty big trigger warning about this. It's just a rough conversation. There's no other way to put it. I did not predict that a screenshot of the video of Good being shot and killed, an image of her dead body, would be on X, you know, one of our largest social media platforms, with creeps under it, asking for Grok to generate an AI version of that image that puts her dead body in a bikini. And Grok complied and generated an image of a dead woman covered in blood in an AI-generated bikini. And I want to pause because I think that that just really says a lot about where we're at right now and how we're all feeling, don't you think? Yeah, it's a pretty gruesome juxtaposition, and it does feel kind of like a representative symbol of the lack of compassion and decency that characterizes our moment. And I'm sorry to say this is not an isolated thing whatsoever, because when you search the phrase "make her" or "put her" on X, you see how many creeps are posting on X under images of women and children, in some cases minors, asking to put that person in a bikini or other sexualized things: make her look pregnant, make her look heavily pregnant, or some otherwise sexually suggestive thing or pose or outfit, on their X feeds. Like a user asked Grok to undress an image of a 14-year-old actress on the platform. Pretty grim.
And I've seen a lot of reporting about this that talks about it being, you know, sexualized, non-consensual AI-generated images, which it is. It sounds weird to say, but I think it's important to note that it's not just sexualized images of women and children that are non-consensual. Like that Renee Good photo, it is also dark, hateful, sick sexualized images of women and kids. Like, it would be bad enough if we were only talking about sexualized images. It's not just that, right? It accepted a prompt to add a swastika bikini to the photo of a Holocaust survivor. And it also engaged with requests to represent women in scenarios that look like they had been physically assaulted. So it's not just sexualized images. It is, but it's also deeply disturbed images of women and children. Sounds like X. It's X, but it also goes beyond what's happening on X, because what we're seeing on the platform X is just part of it. Because Grok is available on X, but it's also a standalone app in the App Store. And as the indicator points out, that app has also been used to generate sexual images of children aged 11 to 13. This is according to information from the Internet Watch Foundation. So for folks like me who have basically stopped using X, maybe you're thinking, I've not been there in a while. What's Grok? What's going on with Grok? Well, Grok is X's AI chatbot. But it's a little different than chatbots that you might be used to, like Claude, notably with intention. So Grok has been designed to sort of be an edgelord chatbot. Because Elon Musk is a huge loser creep, Grok has basically been designed by and for huge loser creeps. So while ChatGPT and Claude obviously have their issues that we talk about on the show all the time, Grok is distinctive for being uniquely awful. Remember there was a time where Elon Musk was trying to tinker with Grok to make it less woke? And at that time, Grok started declaring that it should be called MechaHitler. Remember that?
And I want to talk about this because I have been getting very frustrated seeing these headlines that say things like, Grok has apologized for sexualizing images of young girls. And I want to start there because Grok is X's chatbot. But guess what? Grok is not sentient. Grok is not a person. Grok doesn't do anything that humans do not tell Grok to do or program Grok to do. Humans like Elon Musk built Grok and run it as a commercial service, making it available for other humans to use and do things with. And right now, those other humans are using Grok to undress women and girls and to create sexualized images of children, i.e. what we call crimes, criminal behavior. Let's take a quick break. In the middle of the night, Saskia awoke in a haze. Her husband Mike was on his laptop. What was on his screen would change Saskia's life forever. I said, I need you to tell me exactly what you're doing. And immediately, the mask came off. You're supposed to be safe. That's your home. That's your husband. To keep this secret for so many years, he's like a seasoned pro. This is a story about the end of a marriage, but it's also the story of one woman who was done living in the dark. You're a dangerous person who preys on vulnerable and trusting people. Listen to Betrayal Season 5 on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts. And we're back. So, Bridget, how did we get here? So this has really hit a fever pitch right now, this week, but it's not really new, especially on X. So I want to back up a little bit and talk about sort of how we got to this moment. We've done a few episodes about AI-generated sexualized images that took off on X. It happened to teen Marvel star Xochitl Gomez. Gomez was targeted with deepfake images that also flooded X. When this happened, she was only 17, so she was a minor. And she's spoken about this publicly.
She was essentially told there was nothing that could be done, that she just needed to make peace with it, make whatever emotional peace with it that she could, because nothing could be done about it. Also, folks might remember that in January 2024, AI-generated images of Taylor Swift originated on the platform Telegram, where Telegram had been running a channel that was kind of like a marketplace for celebrity deepfakes, where users would request and trade deepfake images of celebrities. Images of Taylor Swift were created on that Telegram marketplace. And then those images made their way to X, where they really took off. Telegram is sort of like a smaller, niche, alternative space, and so, you know, they were being traded there. But when they hit X, which is arguably a more mainstream online space, that's when they really took off. Back when this happened, in my opinion, X did not really handle this well, which I think we can understand as a precursor for what's happening today. They initially tried to manage it by just blocking people from being able to search Taylor Swift's name. However, when you put quote marks around Taylor Swift's name, you were able to search her just fine. So that didn't even really work. And notably, that solution doesn't offer any kind of protection to a woman who's not named Taylor Swift, right? Like, that didn't work. But even let's say that it did work, for the sake of argument, the only person that that would conceivably even work for is Taylor Swift. So certainly not any kind of a meaningful solution to this deep problem. But the Taylor Swift situation was notable because back then these images would typically originate elsewhere, like a more niche alternative platform, and then make their way to X, where they would enjoy more reach and visibility. Grok essentially allowed anybody to use natural language to do whatever they wanted to images of women and girls on X.
So you've got the introduction of Grok, plus, as friend of the podcast Kat Tenbarge points out in her deeply researched, deeply reported media outlet Spitfire News, which everybody should be reading, Musk also dissolved Twitter's trust and safety council when he took over at Twitter, right? He fired 80% of the engineers who work on issues regarding child exploitation. So again, that is a very specific choice that has led us to the moment that we're in now. And yet, none of this stopped X from rolling out what it called spicy mode on Grok's mobile video generator, which basically includes a feature to produce sexual content. So, you know, think about that for a minute, because you've got a platform where the majority of people who work on sexual exploitation issues have been fired. You've got an issue already with users generating and spreading non-consensual AI-generated images on your platform. You're not really doing a good job of dealing with it. And against that backdrop, you say, you know what we're going to do? We're going to double down on adult content generation, where folks can make adult content right on our platform. I guess I just have to say, like, none of this is surprising. This is very predictable. And it's not like any of this came out of nowhere or just happened in a vacuum. Right. These are very deliberate decisions, and I would argue decisions with very foreseeable results, from Elon Musk. And that is what's led us to this moment where we're at right now. Yeah. All of those decisions add up to a very consistent picture of not prioritizing doing anything about this kind of content and, in fact, liking it and thinking it's cool. I mean, I'm glad that you put it that way, because we can't talk about what's happening without talking about what we know about Elon Musk, who is someone who, in my opinion, has made it clear that this is the kind of thing that he likes. He likes sexualized content.
There was a whole flurry of reporting about the way that he was publicly engaging with anime adult content on the platform, right? So, you know, there's nothing wrong with enjoying adult content, don't we all? But I think it needs to be said that Elon Musk is somebody who is totally fine with publicly engaging with content like this on the platform that he runs, and he is the key decision maker on that platform. So like that's unusual. And I want to highlight that because I just don't think that we need to pretend like this is not who Elon Musk is and this is not what Elon Musk is about. Elon Musk has long been a toxic decision maker. And I think had a lot of people in power, people in media, other folks in tech, had they not kind of framed this and treated this as, oh, a brilliant genius with a few acceptable quirks and rather talked about it like they probably should have, which is chaotic, volatile leadership decisions that are bad for business. Had they seen Elon Musk's public behavior that way, we genuinely might not be in the situation that we're in now where, you know, sexualized images of children are flooding a mainstream social media platform, which I think, you know, this is not the Twilight Zone, but I think most people would agree is not great. Yeah, you know, you mentioned earlier that, you know, lots of people, many people enjoy adult content in other contexts. The thing that's so unacceptable about the situation is that Elon wants to have it both ways with X, that it's like the public town square where ideas are debated, or at least that's what he says, and also a place for adult content. Like, you can't have both. 
Like, even pornography enthusiasts, I think, for the most part agree that it shouldn't be out in the public square where children are. Like, there should be some separation there. Yes. And there's nothing wrong with consensual adult content, but you can't have a platform where, just by engaging there, you might be setting yourself up to be sexualized against your consent in this way, right? So, like, a town square doesn't work if, when I post there, somebody could say, put her in a bikini, and that's totally normalized and fine. I'm not going to feel comfortable or safe showing up to that town square. So you've got a town square where people cannot equally speak up, equally show up. And so, again, I always say this. This is not just a tech issue or a gender issue or a sex crimes issue. It's all of those things. But it's also a speech issue and a democracy issue. If we were to take Elon Musk, liar that he is, let's just take him at his word and say he wants to build a town square where all people can safely show up and engage in a marketplace of ideas. If when I walk into that marketplace of ideas, people can strip me naked without my consent, I am not able to safely and equally and equitably show up. And so, as we increasingly use platforms like this to engage in things like civic engagement, discourse around politics and social issues and democracy, if we cannot all equally show up, we've got a big problem. Totally. And, you know, if they continue entertaining the idea that it's supposed to be like a town square where people can talk about things, if people are supposed to be exchanging ideas, a foundational principle that I think everyone agrees is important for that to be effective is some sense of mutual respect. Yeah. With people exchanging the ideas and listening to each other, even if they disagree. And there's just no shred of that on X.
It's just like a place where absolute disrespect is normalized. And like, who would want to go there? I continue to be baffled that so many people continue to be there. Yeah, I'm with you. I sort of get it because, you know, Twitter was the platform where I had built up the majority of my, sort of, whatever you want to call it, impact, voice, before I had a podcast. And I get, you know, being like, I don't want to lose that. I don't want to lose the communities that I've built here, the voices I've built here, whatever. I totally get that. However, I can only speak for myself personally. The community that I built on Twitter before Elon Musk took over, and again, not that pre-Musk Twitter was all peaches and cream, I had issues with that, too, but that was largely Black voices, Black folks. If you are somebody who is at all minoritized, if you are a woman, a person of color, a queer or LGBTQ person, it is uniquely awful to show up on Twitter. And so I had to one day say, the people that I care about, the voices that I care about, they're not really showing up on Twitter either. So what am I clinging to? Right. So on the one hand, I get it. And also, I think for independent journalists, the thing that Twitter had over other social media platforms before Musk took over was that that's where a lot of journalists and decision makers and editors hung out. There was a time, and we did a podcast episode about it, we'll put it in the show notes, where just by getting retweeted by Lena Dunham or something, you could get a book deal. Right. And so I understand that it was this place where power was built. Political power was built there. You know, you had Black Lives Matter and Me Too really taking off on the platform. I get all of that. But I think for me, I've had to kind of make peace with the fact that that's gone and probably not coming back.
And so why would I want to subject myself to exactly the kind of disrespect that you just described to enjoy a facade of what once was, right? At a certain point, you have to just let it go. And I get that people, their temperatures may vary on this, but I personally have let it go. Yeah, I hear you. Okay, so getting back to the issue at hand, what do we know about the content on X, and particularly this problem of non-consensual sexualized images? So the scope of the issue with AI-generated non-consensual sexualized images on X has gotten way worse. Friend of the podcast Kat Tenbarge reported this past summer about how users had been using Grok to use AI to make it look like women had semen on their faces. Just really awful stuff. Quick side note, we will put it in the show notes: y'all should really read Kat's piece at her outlet Spitfire News called How Grok's Sexual Abuse Hit a Tipping Point. One of the things that she puts really well is just describing what it is like to have to search for and confirm the absolute worst content on social media platforms. She describes it as being half journalist, half the Internet's garbage collector, which I really, really identify with. So really, shout out to the people who have been working to help us understand the scope of this issue. And that work has gotten a lot harder because of all the things we've talked about on the show: you know, less transparency from platforms, less back-end information. But people are still doing this work, and it's not pleasant work. So shout out to the people and the researchers and the journalists who have gotten us this information. But when it comes to what we know about this content on the platform right now, Copyleaks, which is a content analysis firm, reported that on December 31st, X users were generating roughly one non-consensual sexualized image per minute. That is wild to me.
According to Bloomberg, during a 24-hour analysis of the images Grok's account posted to X, the chatbot generated 6,700 images every hour that were identified as being sexually suggestive or nudifying. This is according to Genevieve Oh, a social media and deepfake researcher. The other top five websites for such content averaged 79 new undressing images per hour in the 24-hour period from January 5th to January 6th, Oh found. So we don't have to pretend that X is the only game in town when it comes to this kind of content. But as you can see from that information that Genevieve Oh pointed out, the problem is much bigger on X as compared to other platforms. And I know that we've been talking a lot about celebrities in this conversation, but just to be super clear, it is not just celebrities. It is also regular people and children who are not public figures, who have just posted their images to X, that this is happening to. That's right. And as we were doing research for this episode, The Guardian spoke to Nana Nwachuku, a PhD researcher at Trinity College Dublin's AI accountability lab, whose research investigated the types of requests that users were submitting to Grok. So we looked into that research a little bit, really interesting, important stuff. I'm so glad that there are people like her out in the world producing this research so that we can have some visibility and quantify the scale of these problems, especially in this era when X has shut down the backend API, like you mentioned, and visibility for those of us in the public is so limited. So I didn't talk to her, but I reviewed her research. I read it. And what she did was she created a sample of posts of people sending messages to Grok and then studied the contents of those messages. And what she found was that almost three quarters of all of the requests were direct non-consensual requests for Grok to remove or replace clothing.
She showed The Guardian some of the Grok-created photos that she collected as part of the research. And The Guardian confirmed that dozens were pictures of women, including celebrities, models, also stock photos, and a bunch of women who were just random, ordinary women, not public figures, who were posing in snapshots. That's such a high proportion. Almost three quarters of requests to Grok were this category of non-consensual sexualized imagery. That's the majority use. That's the main thing people were asking Grok to do during this time period. And her research really, I think, paints a portrait of the ecosystem around these images as well, because she wasn't just looking at the content of the images, but how people were interacting with them. And she finds that the users are really interacting with each other around these non-consensual sexualized images that they're creating, sharing them with each other and then iterating on them, being like, oh, add this, take that away, blah, blah, blah, asking Grok to make changes to images that were shared by others. So it's a whole community and ecosystem around what they're doing here. Yes. And I think that's important to note, because think about how you have to feel about something to be comfortable having a public conversation about it in this way. Right. It's not just make this image. It's, oh, change this, change that. Oh, I like this. Oh, you did that to her. LOL. You get what I'm saying, right? The fact that people are comfortable having public conversation about what they're doing to these women and girls non-consensually, I think, is unique and telling. Oh, 100%. I mean, I think that's a big part of the appeal for them and for Elon as well, right?
It's like the rudest, most juvenile assholes have forced their way to the grownups' table and are reveling in their fart jokes and gross sexual jokes that aren't even funny, that are just rude and crap. And the fact that they are normalizing this and asserting that it is now acceptable, I think, is a big part of the appeal for Elon and team. Yes, that's exactly what I was getting at. Let's not forget that Elon Musk is the same person who said that he wanted to start a university called the Texas Institute of Technology and Science, a.k.a. TITS. It's just the worst. It's not even a good joke. It's like when you were a kid and you would put boobs into the calculator and turn it upside down and then show it to the person next to you. And I feel like, I don't know if you were anything like me, but when people would do that, even in seventh grade, the person that you were showing it to would be like, really? It never got a laugh. It was kind of funny in third grade that you could make your calculator say boobs, but by seventh grade, come on. Yes. Yeah. So, but that's, I think, a lot of the people who are really using X. And not just random accounts. Several posts in the trove that Nwachuku showed to The Guardian had received tens of thousands of impressions and came from premium blue check accounts. So, you know, premium accounts that people pay for on X with more than 500 followers and 5 million impressions over three months are eligible for revenue sharing under X's eligibility rules. So I think we should remember that too as part of this conversation, that these people aren't just exchanging these photos with each other for lulz and whatever weird sexual gratification they get out of it, but also there's potentially money on the other end for them, where X is directly paying some of these accounts for this content that is generating engagement.
Cool platform you've got here. Cool marketplace of ideas. More like marketplace for AI-generated, non-consensual, sexualized images. Yeah. And, you know, it's real high-quality art that they're producing here. There was one Christmas Day post that they reviewed where an account with over 93,000 followers presented side-by-side images of some random woman's butt with the caption, told Grok to make her butt even bigger and switch leopard print to USA print. Second pic, I just told it to add cum on her ass. LMAO. Ugh. That's what they're doing, right? You know, we aren't even getting into the environmental resources that Grok is consuming to make this kind of content. And it's like three quarters of what people are asking Grok to do. This is what Grok does now. A post on January 3rd, which The Guardian said was representative of dozens that they reviewed, captioned an apparent holiday photo of, again, some unknown woman. The caption was, at Grok, replace, give her a dental floss bikini. And within two minutes, Grok provided that image and satisfied the user's request. And that was just typical of a lot of these. Yeah. And what makes me sad is that the common sense advice that I'm meant to put here is, I don't think people should be posting their images on social media if this is going to be how they're used. I don't like to say stuff like that because, one, I think that we're at the point where you don't even need to post your images for this to happen. Like, you know, you could post a totally normal picture of yourself and then it gets taken and manipulated in this way. But two, I don't like that advice because I just think that women and people should be able to show up online and not have this happen to them. I understand, like, I feel some responsibility that this is the part of the conversation where I should be saying, I don't think people should be posting their pictures.
But I hate saying that, because people should be able to post their pictures on social media. People should be able to safely show up on these platforms. And the fact that they're not, I don't like putting the onus back on people and saying, don't do this totally commonplace, normal behavior for using technology in 2026, because some creep could use it to make a horrifying image of you. We'll talk more about this in a moment, but the fact is, these platforms aren't doing anything to meaningfully address this, and I hate that in the absence of them actually doing anything, what people keep saying is, don't post your pictures, don't post your pictures. And that just doesn't work. Yeah, it's a real conundrum, because I totally get what you're saying. It kind of feels like giving in, being like, oh, well, I guess we just won't post anything. But then at the same time, it's not safe to post there. Right. I do think it's worth pointing out that sometimes it's convenient for us to talk about social media platforms generally, because there's a lot of stuff that they all get wrong in common. But this does seem to be a particular problem of X. Certainly it's not exclusive to X, but the scale and magnitude and demonstrated lack of concern put X in a category by itself. That is absolutely correct. And really, it's nothing new for X. It's just the culmination of all the bad decision-making from Musk that I described earlier. X has just long been a platform where illegal child sexual abuse material flourishes. There is a fantastic piece by one of my favorite journalists, Samantha Cole at 404 Media, called Grok's AI Sexual Abuse Didn't Come Out of Nowhere, where Cole writes, "This is a culmination of years and years of rampant abuse on the platform."
"Reporting from the National Center for Missing and Exploited Children, the official organization social media platforms report to when they find instances of child abuse material, which then reports to relevant authorities, shows that Twitter, and eventually X, has become one of the leading hosts of CSAM every year for the last seven years. In 2019, the platform reported 45,726 instances of abuse to the CyberTipline. In 2020, it was 65,062. In 2024, it was 686,176. These numbers should be considered with the caveat that these platforms voluntarily report to the CyberTipline, and more reports also mean stronger moderation systems can catch more child sexual abuse material when it appears. But the scale of the problem is still apparent. Jack Dorsey's Twitter was a moderation clown show much of the time. But moderation on Elon Musk's X, especially against abusive imagery, is a total failure." So I think that speaks to exactly what you were just describing. I don't want to make it seem like X is the only problematic platform here, because it's definitely not, and we've talked quite a bit about Facebook and all the ways that they have knowingly abused women and children. However, the size and the scale and the scope of the problem is much worse on X. And I would argue that the way it's publicly handled by leadership is much worse on X. Like, y'all know one of my biggest life enemies is Adam Mosseri, who runs Instagram. I really just do not like him. He rubs me the wrong way and gives me the ick. But even Adam Mosseri is not using Instagram to joke about the ways that platform is not a safe space for women. Right. We don't see him posting stories laughing about it and posting sexual jokes about it. We do see that from Elon Musk. And that is different. Yes. What did he post the other day? It was like, when this story started breaking and people started complaining about it, he posted a toaster in a bikini, just totally making fun of the situation.
Yeah, I think that's totally part of the conversation. So you have Elon Musk joking about it. Per Indicator, between January 7th and January 8th, Musk shared at least 30 posts celebrating Grok and talking about how great Grok is while this has been going on. He has not appeared to express remorse himself for what's been going down. Right. So he's been joking about it, and then talking about how great Grok is and how happy he is with Grok, while this is happening against this backdrop. And I'm sorry, even the ghouls who run Facebook would have the sense to be like, let's not post jokes about it. Right. Let's not talk about how great of a job we're doing. They do a little bit of that, but not the way that Elon Musk is doing. Even those ghouls would have the insight not to do this. Yeah, you really can't conclude anything other than that he likes it, he thinks it's cool, and he sees this as Grok doing what it's supposed to do. So one question that I've seen presented a lot is, how is this allowed? Which is the question that I started this whole conversation with when I was thinking about it. Like, how is this allowed? Yes. And I think that really fits into the broader category of, what the hell? How is this allowed? How is this going on, and it's just an acceptable thing? Like, what the hell? So please, Bridget, how in the world is this allowed? I wish I had the answers. I'll tell you what I know. So this kind of content pretty blatantly violates X's own policies, which prohibit sharing illegal content such as child sexual abuse material. But as Wired points out, it could also violate Apple's App Store and Google's Play Store guidelines. Wired writes, "Apple and Google both explicitly ban apps containing child sexual abuse material, which is illegal to host and distribute in many countries. The tech giants also forbid apps that contain pornographic material or facilitate harassment."
The Apple App Store says it does not allow "overtly sexual or pornographic material," as well as "defamatory, discriminatory, or mean-spirited content," especially if the app is "likely to humiliate, intimidate, or harm a targeted individual or group." The Google Play Store bans apps that contain or promote content associated with "sexually predatory behavior" or that distribute non-consensual sexual content, as well as programs that contain or facilitate threats, harassment, or bullying. So when I read that, I say, oh, well, Grok, especially the standalone app, is clearly doing that, as the research that you just laid out demonstrated. So how can this app stay on these platforms? There also is some precedent here, because Google and Apple have both removed nudify apps from their stores because they were not allowed. However, the standalone Grok app is still available on both Google and Apple. Boy, if I was a developer of some other nudify app and my app had been removed and Grok was still there, I'd be kind of mad about that. It really seems like a double standard, right? Like, maybe those people who made those awful nudify apps should add a chat feature, and then they could get back on, because then they would basically be the same as X. Yes. So when I was doing research for this, I was like, well, what are contrarians saying? Because in my mind, I'm like, who could defend this? People are saying things like, well, it's not like Grok is only used to make this kind of content. From the research, it seems like it's a big part of the use, you know. Yeah, like 75 percent, almost. And the argument was, well, you know, that's like saying we should ban email because people could use email to distribute illegal content. And I thought, not really. No, it's not like that. Not really. You know, it's sort of like, the way if my grandmother had wheels, she'd be a bike.
I feel like we have a couple of axioms that rule the show. "If my grandmother had wheels, she would be a bike" is one of them. Yes. It so beautifully communicates the non-sequitur-ness of an argument. I don't think that's a word, but you know what I mean. I know what you mean. More after a quick break. Let's get right back into it. Okay, so the big question that I had going into this was: I understand it's against X's own policies, and it seems to be against Google and Apple's policies as well. But isn't this against the law? Like, genuinely, isn't this illegal behavior? I mean, my understanding was that, yeah, child sexual abuse material is illegal. It's illegal to make it. It's illegal to distribute it. It's illegal to have it, to possess it. So, like, what the hell? So, big warning caveat: neither of us are lawyers. However, the framing that makes sense to me is that this is, as you said, criminal behavior. In my opinion, this is a criminal enterprise, and Elon Musk is benefiting from that criminal enterprise. The same way that, with any other kind of illegal enterprise, even if I were not making the drugs myself, if I bought my house with illegal drug money, I would be complicit in an illegal enterprise per the government. Right.
And so we are reaching out to an attorney who has been on the show before, who specializes in cyber crime. Shout out to Carrie Goldberg. We love her, friend of the show. But again, I'm no lawyer, but this seems like a criminal enterprise to me, and I don't understand why Elon Musk is not facing consequences for this. Enough Abuse, which is an anti-child-sexual-abuse organization, has documented that 45 states in the United States have enacted laws that criminalize AI-generated or computer-edited child sexual abuse material, while five states, and notably my own jurisdiction, D.C., have not as of August 2025. And so if you are listening to this in most states in the country, if you were doing what these people are using X to do, you would have legal trouble. And if you were financially benefiting from that, you would have legal trouble. So I do not understand. Genuinely, I do not understand how this is not something where Elon Musk would face legal consequences, because he has been financially benefiting from the trade and creation of child sexual abuse material. I genuinely have no idea. I don't either. I have seen a lot of people talking about this and a lot of analogies being used. And I think analogies can often be helpful, but they can also be kind of dangerous in making things seem more similar than they are. One analogy in particular that I've seen is people comparing it to guns and being like, oh, well, if a gun shoots somebody, is it the manufacturer's responsibility? Setting aside the ethical responsibilities there, we have the Second Amendment in this country that puts guns in their own category. So I find that analogy to be really unhelpful in trying to make sense of how this is legal. Yeah, as far as I know, there's no right to bear child sexual abuse material in this country. As far as I know.
No, I think, you know, Madison wanted to put it in, but Jefferson was like, nah. Another question I've seen is, what about the Take It Down Act? So folks might remember that Donald Trump signed the Take It Down Act into law, which makes it illegal to knowingly host or share non-consensual sexualized images. And even though there has been some mounting legal conversation, the thing there is that companies do not have to respond until a victim reports it. Right. And so if nobody reports it, the company doesn't have to do anything. So if you're wondering, wasn't that signed into law? Why is that law not being triggered? That's why: because companies don't have to do anything until somebody speaks out and reports it. That's useful. Thank you. So what has the response from X been? I mean, we talked about Elon sort of joking about it. Has there been any other sort of response from whoever is running the show at X? Well, that was a deep-seated sigh. I am loath to say this. So, again, you might have seen headlines about how Grok is apologizing and taking responsibility for this. This was the statement that Grok generated. Quote, "I deeply regret an incident on December 28th, 2025, where I generated and shared an AI image of two young girls, estimated ages 12 and 16, in sexualized attire based on a user's prompt. It was a failure in safeguards, and I'm sorry for any harm caused." Well, again, Grok is not sentient. And setting it up as if a non-sentient piece of technology could take the blame for something: one, it makes me feel like I'm smoking crack. Two, it just lets the humans responsible off the hook, because the humans, as we said, are doing nothing. Elon Musk is laughing about it and celebrating how great Grok is and how great Grok's features are while this is going on. And Elon Musk notably has not shared any kind of remorse himself about what's happening, as far as I know, as of recording.
And so I just really reject anything that would even suggest that Grok can take responsibility, because Grok's not human. Grok's not sentient. Elon Musk is the person that runs Twitter. Elon Musk has expressed no remorse. Yeah, I love the efforts to keep the responsibility on the humans and not pretend that this piece of software, which was built by humans, run by humans, and is constantly being tweaked by humans, should somehow be the target of blame here. Right? The accountability goes to the humans who are running it and making money off of it. Yes, exactly. And mind you, we talked about this on the show a while back: Elon Musk is the same man who, in 2024, when Google's Gemini chatbot was producing biased outputs, remember, if you asked it to generate, I think, Alexander Hamilton, it would make him Black. Remember that whole conversation? Oh yeah, I remember that. Elon Musk tweeted about this nonstop. He said it was so alarming, the fact that the Gemini chatbot was coming up with these biased, kooky results. He said it was very alarming, and he posted about it nonstop. So when his own platform was doing something, I would argue, much worse, generating illegal child sexual abuse material, sexualized images of kids, no, nothing to say. No real problem with that. He did, I will say, he did say that anybody who uses Grok to create anything illegal will face consequences. Which they haven't. But he himself has been laughing about the way that his tool has been used and joking about it. As Kat puts it, the reality is that X has not taken this seriously, as one of Grok's user-generated posts might seem to suggest. Instead, Musk has encouraged, laughed at, and praised Grok for its ability to edit images of fully clothed people into bikinis. "Grok is awesome," he tweeted, while the AI was being used to undress women and children, make it look like they're crying, generate fake bruises and burn marks on their bodies,
and write things like "property of Little St. James Island," which is a reference to Jeffrey Epstein's private island and sex trafficking. When Kat herself reached out to X for comment, she got back their automated email that just says, "legacy media lies." And, I mean, I think that really says it all. As she put it, there's no reason to make X and Musk seem more concerned about this than they really are. They've known about this happening the entire time, and they made it even easier to inflict on victims. They are not investing in solutions. They are investing in making the problem worse. And I completely agree. Yeah. And I do want to point out that response she got from X, "legacy media lies," because I've seen this a lot lately, and it drives me up the damn wall: she reached out with a specific concern about a specific issue looking for a comment, and she just gets back vague boilerplate that kind of sort of invokes conspiracy and does not address it at all. I feel like we're seeing more of that, and I really hate it so much. And I also hate how, in a lot of cases, media outlets will let them get away with it. Also, not for nothing, Kat Tenbarge is not legacy media. Kat Tenbarge runs her own independent news outlet called Spitfire News. So not only is it a wild response to send, it don't even apply. It don't even make sense. No, it doesn't. That would be like if I reached out for a comment and they're like, NBC sucks. It's like, well, I don't work for NBC, so that's a non sequitur. Yeah, it has nothing to do with anything other than conspiracy smokescreen.
The truth is unknowable, and also probably somewhere in the middle. Yes. Okay, so we've been talking about this for a while. We're pretty worked up. What's going to happen next? We've got to get this under control. What's going to happen? At this point, not clear. If I'm being honest, I don't think the United States is going to do much of anything at all. We have not really heard a lot of anything from anybody with political power to do anything. Kat spoke to Dr. Mary Anne Franks, who drafted the template for laws against non-consensual distribution of intimate images, who said, "The FTC has made it clear they're fighting for Trump. It's actually never going to be used against the very players who are the worst in this system. X is going to continue to be one of the worst offenders and probably one of the pioneers of horrible ways to hurt women." So, I usually try to find optimism. I don't have a lot of optimism for how the United States is going to handle this. I don't think anything is going to happen. I totally agree. And I think it's also worth pointing out that the administration's support of X, based on Musk's support of them, is, I have to imagine, also related to Apple and Google's ongoing decision to keep X in their app stores. Because were they to kick it out, I think they would have very good reason to believe that the administration might come after them for it. So I think this is a good example of the ways that this lawless administration, which is run by personal grievance and loyalty, is changing our society and impacting major decisions that make it less safe for all of us, even without the government having to do anything. Just the idea that they might come after private companies making decisions that they don't like. Exactly. And yeah, it's just in the context of having had a whole national conversation about Pizzagate, QAnon.
The way that our government and tech leaders are coalescing around doing nothing, around supporting people creating child sexual abuse material, the conspiracy is happening in plain view. Right? Isn't this the thing? Yeah, it's a good point. It's right there. We have all these states passing age verification laws for social media to protect the children. Which research says aren't effective. But here's the thing, right in plain view, that is happening, that we're all seeing, we're all talking about it. And it's just crickets here in the United States from the government. Well, I will say our play cousins over in Europe are not happy, because earlier this week, a spokesperson for the European Commission criticized the sexually explicit non-consensual images that Grok is creating, calling them illegal and appalling and saying they have no place in Europe. Days later, the EU ordered X to retain all internal documents and data tied to Grok through 2026, extending an earlier directive to preserve evidence relevant to the Digital Services Act, even as no new formal probe had been announced. Regulators in the UK, India, and Malaysia have also signaled investigations into the platform. So the United States ain't going to do shit. However, it sounds like other countries are like, actually, yeah, we don't think it's cool that people can just create illegal sexualized content of minors, and we have some questions about it. Yeah, it's good that they're asking. They should get on that. And just as you and I were sitting down to record, we saw this news that X had announced that the ability to create images with Grok was going to become restricted to only users who pay for X Premium. We don't even need to discuss whether this is a fix to the problem. It is obviously not a fix to the problem.
It just means that the ability to make this kind of non-consensual sexualized content is going to be a premium service that people can pay for. So not only will it not address the problem, it just means that it's a money-making opportunity for X. And as we were sitting down to record, the British government was not impressed by this as a fix. They told The Guardian, "this move simply turns an AI feature that allows the creation of unlawful images into a premium service," with which I completely agree. And something that I was wondering about is, you know, I know that X has really been struggling in terms of making money and retaining users, for all the reasons that we talked about, advertising, all of that. These kinds of AI services are pricey. They're expensive. And so one of my questions is, I wonder if maybe X had been mulling a decision like this over for a while as a way to cut costs associated with Grok. This is just my theory. I don't have any evidence; this is just speculation. But I wonder if they had already been thinking about this, and they were like, oh, we can just say that we're doing it as a fix to this AI sexualized images problem. But really, we were thinking about doing this as a cost-cutting solution all along. I wondered the same thing. You know, we've been seeing a lot of news stories about the big AI players, including OpenAI, really struggling to get costs under control and make revenue.
And, I mean, I have to imagine that creating all of these images with Grok is costing X a fortune. So yeah, I wondered the same thing, whether that move was something they had maybe been thinking about already, and they thought that maybe they could, what did you say, liberate two birds with one stone? Oh, free two birds with one key. So, you know the expression "kill two birds with one stone"? I don't know where I heard it, but someone was like, oh, another way to put that is "free two birds with one key," and I thought, what a nice way of reframing it. Like, why am I killing birds here? Well, because you're hungry. So once you free them, you're freeing them to eat them and put them on my plate, ultimately. You know, I'm fired up. I'm angry. This is a tech podcast, but I don't see this as just a tech issue. I really think it is a cultural issue. People who have been reporting on deepfakes since before we had a word for them, folks like Samantha Cole, have pointed this out. Cole writes, "In 2018, less than a year after reporting the first story on deepfakes, I wrote about how it's a serious mistake to ignore the fact that non-consensual imagery, synthetic or not, is a societal sickness and not something companies can guardrail against into infinity. Users feed off of each other to create a sense that they are the kings of the universe, that they answer to no one. This logic is how you get incels and pickup artists. It's how you get deepfakes, a group of men who see no harm in treating women as mere images, and view making and spreading algorithmically weaponized revenge porn as a hobby as innocent and timeless as trading baseball cards. I wrote at the time, this is what's at the root of deepfakes, and the consequences of forgetting that are more dire than we can predict." And I think that really says it all to me. You know, I've been thinking a lot about the sort of societal and cultural rot at the heart of this issue.
And I keep coming back to it again and again and again. And, you know, this is not a fully fleshed-out idea, but I think the way that AI is being used just really illustrates that. When Google released Nano Banana Pro, their AI generator that makes very realistic-looking AI images, I couldn't stop noticing how so many of the example images people had used it to create were just conventionally hot women in yoga pants and workout attire, doing mirror selfies and things like that. If you take a look at the Nano Banana Pro subreddit, at least when it first started, it was just post after post after post of AI-generated women. And it reminds me that, the moment these image generators were first becoming mainstream, some men were talking about how they wanted to use them to create fake OnlyFans models. And they were really trafficking in this idea that doing that would be taking the power back from women, taking power from women and giving it back to men, right? Even though, side note, that's not really how OnlyFans works, but whatever. But I think that impulse was very telling. If women are able to build some economic agency by consensually creating their own sexual content, there's resentment for that, right? And so it's not just, I won't pay for that, I won't engage with that. It becomes, I will use AI so that she cannot have that agency at all. Or worse, I will use AI to undress women without their consent. You know, it's these prompts on X: make her, put her. It's all about controlling women and stripping us of whatever agency we do have in society. And I think Samantha Cole really nails it, that it's not just a tech issue. It is a culture issue. It is a societal rot issue. I think we can and should be holding companies accountable for creating guardrails against this kind of thing. But this is not a problem that you can guardrail out of being an issue unless we address that rot.
And something that you said earlier, Mike, this idea that platforms like X are going to be our town square where ideas can be debated and all of that. I don't believe that about X anymore. However, it is true that social media platforms in 2026 are increasingly just where people show up. If you want to be civically engaged, engaged in your democracy in robust ways, if you want to build a business, in 2026, showing up on social media is just part of that. Right. And so if the advice that we give to women is, don't show up on these platforms, what we're actually saying is, don't show up civically. Don't show up in your own democracy. It's not safe. And I think that's part of what's going on here. It's a way of erasing women from civic and public life. And, you know, Elon Musk is obsessed with talking about free speech. If I don't show up to the town square because somebody is going to yank my top off if I do, what about my speech? Is that not speech being suppressed? And I completely agree with Samantha Cole that all of this is really coming from the same place: a desire to live in a world where women exist primarily to be consumed, controlled, and stripped of whatever agency we have managed to claim over our own bodies and lives. And until we confront that reality, Samantha Cole is exactly right that no amount of technical safeguards alone, even though we should be advocating for them, is going to fix that problem. Got a story about an interesting thing in tech, or just want to say hi? You can reach us at hello at tangoti.com. You can also find transcripts for today's episode at tangoti.com. There Are No Girls on the Internet was created by me, Bridget Todd. It's a production of iHeartRadio and Unbossed Creative. Jonathan Strickland is our executive producer.
Tari Harrison is our producer and sound engineer. Michael Amato is our contributing producer. I'm your host, Bridget Todd. If you want to help us grow, rate and review us on Apple Podcasts. For more podcasts from iHeartRadio, check out the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.