OpenAI's Fog of War + Betting on Iran + Hard Fork Review of Slop
The episode covers OpenAI's controversial Pentagon deal and employee backlash, the rise of prediction markets during the Iran conflict, and the proliferation of AI-generated content targeting children on YouTube.
- AI companies face existential tension between government partnerships and employee values, with elite technical talent still wielding significant leverage
- Prediction markets create perverse incentives during wartime, potentially encouraging insider trading and profiteering from conflict
- AI-generated children's content lacks narrative structure and educational value, representing a new form of digital pollution targeting vulnerable audiences
- The threat of AI nationalization looms as these systems become strategically important to national security
- YouTube's recommendation algorithms consistently push low-quality AI content over educational material for children
"Most Americans just don't like AI very much. When you add into that mix, it's potentially also going to be used by your own government to spy against you or maybe kill you with a murder bot. Of course, Americans are going to say, well, this fricking sucks."
"We shouldn't have rushed to get this out on Friday. It looked opportunistic and sloppy."
"It's insane. This is leg around Trump are profiting off war and death."
"In one 15 minute session that we were scrolling, more than 40% of the videos were AI generated."
"If you are a leader at one of these companies and you know that at least until 2028, we are likely to have sort of the same administration in power. If you believe that the technology is rapidly accelerating such that a year or two years or three years from now, we might have something like a superhuman country of geniuses in a data center. What does that mean you should do?"
Most all-in-one HR systems are a patchwork of disconnected and manual tools. Rippling is totally automated. If you promote an employee, Rippling can automatically handle necessary updates, from payroll taxes and provisioning new app permissions to assigning required manager training. That's why Rippling is the number one rated human capital management suite on G2, TrustRadius, and Gartner. If you're ready to run the backbone of your business on one unified platform, head to rippling.com/hardfork and sign up today. That's R-I-P-P-L-I-N-G.com/hardfork to sign up.
0:00
Casey, something horrible has happened.
0:34
What's that?
0:35
My wife has fallen in love with an AI.

Which AI?

So for background, you know, my wife is not, like, a very techy person. She's not an early adopter. She works in city government.
0:36
She's like a regular adopter. Like, when it gets good, she's interested. Yes, but don't talk to her before then.
0:48
Exactly. And it just got good enough for her to take an interest. And so yesterday she discovered Claude Code and started using it to do some stuff at work. And now it's all she wants to talk about. We were, like, out at a party last night and she's like, I just can't stop thinking about my coding. She had a dream about vibe coding last night.
0:54
Really?
1:13
And maybe all of this is sort of some, like, karmic, you know, revenge for the Bing Sydney episode, but I do feel like I'm getting a taste of my own medicine here.
1:14
I mean, to me, this seems like your dream come true. Like, what does any man want more than his wife taking an interest in his hobby? This, like, could be one of the best things that's happened to you in a long time.

1:23
It's true. It's true. I think we've always benefited in our marriage from the fact that.
1:34
Are you speaking about our marriage or your marriage to your wife?

1:39
My marriage to my wife. That we're sort of, like, interested in different things and we can kind of cross-pollinate? And so I guess I want to ask you, as someone who's in a relationship with someone who works in AI, like, how do you stop talking about it? Do you have, like, set hours where you're like, we're not going to talk about AI for this next hour?
1:40
Here's what I'll say. If I ever figure that out, you'll be the first to know.
1:58
Okay, thank you.
2:01
In my house, the few respites we get from AI during the week I would say would involve Friday night episodes of RuPaul's Drag Race. That's a good solid one hour of not talking about AI.
2:04
Okay.
2:16
And yeah, outside of that, we're really monitoring the situation, Kevin. We are fully locked in.
2:17
I'm Kevin Roose, a tech columnist at the New York Times.
2:29
I'm Casey Newton from Platformer.
2:31
And this is Hard Fork.
2:33
This week, OpenAI scrambles to contain the fallout from its deal with the Pentagon. Then how prediction markets have become one of the most controversial parts of the US Attack on Iran. And finally, it's a hard fork review of Slop for Children. You'll never guess what Peppa Pig is doing now.
2:34
Well, Casey, we are now in week two of this incredible high-stakes drama that's been playing out between the Pentagon and America's leading AI companies. There's been a lot going on. We now have more clarity on why the deal between Anthropic and the Pentagon fell apart. We also know how this Anthropic supply chain risk designation is actually going into effect and impacting the way that government agencies are responding. And we have been learning this week about how OpenAI's deal with the Pentagon is shaping up. So lots to discuss here, but first we should make our disclosures. I work at the New York Times, which is suing OpenAI, Microsoft, and Perplexity over alleged copyright violations.
2:59
And my fiancé works at Anthropic.
3:37
Okay, let's start with OpenAI, because they are sort of the late arrival into this story, but in some ways the most dramatic. Since Sam Altman announced last Friday that OpenAI had arrived at an agreement with the Pentagon, we have learned a little bit more about that agreement. As a reminder, according to Sam Altman, this agreement did include some prohibitions on domestic mass surveillance and autonomous weapons systems, basically the same two red lines that Anthropic had set out that were causing them so much trouble with the Pentagon. And I think it's fair to say, like, this provoked one of the biggest backlashes in that company's history.
3:39
It really did. We've seen it across social media. Many sort of top upvoted posts on OpenAI-related subreddits have been condemning this move. OpenAI has been scrambling to try to rebuild trust. But at the end of the day, Kevin, I think both the Pentagon and OpenAI are saying to the public, you're just going to have to trust us. And the public is saying, well, we don't.
4:16
Right. So there's been a lot of people canceling their ChatGPT subscriptions and switching over to Claude as a result of all of this, people who don't agree with the Trump administration or the stance that the Pentagon has taken here. And presumably because they're seeing some pain in the cancellations department, as well as just a general feeling that this narrative is not going well for them, Sam Altman has been doing some damage control. So on Saturday he hopped on X to talk about this and answer questions about the Pentagon deal along with two other employees. And these questions were sort of the kinds of things you'd expect. You know, people asking, what did you guys agree to that Anthropic didn't? Where are your red lines? Who's going to be making the kinds of hard decisions during something like a war about how these models can and can't be used? What about this domestic mass surveillance thing? So I think he answered some of these questions, but really the thing that they did was also to release the language of this contract that had been in dispute, that had been the subject of so much speculation.
4:43
Well, they released what they called the relevant portion of the contract, but then we would see later commentary from experts in government procurement that said, essentially, look, until we see the entire contract, it's just very difficult for us to take at face value the idea that this is the only relevant language.
5:46
Right. So they did not release the whole contract, but they did release some relevant language from this contract with the Pentagon. Then, in a blog post on Monday, Sam admitted that he made a mistake. He said, we shouldn't have rushed to get this out on Friday. He also added that it looked opportunistic and sloppy.
6:03
Slopportunistic, to coin a phrase.
6:18
Yes, he was slopportunistic. And he announced that OpenAI was going to amend its deal with the Pentagon to explicitly rule out the use of OpenAI's tools for domestic surveillance of US persons and nationals. And it includes the language, quote, the department understands this limitation to prohibit deliberate tracking, surveillance or monitoring of US persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information. I found this all slightly confusing. Casey, do you understand what OpenAI has said and the various evolutions of its position on this?
6:22
Well, I think the key takeaway here is that they are saying that they have put in some amended language that will prohibit certain uses of their systems by the government. So, for example, they're going to prevent the government from using commercial data that they sort of acquire legally and running that through GPT models for domestic surveillance. I just want to say, though, that there is always a high risk here for what I would call Jedi mind tricks, Kevin, from the government, because we have seen Democratic and Republican presidents do this, right? Sort of going to the absolute limit of what the law will allow when it comes to surveillance of Americans. And a way that they'll get around that is by saying, well, we're not doing surveillance, Kevin. We're doing some intelligence gathering. Right. And so, as annoying as it is to fixate on the semantics here, I'm telling you that whether or not you personally are surveilled will come down to semantics. Right. And so that's why we're digging in the way we are.
6:58
Okay. So still lots of questions about some of the details here, and I think there is a lot of doubt and concern among some employees of OpenAI that this actually did end up in a place that they're comfortable with.
7:59
Boy, is there. Some of that employee discontent spilled over onto X, where you had some employees saying essentially that they didn't trust their leadership either. An employee named Leo Gao called the contract language window dressing and pointed out that it still seems to give the Pentagon control over when to deploy autonomous weapons, and just doesn't do much to address some of the other loopholes. And then maybe more dramatically, Kevin, on Tuesday, Max Schwarzer, who was the post-training lead for OpenAI, a vice president of research at the company, announced that he was leaving. And in his X post, while he was pretty vague, he suggested that this was an important time and that he had come to really respect Anthropic's values. And so he said he's going over to work there.
8:11
Yeah. So what's your take on how the damage control is going for OpenAI? Do you think they have warded off the most heated criticism, or are people still really mad?
8:56
I do not think that they have stemmed the tide. They put a lot of effort into changing the narrative here. When I saw that they were doing that X AMA, that they had put up a blog post, that they were quoting at least some of the contract language, I thought, these guys are really going for it. That also told me that they were really scared. But here's the thing to remember, Kevin. Most Americans just don't like AI very much. They didn't like it in the first place. They didn't like it for all the normal reasons of, well, my media feed is filling up with slop and my manager is telling me I have to use it every day or I'm going to get fired. When you add into that mix, it's potentially also going to be used by your own government to spy against you or maybe kill you with a murder bot. Of course, Americans are going to say, well, this fricking sucks. Right? So I think this was kind of the strategic miscalculation that Sam Altman made, was that, at least according to him, he thought he was going to get into this dispute and sort of be able to de-escalate it and sort of come in as the white knight and save the AI industry from the overreach of the US government. And what he found out instead is they're kind of holding the bag of all of the discontent that the Pentagon whipped up with this forced policy change.
9:05
Yeah, it's really interesting to me, because I think my assumption had been that we were sort of over the era of, like, worker empowerment in Silicon Valley, right? Like, years ago, sort of pre-Covid, we had all these, like, Google walkouts and all these employee protests over these military contracts. And I think a lot of CEOs and leaders at these companies sort of said, we're not doing that again. Like, we're not going to give our employees veto power over the deals that we make or the contracts we sign. And what is going on at OpenAI right now suggests to me that at least for them, in their specific case, where you do have, you know, this staff of elite technical talent that are not easily replaceable, there aren't that many people who know how to, like, build and train these models, you actually do need to keep them happy. And so those people, maybe only those people, have significant leverage still.
10:12
Yeah, let me make a sort of sweeping generalization, right? Like, I think there's sort of, like, two major camps at OpenAI. One are of the camp that have sort of been there for, you know, let's say three-plus years, that are the real experts that you just mentioned, that have this kind of critical knowledge for how to build next-generation frontier systems that almost nobody else in the world has. And those people tend to just really care a lot about how the technology is used. These are people who joined OpenAI in part because it was a nonprofit, right? And, like, there is a solid core of those folks who are still working there. And then there's a group at OpenAI that I'm just going to call the Meta people. Like, the people that came from Meta a little bit more recently, that are, you know, maybe a little bit more flexible in what they're willing to see their company do. And I don't think that they're going to raise a big stink about this. The problem, if you're OpenAI leadership, is you actually need that original core. Right. If you're going to build a GPT-6 and 7 that is going to blow everybody's minds, those are the people you're going to need. And so, yes, almost everything that we have seen over the past few days, as they've tried to do damage control, is aimed at those people.
10:58
Okay, so that's a little bit of the drama going on at OpenAI. What is happening at Anthropic?
12:03
Printing money, in two words, I would say. Well, I mean, you know, I wrote this in my newsletter this week, Kevin, but has an American technology company ever had such a good week and such a bad week at the same time?
12:08
Explain.
12:20
Well, so on the bad side, obviously, they're in a very heated fight with the Pentagon. That continues. By the way, it seems like there is still some risk that perhaps the President will try to invoke the Defense Production Act to try to compel Anthropic to make the version of Claude that it does not want to make, that would sort of do its bidding. And it seems also that this supply chain risk designation is now official. We learned on Thursday that the Pentagon sent a formal letter to Anthropic. So if nothing else, this is going to result in a long and costly legal battle as Anthropic tries to ensure that American companies can still use it for non-military purposes. Right. So there is actually an existential threat to the company that is buried somewhere inside there, and it is by no means over.
12:20
Right. But on the good side.

13:04
On the good side, Bloomberg reported this week that Anthropic is on track to hit $20 billion in annualized revenue. At the start of 2025, Kevin, they were on pace to earn about $1 billion in annualized revenue. So this company has 20x'd over the past year. They were on pace to make about $9 billion by the end of 2025, so it has doubled in barely over two months, which speaks to the rise of Claude Code. Right. And the overwhelming adoption of Claude. So in that respect, this really has become maybe the fastest-growing American technology company of all time.
13:06
Yeah. And, like, what's so strange about this sort of dual quantum state right now of Anthropic is, like, at the same time that they are printing money and people are signing up for Claude and they're switching from ChatGPT and, like, things appear to be going well for them, they are also being pulled out of the federal government. Right, forcibly. There was some reporting this week by Reuters that the US State Department has sort of started to comply with this order from President Trump to stop using Anthropic models. They have switched the model powering their sort of in-house State Department chatbot from Anthropic's models to OpenAI's, according to this memo seen by Reuters. And furthermore, this Reuters report said that the State Department is going back to GPT-4.1. Now, if you have not been tracking all of the model names and numbers as closely as we have, that is several generations ago. That's, like, an early 2025 model. And basically what that means is that the average college freshman with a ChatGPT subscription now has access to substantially better AI tools than the Department of State.
13:43
It's not great for a lot of reasons, Kevin. And one of them, as the blog Lawfare covered this week, is that there appears to be no statutory authority for the President to do what he did. There is not a statute that lets the President just sort of declare that federal agencies cannot use individual software. But because this is just the way the Trump administration works, everyone has just decided to defer to the President.
14:49
Yeah, I want to ask you about this other sort of interesting piece of OpenAI's response over the last week, which is that Sam Altman has said multiple times that he wants the Pentagon to extend the same deal to Anthropic that it extended to OpenAI. Do you think that is sincere? What is going on here? Why is Sam Altman saying, hey, if you're making these terms available to us, you should also make them available to other AI companies?
15:10
I think that is the part of Sam that appears to be sincere: he wants to de-escalate this conflict. He does not want the United States government to come in and nationalize the AI companies, at least not right now. Right. And so maybe if OpenAI could reach some sort of agreement that would provide at least some protections for Americans, and other AI companies would sign onto it, that would just sort of release the pressure on the industry overall. Now, of course, at the same time, it would buy him a lot of cover, and all of a sudden people wouldn't be mounting these quit-ChatGPT campaigns, because Sam could be on X saying, well, you know, Claude is doing the same thing.
15:35
How big a deal do you think this consumer opposition is? I mean, you know, I am somewhat jaded on this point, because I can't count the number of times that people have said, you know, oh, we're all going to cancel our subscriptions to this thing, or we're going to delete Uber, or we're going to, you know, quit Facebook in protest. And, like, it never really seems to have much of an impact. But, like, do you think in this case that enough people are mad about this at the consumer level that it could actually impact their business?
16:10
Not really. I think you're exactly right. I think that usually these things just tend to blow over in a few days, and I'm sure that OpenAI is counting on that. At the same time, though, Kevin, I think back to the lesson that Meta learned, which is that as it had its own series of controversies, by and large, people did not quit Facebook, they did not quit Instagram. But you know what they did do? Just kind of start to hate Meta as a company and develop really low trust in that company, and that winds up hurting Meta in all sorts of ways. And the particular way, by the way, that I think this is gonna hurt OpenAI is they're gearing up to go out and build a lot of data centers around this country, and there's already enormous backlash. And we are seeing it, right, we're starting to see it creep into our politics. And so if they are not able to sort of reverse the narrative and convince people that AI is going to have, like, hugely positive outcomes in their lives, I think you're going to see the data center opposition ramp up as a proxy for people's just kind of distrust of that company in general.
16:39
Right. It's the visible, physical symbol of all of this. And for most people, the only one that is, like, anywhere near them. And so I think you're right. It could turn into a political problem for them, even if people aren't canceling their ChatGPT subscriptions en masse.
17:37
Yeah.
17:50
I want to ask you about something else that I've been thinking a lot about this week, which is this idea that you mentioned of nationalization. There's been a debate happening on social media about this idea that if we are headed to a world with very powerful AI systems in it, as Dario Amodei calls it, a country of geniuses in a data center, that eventually that will just not be allowed to happen inside a private corporation. That the US government, whether a year or two years or five years from now, at some point will step in and say, hey, you guys built this really cool thing that's really useful and has all these, like, important geopolitical and national security implications. We're gonna just take that now. And you work for us now. And I'm curious what you make of that as a possibility, because some people who I consider quite serious and credible have been talking about this possibility of nationalization for several years now.
17:50
Yeah. If you go to the sort of nerdy AI conferences that Kevin and I do, this comes up a lot at the tabletop role-playing games that people do during lunch. Right. Is that at some point, a government of one or more countries kind of steps in and takes over the AI lab. I understand in this moment that that feels like a kind of sci-fi scenario. Right. Like, most of the time when you're using ChatGPT, you probably don't think, this is a dangerous superweapon and we need to ensure that, you know, this is being controlled by the President. At the same time, we are now at war with Iran. We know that these systems are embedded in the, like, command and control operations of the military. And so to some extent, they are already becoming weapons. Right. So if you say to me, do I think that once these systems become 3, 4, 5, 10 times more powerful, the government will want to take an interest in them and potentially oversee their development and deployment, I absolutely believe that will happen. I see no reason why it wouldn't. And unfortunately, how that goes, I think, depends a lot on the quality of the government that is overseeing that AI.
18:46
Right.
19:55
And, like, what do they want to do? Do they want to use it to create opportunity and safety and democracy for all? Or do they want to, you know, mount an authoritarian takeover of the globe?
19:55
So if you are a leader at one of these companies, and you know that, you know, at least until 2028, we are likely to have sort of the same administration in power, and if you believe that the technology is rapidly accelerating such that a year or two years or three years from now, we might have something like a superhuman country of geniuses in a data center, what does that mean you should do? I mean, one thing that I've been thinking about is, like, should these companies be doing deals with the government at all? Right. If the lesson of the past couple of weeks is that the federal government is not a trustworthy counterparty in these negotiations and is going to insist on total control and obedience, or else they're going to try to nuke your company, like, I think a very rational response from these AI companies will be, like, well, we're just not going to make any more deals with you. You're going to have to use some open source models for your State Department and your military and your treasury, because it's just too risky for us, and you can't be trusted with it.
20:05
I could see why that may seem somewhat rational to them, but, like, I don't think that that is the tack that they're going to take. I mean, you know, even this week, after everything that has happened with Anthropic, Dario Amodei is still out there saying, we were very close to an agreement with the Pentagon. We liked working with the military, we want to work with the military again. Right. So I think that's very important to note. Like, Dario did not, like, throw up his middle fingers, like, on his way out the door. He is still trying to reach some sort of agreement. And I think in part that likely is to avoid the exact sort of scenario that you're describing. Right. You kind of want to, like, keep the tigers at bay for just a little while longer, at least while you maybe, like, think through the rest of that scenario, which is admittedly a very difficult one.
21:01
Yeah, I've been rereading The Making of the Atomic Bomb this week, which is Dario Amodei's favorite book, and he used to give it to all Anthropic employees, and there's still, like, a bunch of copies at their headquarters. It's sort of the company book as far as their mission goes, and they see a lot of parallels between what they're building and the Manhattan Project. And so I went back and I've been rereading it. And the piece that struck me from that experience was, just right before the bombs were dropped in 1945, there was this point where the scientists got really worried about how their creation was going to be used. And a number of them from the Manhattan Project sort of created these petitions and reports and tried to get them to the government and say, like, hey, could you guys, like, not use this against a city, at least as, like, a first-line act of war? And the military and the government sort of, like, pretended to hear them out, and then they just went ahead and bombed Japan anyway. And there was sort of this moment where it was like, we hear you. You're the scientists. You're the geniuses who made this all work. But now you're playing on our turf. And so we're going to control the technology from here. And thank you for your input. And I think the comparison between the Manhattan Project and the AI industry is somewhat overstated. And I think it breaks down in some key ways. One of which is, like, that was a government project. You know, the Manhattan Project was paid for by the government. These were government employees. What we're talking about now are private companies that have been developing this thing outside the public sector.
So I think there's some important differences, but I do worry that we are headed toward a moment where this stuff just gets so useful to governments and militaries and confers such a decisive advantage to the countries that control it that the US Government, no matter kind of who is in power, is just going to say like, this thing is too important to be left to the private sector.
21:42
Well, I mean, keep in mind that one of the original ideas for OpenAI was that it should be a government-funded project. But Sam Altman and his co-founders just came to the conclusion, correctly, by the way, that no government would give them the amount of money they needed to build this technology. Right. And, you know, they just sort of quickly came to the conclusion that it was just going to have to be a private enterprise. But, you know, going back to the earliest days, there was thinking among the people that created this technology that the government was going to take an interest in it eventually. Another reason, though, Kevin, why I find the current situation so vexing is that you and I both covered President Biden's executive order on AI, which I personally felt like was a pretty gentle way of attempting to regulate the industry. It was sort of like, you know, inform us about your safety testing, please, when you test these new models, and it sort of, you know, told federal agencies to get ready for this technology. And the howls of protest on the right said, how dare, you know, this administration come in and try to put these fetters on capitalism. We are going to lose to China because of this sort of nanny-state behavior. And then to see those same people come to power and now say, we are going to tell you exactly how you are going to build your models, what they're going to do for the military, or else we will destroy you. The whiplash is insane.
23:27
Yeah. We didn't like that government trying to control the tech industry, but this government trying to control the tech industry? That's just business as usual. That's fine. Right? So I guess my worry, zooming out from all of this stuff that's been going on for the past two weeks, is that we are sort of living through, like, an early dress rehearsal for what something like nationalization of the AI companies could look and feel like. I don't think it's going to be as sort of cut and dried as, like, it was during World War II, where, like, the government showed up to a bunch of, like, steel plants and was like, hey, we run these now. I think it's going to be kind of this soft nationalization, like we've been seeing over the past week, where it's, like, a little pressure to build your models differently. Oh, maybe could you remove some of those safeguards? Oh, maybe this is actually so strategically important that we need to be the people putting the clauses in the constitution of Claude, or whatever, that dictate how it will behave in these high-stakes situations. And I think that is a more likely direction. But I would not take full sort of, like, brute-force nationalization off the table entirely. I think there's a decent chance that something like that happens.
24:44
Well, maybe we should set up a prediction market for it.
25:48
Speaking of prediction markets, when we come back we'll talk about how prediction markets have made it to war.
25:52
So predictable.
25:58
Most all-in-one HR systems are a patchwork of disconnected and manual tools. Rippling is totally automated. If you promote an employee, Rippling can automatically handle necessary updates, from payroll taxes and provisioning new app permissions to assigning required manager training. That's why Rippling is the number one rated human capital management suite on G2, TrustRadius, and Gartner. If you're ready to run the backbone of your business on one unified platform, head to rippling.com/hardfork and sign up today. That's R-I-P-P-L-I-N-G.com/hardfork to sign up.
26:22
The browser is your business's first line of defense against online threats. Keep your employees and data safe with Chrome Enterprise, the most trusted enterprise browser. Create controls that enforce company policies, like rules that prevent employees from printing, pasting, or sharing company data. Access in-depth reports that show the apps and extensions your employees are using and where data is going and coming from, and prevent phishing and malware attacks from reaching your employees with automatic, proactive protection. Visit chromeenterprise.google to learn more.
26:56
Taxes can feel confusing when you're trying to manage everything by yourself, but with Intuit TurboTax, help is closer than you think. Stop by one of their new state-of-the-art store locations. There you can ask a tax expert to sync your accounts via the TurboTax app and automatically import your documents. With TurboTax Full Service, your expert looks for every possible deduction while keeping you updated every step of the way in the app. Eliminate the guesswork and file your taxes with a TurboTax expert today. Visit TurboTax.com.

Okay, Casey, so the other big news from the past week is that the United States is now at war with Iran. And one angle that really has been sticking out to me about this is the role that prediction markets are playing in this conflict. Because I think that is something that we truly have not seen before.
27:28
Yeah, it seems like every new war brings along some grim new technology. And I would say that prediction markets are maybe grim technology number one for this conflict in Iran.
28:16
Yes, it's a grim technology already, even absent the war. And now, with the war, it has become even grimmer. And we've talked about prediction markets on the show. We talked about them way back in 2023, when they were sort of this new thing in a legal gray area that wasn't really being done at any scale yet. It was sort of an interesting idea. Now, of course, you cannot walk down a street in a major American city without seeing one, and probably multiple, ads for prediction markets like Kalshi and Polymarket.
28:28
Yeah, this sort of gambling mania that has taken over all media and advertising, you know, from DraftKings to FanDuel has now extended even further into these prediction markets.
28:57
So both Polymarket and Kalshi, the two leading prediction market platforms, took a lot of heat this week over bets they were allowing their users to make on questions related to Iran. So Kalshi, which is kind of the more regulated, US-based prediction markets company, does not allow bets on war or assassination, but it did allow the question "Ali Khamenei out as Supreme Leader?" Basically as a kind of careful proxy for betting on the outcome of a war or a strike on Iran. Yeah.
29:10
And "out," I suppose, could have many meanings. You know, perhaps there would be a sort of gentle democratic revolution in Iran. But I'm going to assume that most of the people who were wagering on that one assumed that he was going to be killed in the war.
29:41
Yeah. So people got really mad at Kalshi for allowing these bets on the fate of the Iranian leader. They also got mad when Kalshi voided this market and said that it was going to reimburse anyone who may have lost money on it, basically making sure everyone ends up in the black. But people who were supposed to make a bunch of money because they correctly predicted the death of Khamenei were mad that they didn't get paid out their expected winnings. So just a big cluster all around.
29:56
And I just want to say: if you are one of the traders who did not get your expected winnings from the death of the Ayatollah, I don't care, and it doesn't matter.
30:23
So Polymarket, the other, less regulated, offshore, crypto-based prediction market, was even more permissive. They allowed people to bet on the dates of strikes on Iran and other details related to the war in Iran.
30:35
Their policy was really like, imagine the worst thing you could do on our platform. You can do that.
30:49
Actually, they did draw a line when it came down to markets that allowed users to bet on the likelihood of nuclear detonations by specific dates. So sorry to anyone who is trying to cash in on nuclear war.
30:55
These woke liberals who won't let me bet on nuclear explosions need to go, Kevin.
31:08
Okay, so no one was happy about this. Senator Chris Murphy posted, quote, "It's insane. The people around Trump are profiting off war and death," and also said that he was introducing legislation to ban this. And there are also a bunch of people looking into whether any of this has been done via insider trading. Basically: do you have people in the military, or close to the decision makers in this conflict, placing bets once they have nonpublic information about what is going to be happening?
31:13
Yeah, and I think it speaks to why allowing prediction markets to take bets around war and death is so corrosive and bad, Kevin. Because not only is it just kind of grim, like, how do we live in a society where gambling on war and death has become a form of entertainment? But also, you're creating incentives for the worst things in the world to happen, which doesn't seem logical to me.
31:44
Well, and it's not even a theoretical harm here. Recently, Israel arrested a number of people who were accused of using classified information to bet on military operations on Polymarket. So this is already starting to happen. And I think this is why people like Senator Chris Murphy are so alarmed about it. Not just because it's sort of gross and aesthetically offensive to have people betting on wars.
32:14
Although it is, but although it is.
32:39
But also because it could create direct incentives, if you're a member of the military and your commander gives you an order to go do an airstrike on an Iranian compound, to log onto your phone, head over to one of the prediction market platforms, and say, you know what, I could make a couple grand off this.
32:42
Yeah, that's your little Kalshi bonus. You know, this is not theoretical at all, Kevin. In fact, your colleague Amy Fan at the Times wrote that it is relatively uncommon for someone to bet a significant sum of money that a US strike will happen within the next day. But last Friday, more than 150 accounts placed hundreds of bets of at least $1,000 each, correctly predicting that there would be an American airstrike on Iran by Saturday.
32:58
Yeah, so I think one of the interesting things here is like, I am not like a blanket opponent of prediction markets. I sort of bought some of the kind of theoretical arguments for why something like a prediction market could, for example, outperform political polls. Because it would incentivize people to like come up with really good polling data and like use that to trade on and you could end up with kind of a better picture of a given election.
33:22
Or people will like, say what they really think because their money is at stake and they're not just trying to like impress a pollster.
33:48
Yes. And you've actually had some of the people in charge of these prediction markets talking about how insider trading can be good, because it can get the best information to the markets as quickly as possible and give people an unfiltered understanding of what the real insiders are thinking. Now, of course, officially you are not supposed to be able to insider trade on these platforms. Right. They all have policies against it. Kalshi says they've investigated people, and it is actually illegal, per the CFTC, which is their main regulator, to place bets using inside information. But there are a couple of problems with this. One is that the CFTC is a tiny agency. It doesn't have a huge team of enforcers going out to investigate what I assume must be hundreds or thousands of trades using inside information on these platforms every day. It's also not clear what is public information and what is private information. You know, there are certain types of information in the stock market that are considered material nonpublic information that it is illegal to trade on. But it is also legal to, you know, fly a drone over an oil facility to see how production is going, or to park outside a store and count the foot traffic going in and out and use that to estimate how well sales must be going.
33:54
I find it suspicious how much you know about the insider trading rules. I have to say, I didn't know you had this much facility with the law here.
35:11
I'm calling my lawyer. But of course, this is part of the appeal of prediction markets in general is that they incentivize people with good information to Trade on that information. Yes.
35:18
And if you allow people to wager on almost anything, how are you ever possibly going to police the entire platform to understand who is insider trading and who isn't?
35:27
Yes. So I think in this specific case of war, I think it's very dangerous for some of the reasons that we've talked about. Not only do you have military officers and service people disclosing classified information in some cases to sort of make a little extra for themselves, but you also have just this incredibly strange war profiteering innovation where, like, you can just go on one of these platforms and try to make a bunch of money from something that involves a lot of devastation and destruction.
35:36
You know, the other thing that comes to mind for me, Kevin, is that, you know, as you say, the prediction market backers, their argument is like, this just helps us understand the world better. Right. This is a new kind of information that helps us see more clearly. And yet, as I look across all of the trades you just described, I don't understand really what I was supposed to see more clearly. Right. Like maybe you get a, you know, a brief heads up about something horrible that is about to happen. Maybe that's, you know, useful in at least some circumstances. But for the most part, I just don't feel like we actually have a much better understanding of the world because all of these bets are happening. Yeah.
36:08
And I think in this specific case that's especially true because if you actually look at the markets that were being traded before this strike on Iran, the conventional wisdom of the crowd was that this was not going to happen. It was a very low probability, I think something like 17% probability on one of these platforms an hour before the strikes. So these markets aren't actually distributing the best possible information at all times. They're just kind of like aggregating vibes until, like, someone with inside information shows up and like, makes a fortune.
36:45
Well, I think that's exactly it. It isn't as if these have been adopted by the mainstream and everybody's placing these sort of casual bets. And now we have this beautiful, perfect understanding of the world. What we have, as you say, is a bunch of vibes plus some insider trading. And it just doesn't actually seem that useful to me in practice for most things.
37:18
Yeah. I want to try to steelman the defense of prediction markets here and see what you make of it. So I think someone who believes that these prediction markets are good in the aggregate might say something like the following: people have been betting on war forever. They bet on the stock prices of defense companies, they bet on things like oil prices. That is all legal. We consider that part of the normal markets. Those things all fluctuate when you have a war break out. How is this any different? Your response?
37:34
Well, I think that it is actually really meaningful that these are indirect ways of betting on war. Right. It seems very unlikely to me that if I like, you know, buy oil stocks, assuming that they are going to go up, that I'm creating an incentive for somebody to assassinate the supreme leader of Iran.
38:08
But wasn't this the whole conspiracy theory about the war in Iraq? Was that it was just motivated by, like, Dick Cheney owning a bunch of stock in Halliburton?
38:27
Well, I mean, yes, that was the conspiracy theory. I don't know that that was what was actually driving it. And I think that, as with most wars, at least at that time, there were a number of interrelated factors at play, and maybe oil was one of them. But my point here is just that when you have the betting at some meaningful remove from the action, it feels better to me. It doesn't create the same horribly grim incentives that this particular approach does.
38:33
Right. I think the difference for me is the directness that you mentioned. And one thing that came up over and over again when I was talking to people about prediction markets a couple of years ago for a story is that the assassination markets get really dark. Because if you have something like, will this world leader be "removed from power" before a certain date, that could actually create a bounty on that person, where someone might go out and say, hey, if I want to make money on this, I need to kill this person before this day.
39:02
And you know what is going to be the first thing that actually takes action on that, Kevin? OpenClaw. Mark my words. One of these bots plugged into a Mac Mini is going to see a prediction market for the assassination of a world leader, and it's going to say, well, I have some ideas about that.
39:33
So I think most people agree that the assassination prediction market is out of bounds and a bad idea for lots of reasons. But there is still a lot of gray area around these questions about conflict and war and politics. And I think the risk here is that these prediction markets have gotten so popular so quickly, with so little regulatory oversight, that it is just kind of legal to do a bunch of stuff on them that it's not legal to do in the regular stock market.
39:48
Yeah, well, so you mentioned that some lawmakers have talked about introducing legislation. My experience is that that kind of legislation typically doesn't go anywhere. What, if anything, do we know about what is going to happen as this war continues to unfold in Iran when it comes to these prediction markets?
40:18
I mean, I think the Trump administration is very unlikely to do anything to stop the growth of prediction markets. We've already seen them signal, via the regulatory actions they've dropped against Polymarket, that they are not going to take a firm line against these prediction markets. We've also seen these companies adding members of the Trump family to their advisory boards. So I think all of these prediction markets are becoming entangled with the administration in ways that are going to make it very hard to do anything about them. But I certainly expect Democratic lawmakers to stand up and say, what the hell are we enabling here? Why are we allowing people to bet on the assassination of world leaders or the outcomes of a war in Iran? This all feels incredibly fraught to me.
40:36
My fear is that we're in a sort of time race where, if Democrats were able to somehow advance some legislation, maybe they win some seats in the midterms, maybe they retake the presidency, maybe sometime within the next few years they could meaningfully rein these prediction markets in. I think, though, that if they continue to grow, they will become a massive entrenched interest group, like the crypto world, and they will then lobby to ensure that Democrats and Republicans both feel like they have a vested interest in these things sticking around. So my fear is that if we're going to do anything about some of these excesses we've been talking about today, it needs to happen soon, or platforms like Kalshi and Polymarket might just have too much money for that to happen.
41:20
Yeah, I have a proposed rule for these prediction markets, which is that you should have to go to a physical place, like you do for a casino. I think putting this stuff on people's phones makes it way too easy. If you want to bet on the war in Iran, you should have to go to some seedy OTB betting parlor to do it. You should have to put in some effort. It should not be as easy as whipping out your phone.
42:04
All right, well, it's very interesting, Kevin. I predict we are not going to try that.
42:28
I also predict we're not going to try that, but it's a good idea. People should listen to me.
42:33
When we come back. Slop, collaborate and listen. YouTube is back with a disturbing new invention.
42:42
Most all-in-one HR systems are a patchwork of disconnected and manual tools. Rippling is totally automated. If you promote an employee, Rippling can automatically handle necessary updates, from payroll taxes and provisioning new app permissions to assigning required manager training. That's why Rippling is the number one rated human capital management suite on G2, TrustRadius, and Gartner. If you're ready to run the backbone of your business on one unified platform, head to rippling.com/hardfork and sign up today. That's rippling.com/hardfork to sign up.
43:14
The browser is your business's first line of defense against online threats. Keep your employees and data safe with Chrome Enterprise, the most trusted enterprise browser. Create controls that enforce company policies, like rules that prevent employees from printing, pasting, or sharing company data. Access in-depth reports that show the apps and extensions your employees are using and where data is going and coming from. And prevent phishing and malware attacks from reaching your employees with automatic, proactive protections. Visit chromeenterprise.google to learn more.
43:47
Adobe Acrobat Studio. Your team's home base. Collaborate within a shared PDF space. You've got your docs, your plans, your specs, and then invite the crew to build what's next.
44:19
Talk up the team. Worse, they think that this design could be a contender.
44:28
But when somebody wonders what's the next step, AI helps you finish the rest. Bolts are tight now, your plan's refined. Run a smoother business when you're on the line. Do that with Acrobat. Learn more at adobe.com.
44:33
All right, Casey, well, it's time to look at some kid slop.
44:50
Yeah, Kev. We have recently been alerted to the fact that YouTube has been beset by a bunch of AI generated slop for toddlers, and it's time to take a look and see what we're dealing with.
44:52
My colleague at the New York Times, Arietta Leica, has a new story about this called "How AI-Generated Videos Are Distorting Your Child's YouTube Feed." And I read the story. I loved it. I thought, I've got to see some of this slop for myself.
45:02
So today, let's take a look at
45:16
this emerging new genre of AI slop directed at kids, with a new installment of the Hard Fork Review of Slop.
45:17
The Hard Fork Review of Slop! All right, what do we have up first?
45:34
First is a video about the alphabet. Do you know the alphabet?
45:41
You know, I keep meaning to learn it.
45:44
Okay, let's see how the AI generated Alphabet videos are doing. Yeah.
45:46
Kids TV, ABC Farm Animals.
45:51
So far, so good.
46:00
A is for alpaca. A. Alpaca.
46:01
Okay, okay.
46:05
Why are they being squirted out of a little paint bottle? Meow, meow.
46:07
Oh, I'm deeply uncomfortable with this. I'm going to have to answer so many questions from my 3-year-old about why ducks come out of toothpaste tubes.
46:16
Yeah. Listen, if you're three years old, you need to know this: that is not how a duck is made. So this video depicts the alphabet by showing a series of animals whose names begin with each letter. And then a hand holding, I guess, a tube of paint squirts out a little dollop of gross goo that then transforms, surreally, into an animal, while sort of demented slop children sing the name of the animal in the background. Yes.
46:25
Like, if you've ever seen the TV show Alex Mack from like the 90s on Nickelodeon where she like, sort of assembles herself out of goo on the floor. It's sort of like that, but for animals.
46:56
That one was a little before my time, I'm afraid, but I'll research it back in the archives. In any case, okay, yeah, very strange one. Let's take a look at this next video.
47:05
Kevin, what do we have now?
47:16
I believe we're gonna see some animals appearing out of colorful clouds.
47:17
I is for impala. Green impala says boing, boing, boing.
47:22
Join, join, join. J is for jackal. Yellow jackal says.
47:26
Spin, spin, spin.
47:35
L is for lion. Purple lion says roar.
47:37
Roar. Rolling the R's on his roars. Why is the other animal a doctor? Why are they running toward the camera?
47:41
Why is the kudu pink? And what is a kudu? Now, I have to say, I'm a little older than the target age for these videos, but I'm learning things myself. So this video shows another series of animals, each one connected to a letter of the alphabet. But this one uses a trope of having a doctor figure inject these animals with color. And the doctor is also an animal.
47:51
Is this like anti vax propaganda?
48:22
Well, the thing is, whoever is making the slop knows that needles are scary to children. So this is effectively just an engagement hack, right? You know the kid is going to watch the injection because the kid is afraid of needles. Again, this is just one of these little, insidious ways that these slop makers grab the attention of kids: by showing them something scary in order to hypnotize them into continuing to watch the slop.
48:24
It does teach the valuable lesson that giving injections to children does result in them turning into pink kudus and running toward the camera. So all children need to learn that lesson eventually.
48:48
All right, this next one, Kevin, shows animals turning into armored vehicles, trucks, and planes, showing once again that a key theme in slop for kids in 2026 is the transformation of animals.
48:58
Oh, wow. We got sort of a mecha quail. Okay. I wonder if that's a licensed use of the Thomas the Tank Engine IP.
49:16
R, S, P Safari Animal sound. Okay.
49:32
Unfortunately, that one slapped for me. My kid would be super into that, and I must never show him that.
49:36
This one reminds me of cartoons that I watched as a kid, where there was a lot of, you know, Transformers transforming. And so I could understand why a lot of little kids might like that one. But this is making me realize that another reason slop makers love the alphabet is that they can just stitch together a bunch of very short clips. And, of course, most of the AI video generators we have in this moment can only generate clips of up to a few seconds. So the alphabet becomes a perfect way to stitch all of that together into one piece that still has at least some sort of coherence.
49:43
Yeah, I mean, I guess I can see a world in which, like, this stuff is not actually that harmful. Like, maybe it is gonna teach some kids the Alphabet, but it is just so weird. And it strikes me as just like. Like, we already had a lot of videos teaching kids the Alphabet. Why? Why are people doing this?
50:17
But also, you know, listen, I'm no child neurologist, but I wonder about the consequences of essentially just creating images that are designed to overstimulate a child. You know what I mean? At least when you're watching a normal cartoon, there could be moments of relative calm, or a story might unfold over a few minutes. When you're just showing raw visual stimuli and bombarding a kid with it, it doesn't seem like it's probably that good for them.
50:34
This is how I know you do not have a child in the year of our Lord 2026. Because if you go onto, like, CocoMelon or any of the other extremely popular children's programs, they are essentially just this. It's a series of very short clips. Maybe they do one song, and then it cuts away to a different song, and then another. It is the same sort of hyperstimulating environment. So I'm not saying that any of this is good. I'm just saying that we crossed the Rubicon a while ago, and we are now in the land of hyperstimulating children's entertainment. And the big difference, to my mind, is that it's now just easier and cheaper to create this stuff.
51:03
This is my takeaway. When I have my child, which I do hope to do, the only visual stimulation that I'm going to allow them is a pile of sticks and an Etch A Sketch. They can make their own fun, you know, with the pile of sticks. But we are not going down this road, Kevin. All right, why don't we wrap up with a lullaby about animated children tucking themselves into beds made of fruit?
51:39
52:03
love a fruit bed.
52:05
Princess Blondy in her strawberry bed.
52:07
Okay, we've got a girl with a crown going into a bed made out of strawberries.
52:12
Banana bright, swaying gently through, so calm and clean. Super boy in his own orange glow drifts to sleep, all warm and slow.
52:17
Casey, what do you make of the fruit beds?
52:35
You know, this is just another one that feels a little more surreal than I am comfortable with. You know, it's like, what is the child supposed to conclude from any of it? Like, it just sort of seems designed to confuse children more than to educate or even really entertain them.
52:36
Yeah, and there have been, I will say, a few videos in here where I'm like, oh, my kid would like that.
52:54
But I don't like that.
53:00
You know, it's like, I don't want him watching stuff that's just a bunch of animals driving buses or lying in fruit beds.
53:01
Also, kids like lots of things that are bad for them. That's why we don't let them use cocaine.
53:08
Yes. Well, now that we've seen the videos, I feel like we are armed and ready to have the conversation about what they all mean. So let's bring in my colleague, Arietta Leica.
53:15
I don't know, I feel like I need to cleanse first. Maybe go read a play by Shakespeare or something.
53:23
Just go watch an episode of Bluey. Very heartwarming. Arietta Leica, welcome to Hard Fork.
53:28
Thank you so much for having me.
53:39
I wanted to just ask, like, first of all, how did you get interested in this? How did you learn about this? Did you get a tip from a toddler submitting things through the New York Times confidential tip process? How did this come onto your radar?
53:40
Yeah, so no toddlers were involved in the making of this story; we had some ethical concerns about showing them these videos repeatedly. I started by getting interested in this because there is so much AI slop on all of our social media feeds, regardless of the platform. And I was interested in learning more about how this content is being presented to children and how it's being moderated. And so where do many children watch media? It's YouTube. So I started by looking at channels that parents approve of, channels like Ms. Rachel and Bluey that are considered more high quality and thoughtful, and other popular channels like CocoMelon, and said, well, if I click on a CocoMelon video and I am a toddler and I look at the recommended videos right next to that video, what sort of videos would I be recommended? We focused on YouTube Shorts because a lot of these AI tools default to this vertical format. And so we just started scrolling through the feed and making a note of the different videos we were seeing. One of my colleagues actually coded a tool so that I wasn't really interacting with the screen and wouldn't influence the algorithm; I was just scrolling down, and it was making a note of all the different links I was seeing. And then we later analyzed these videos frame by frame and determined which were AI generated.
53:51
And were you using regular YouTube or YouTube kids?
55:13
So we started looking at regular YouTube in a private browser, because a lot of parents tend to put on their own YouTube accounts for their children. But then we also looked through YouTube Kids, you know, because it is a more controlled environment, with content that's supposed to be approved for children.
55:16
And when you started this process, what were your sort of expectations? And then how much AI generated stuff did you wind up seeing as like kind of a proportion of the total?
55:34
So I think, especially when I was watching channels like Ms. Rachel and Bluey, I was expecting to see content that would be more along the lines of those programs. Right. More Bluey shorts. And I wasn't really seeing that. I was surprised by just how much AI-generated content there was. In one 15-minute session that we were scrolling, more than 40% of the videos were AI generated.
55:43
Wow.
56:10
And how are you determining that? Is there, like, a detection system that you're plugging these things into?
56:11
Yeah, the detection system is called looking at the video.
56:16
No, I'm saying, like, if you have a video of, you know, an alpaca getting squeezed out of a paint tube, that's very clearly not photorealistic, but it could be made using some CGI software or something like that. How did you determine that this was AI?
56:18
Right. So some of these videos were a little bit more obvious. But we looked at them frame by frame and tried to look for inconsistencies. In some cases, objects were disappearing in the background. There was text in the background that was distorted. If there was an animal in the background, it would morph into something else. Some of the videos were actually labeled as synthetic media, even though they were animated. And then we would also look at the YouTube channel and get a sense of, okay, what sort of content have they been posting in the last few years? And even some of the channels that had been around before the last few years, when this technology really improved, had been making very simple, low-quality animation that looked nothing like what they've been posting in the last few months.
56:31
Now, Arietta, I have done a lot of reporting on YouTube, but it's been a few years. What are their actual policies like today on this kind of content?
57:12
So I sent them a few channels as examples, and I asked them what their policy is around flagging AI content that's being made for children, including animated content. And they told me that content creators are required to label content that is realistic looking. In some cases, we saw that creators weren't labeling that content, and I'm referring to videos of animals displaying behaviors that those animals don't do, like elephants doing gymnastic maneuvers on a tightrope. Those videos, for example, are not labeled as synthetic media. Some creators were labeling the animated videos, but it's not part of YouTube's policy. So really, the burden is falling on parents, especially parents who may not want their child to watch AI-generated media, to determine, okay, what sort of video is this? And not only that, but you have to click through the video before you see the synthetic media label. And when it comes to kids' content on YouTube, YouTube doesn't allow comments. Sometimes, when there's an AI-generated video elsewhere, you'll have people in the comments say, this is AI. But in this case, parents can't really talk among themselves about what the video is.
57:23
Has there been any backlash yet that we've seen from parents against these kinds of videos? Like, are some parents upset by what they're seeing?
58:32
Yeah. We came across several Reddit forums where people were asking how to get around this AI slop for kids, whether there was a filter on YouTube. Some parents recommended making a playlist. Other parents were like, get off YouTube altogether. And I think a lot of it is also going to fall on parents to, you know, closely supervise what kids are watching since the algorithm is pushing so much of this content.
58:41
Right. Although, of course, the whole point of YouTube is that it's what you show your kids so that you don't have to spend time with them. Right. I mean, not in a mean way, but, like, maybe you have to cook dinner.
59:05
Right.
59:12
Parents.
59:13
Yeah.
59:14
Like, if parents were, you know, sitting down to like watch YouTube with their children, like, we would not be in this situation.
59:14
Yeah.
59:20
And I'll say, as the parent of an almost-four-year-old, the couple of times that I've found him watching something like this, because he, you know, went onto the YouTube Kids app when he wasn't supposed to or something, my feeling about it is not, this is damaging my child. It's just, this is so bad. The quality is just so bad. I wonder, Arietta, if you think there's any sort of evidence here. What do we know about the actual effects of this on kids? Is the worry that it's actually harming them, or is the worry that there's just better stuff they could be watching?
59:20
So there's a lot of factors to consider. These videos contain extraneous effects, and we know that when videos contain all these bedazzling elements, kids don't learn as well. They're also devoid of a narrative arc, and it's really beneficial for kids to watch content with a beginning, middle and end, with characters they can relate to and are familiar with, content that uses short phrases they can understand and doesn't feature these abstract concepts. Worst case, a lot of these videos are fantastical, and that can be cognitively overloading for the child. And when it comes to short-form content, experts say that for children under five, whose attention systems are still developing, it's hard to follow rapid changes. It puts a really heavy burden on children to process that information, especially when we see this more realistic, fantastical content of animals showing bizarre behaviors.
59:51
You know, when I was growing up, Kevin, in the 1900s, I would watch He-Man. Remember He-Man? Yes, it was a cartoon about a gay guy who had a sword and rode a tiger around. And as a young...
1:00:47
Was he gay? Canonically?
1:00:57
I think not canonically, but read between the lines. Okay. And, you know, as a young gay kid, I could see that and relate to that. And yes, the stories were fantastical, but to Ariana's point, there was a beginning and a middle and an end. And it didn't exactly teach me the ways of the world, but it maybe told me something about storytelling and narrative, and it wasn't just sort of raw, flashing shapes and colors.
1:00:59
Yes. I mean, I remember, before I had a kid, having this argument with people at YouTube about YouTube Kids, because there was discussion and concern even back in, like, 2017 or 2018 about these weird videos. They weren't AI-generated at the time, but they were, like, weird CoComelon ripoffs. And I remember saying to these YouTube people, you guys gotta change the way that you do this. Instead of just default-allowing channels to serve content to very young children, you have to set up some kind of whitelist where, if you prove that your stories have a beginning, middle and end, and there's some thought being put into them, you're allowed to be on YouTube Kids. And if you're not, if you're just some content farm that's churning out CoComelon ripoffs, you're not going to be allowed on. So do you get the sense, Ariana, that there's any concern or consideration of this at YouTube, or are they basically just sort of like, we've got bigger fish to fry?
1:01:22
I mean, these videos and these channels are recommended to me multiple times. And I've been looking at these videos since November, so...
1:02:17
And what has it done to your brain? Yeah, and I bet you know your Alphabet.
1:02:25
I do, yeah.
1:02:29
My.
1:02:30
The Alphabet. And, you know, those songs really get stuck in your head, right?
1:02:31
I. Yeah, oh, trust me, I know.
1:02:34
And it's so fascinating to me how much more of these videos are recommended to me than more thoughtful content, like PBS Kids, for example, because PBS Kids also puts out Shorts. But I was seeing more of this than PBS Kids.
1:02:37
I mean, just to sort of echo Kevin's question, this has gotten me thinking about Elsagate, right? That was the 2017 controversy where parents saw a lot of really surreal content on YouTube and got very upset. It seems like we have not seen that kind of backlash yet. And I'm wondering, is it because the content is actually better here in some ways, or are most parents just not aware of the volume of slop that might be being served to their kids?
1:02:49
Because so many of these videos are animated, it may not be obvious to some parents that these are AI-generated. But we were working in a more contained environment. Elsewhere on YouTube, there are so many videos featuring popular characters that children like in very violent scenarios. There's this show called Masha and the Bear, and I remember I typed in Masha and was coming across videos of Masha's stomach being cut open. That channel was putting up these videos, and YouTube didn't take them down until I linked out to one of them in my story.
1:03:17
Wow.
1:03:52
And I don't know if any of you know children who are obsessed with that KPop Demon Hunters film, but there are so many horrible videos featuring those characters. And again, it's nonsensical. Sometimes the characters are pregnant, and those are all over YouTube as well, if you type in those characters.
1:03:53
And by the way, that was the exact scandal of Elsagate: people were seeing Elsa from Frozen pregnant all over YouTube. So this truly is just a sequel to Elsagate. And I think it's interesting that it has flown under the radar until your story, Ariana.
1:04:13
Right. And before, I mean, you had to have some sort of animation skills to do that. But now anyone can do this in a few minutes. So, you know, we're going to be seeing more of this content.
1:04:27
And again, what is so insidious here is that the people who are making these videos are counting on the fact that regular kids are going to go searching for KPop Demon Hunters and Masha and the Bear, and they are unwittingly just going to be served this slop of, like, you know, the poor bear getting cut open and the pregnant demon hunter, just through the recommendation algorithm. And this is where YouTube's responsibility lies, right? This is where YouTube should be saying, we're actually not going to show this incredibly disturbing stuff to kids.
1:04:38
Yeah. I would say, among the parents of young children that I know, very few of them let their kids watch YouTube, even YouTube Kids, for this exact reason: you might start with Masha and the Bear, you might start with a KPop Demon Hunters video, you might start with Bluey or CoComelon. But you go away for 10 minutes to cook dinner, and you come back, and they're on their third hop from the recommendations, and they're watching some weird AI-generated thing with, like, Spider-Man on the surgery table or something. And it's like, what are we doing here?
1:05:07
Yeah, because it turns out that all of these recommendation algorithms work exactly the same way. They find the video that comes closest to going over the line, because it gets more engagement than any other, and so that's what shows up in the feed. Right? So how many times are we going to have to revisit this story?
1:05:55
It really depresses me, honestly.
1:05:56
Ariana, until platforms like YouTube take more action on this, what can parents do in the short term? I mean, if you're going to let your kid watch YouTube or YouTube Kids, is there anything you can do to, like, toggle off the AI slop, or do you just kind of have to make peace with it?
1:05:57
In terms of filtering it out, there isn't an option to do that right now. I mean, in January, YouTube announced that parents will be able to set time limits on YouTube Shorts. We're seeing a lot of this AI content in YouTube Shorts, so that might be an option.
1:06:13
They could call it the SlopWatch. You know, it's like a stopwatch, but for slop.
1:06:27
I just wonder if, like, all of this is a losing battle. I feel like...
1:06:37
Kevin Roose, do not bring your nihilism into the recording studio today.
1:06:37
Well, just look at what is happening to older children with, like, Tung Tung Tung Sahur and all the AI slop from TikTok. And I just feel like we're seeing old people falling for AI slop on Facebook and middle-aged people falling for it on X and Instagram. It just feels like we're creating this lifelong pipeline of whatever slop is going to be most engaging to you.
1:06:40
Well, I'm glad you brought that up, because that is the thought that I'm having as we wrap up: while right now we're talking about the concerns we have for kids, I think that older teens and adults wind up having the same issue, which is they open up TikTok or Instagram Reels and they see something that's designed to hypnotize them as well. And it might not be colors and shapes and the alphabet, but it does seem like the basic idea is just, sort of, turn your brain off and experience the raw visual stimuli for as long as we can get you to.
1:07:06
Well, Ariana, fascinating story. Thank you so much for doing this digging and watching all of these children's videos so that I don't have to, and so I can protect my child from them.
1:07:37
Thank you so much for having me.
1:07:46
Way to take one for the team.
1:07:47
Yes, Ariana, you really did take one for the team on this one, I have to say.
1:07:49
All right, thanks so much.
1:07:53
Thanks.
1:09:52
Hard Fork is produced by Rachel Cohn and Whitney Jones. We're edited by Veran Pavic. We're fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Our executive producer is Jen Poyant. Original music by Alicia Beu, Rowan Niemisto and Dan Powell. Video production by Sawyer Roque, Pat Gunther, Jake Nichol and Chris Shot. You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Schumann, Pui-Wing Tam and Dalia Haddad. You can email us, as always, at hardfork@nytimes.com. Send us your toddler slop.