Your Undivided Attention

America and China Are Racing to Different AI Futures

58 min
Dec 18, 2025
Summary

This episode examines the divergent approaches the US and China are taking toward AI development, challenging the narrative of a direct technological race. While the US focuses on building artificial general intelligence (AGI) driven by transhumanist philosophy, China pursues practical AI applications integrated into existing sectors like manufacturing and healthcare, with different philosophical and economic motivations underlying each approach.

Insights
  • China's AI policy is shaped by a diverse ecosystem of actors (companies, academics, bureaucrats) rather than top-down control by Xi Jinping, making it more complex than Western perceptions suggest
  • The US and China are racing toward fundamentally different AI futures: the US toward AGI/superintelligence, China toward narrow, applied AI systems that augment existing industries
  • China's optimism about AI stems from decades of technology-driven economic growth and convenience improvements, while US pessimism reflects concerns about labor displacement, misinformation, and social dysfunction
  • Compute constraints from US chip export controls significantly limit China's ability to pursue a Manhattan Project-style AGI race, making involution (excessive competition with diminishing returns) more likely than a coordinated national effort
  • International cooperation on AI safety is possible through parallel domestic regulation and light-touch coordination rather than binding treaties, similar to how nuclear safety evolved during the Cold War
Trends
  • Shift from frontier model development to AI application deployment across traditional sectors in China
  • Rising concern about AI-driven unemployment in China despite previous government dismissal of the issue
  • Robotics bubble in China with high valuations but limited real-world deployment and profitability
  • Involution dynamic in Chinese tech sectors (solar, EVs, AI) driving price wars and industry consolidation
  • Declining venture capital investment in China's AI sector despite technological achievements
  • Growing recognition that US and Chinese AI safety researchers share common ground on existential risks
  • Demographic crisis in China driving urgent need for automation and AI-enabled elderly care solutions
  • Philosophical divergence: US AI development rooted in transhumanism and sci-fi narratives vs. Chinese pragmatism focused on economic productivity
  • Increasing Chinese government regulation of AI focused on content control and governance applications
  • US export controls on advanced chips creating compute scarcity that shapes China's AI strategy
Topics
  • US-China AI Competition and Race Dynamics
  • Artificial General Intelligence (AGI) Development Philosophy
  • AI Safety and Existential Risk Management
  • Chip Export Controls and Compute Constraints
  • AI Applications in Manufacturing and Healthcare
  • Humanoid Robots and Elderly Care Automation
  • Chinese AI Regulation and Governance
  • Venture Capital Funding in China's AI Sector
  • Labor Displacement and Unemployment from AI Automation
  • Demographic Crisis and Population Decline in China
  • DeepSeek Model Efficiency and Performance
  • International Cooperation on AI Safety
  • Involution in Chinese Technology Markets
  • Surveillance and Facial Recognition Technology
  • Mobile Internet and Digital Payment Integration
Companies
DeepSeek
Chinese AI company that achieved near-parity with US frontier models; founder is AGI-focused but constrained by compute availability
OpenAI
US frontier AI lab founded on AGI/superintelligence belief; pursuing large-scale compute and energy infrastructure deals
Anthropic
US AI safety-focused company; CEO Dario Amodei discussed recursive self-improvement risks and competitive dynamics
DeepMind
AI research lab where researchers discussed AGI development in early 2000s internet forums
NVIDIA
GPU chip manufacturer subject to US export controls; constantly working around restrictions to serve China
Tencent
Major Chinese tech company that influenced AI regulation through corporate thought leadership on deep synthesis
Alibaba
Large Chinese tech company with profit-making arms funding AI development despite venture capital constraints
WeChat
Tencent's platform with significant investment in entertainment and digital products using generative AI
Uber
Referenced as example of price-war strategy similar to Chinese tech companies' involution dynamics
People
Tristan Harris
Host of Your Undivided Attention; framed the episode around Cold War missile gap parallels and AI safety concerns
Selena Xu
Technology analyst and co-author of New York Times op-ed on China's AI; expert on Chinese AI regulation and applications
Matt Sheehan
Senior fellow at Carnegie Endowment for International Peace; researches global technology issues with China focus
Xi Jinping
Chinese leader; discussed as less directly involved in AI policy details than Western perceptions suggest
Sam Altman
OpenAI leader; wrote about AGI and superintelligence in 2014-2015; pursuing large-scale compute infrastructure
Ilya Sutskever
OpenAI researcher who wrote about AGI and superintelligence in early period of AI safety discourse
Shane Legg
DeepMind researcher who discussed AGI development in early 2000s internet forums
Dario Amodei
Anthropic CEO; publicly stated confidence in US compute advantage and recursive self-improvement capabilities
Geoffrey Hinton
Nobel Prize winner in physics; participated in International Dialogues on AI Safety in Shanghai
Andrew Yao
Leading Chinese AI scientist; participated in Shanghai consensus dialogue on AI safety and risks
Eric Schmidt
Co-authored New York Times op-ed with Selena Xu on state of AI in China
JFK
Referenced for making missile gap a central theme in 1960 election and accelerating nuclear weapons buildup
Peter Zeihan
Author who wrote extensively about China's demographic collapse and population decline risks
Quotes
"Before we open a Pandora's box with the potential for global catastrophe, we need to have the maximum clarity and situational awareness and not be led astray by false narratives or misperceptions."
Tristan HarrisOpening segment
"The biggest misconception is the idea that Xi Jinping is personally dictating China's AI policies... Most of this is happening at levels of detail that he's just not involved with."
Matt SheehanEarly discussion
"They aren't trying to build AGI. They're trying to like make a profit. There isn't this kind of anthropomorphic machine god or like the lingo that you see here in the Bay Area."
Selena XuMid-episode
"If you're in that situation with 5 million leading chips and you want to lead a Manhattan Project thing, you're probably not going to tell your local officials all around the country to be deploying AI for healthcare and manufacturing."
Matt SheehanCompute constraints discussion
"When you visit China, it feels like going into the future. And everything just works like you're 10 or 20 years further into the future than in the U.S."
Matt SheehanTechnology integration discussion
Full Transcript
Hey everyone, welcome to Your Undivided Attention. I'm Tristan Harris. In 1957, two events turned up the heat on the Cold War between the United States and the Soviet Union in a major way. The first was the launch of Sputnik, which showed the world that the Soviets were far ahead in the space race. The second was the release of a government report called the Gaither Report that warned of a, quote, missile gap between the two superpowers. And according to the report, the USSR had massively expanded their nuclear arsenal and America needed to do the same in order to ensure mutual destruction. JFK made the missile gap a central theme in the 1960 election. And after he won, he dramatically accelerated the buildup of American nuclear weapons, starting what we now think of as the nuclear arms race. But today, we know that the Gaither Report was wrong. Historical accounting from Soviet documents and early satellite imagery showed that the USSR was actually far behind the U.S. in nuclear capability. Rather than the hundreds of ICBMs that the report claimed that they had, the Russians at the time only had four. The point of the story isn't that the U.S. shouldn't have taken the USSR seriously as an adversary. The point was, before we open a Pandora's box with the potential for global catastrophe, we need to have the maximum clarity and situational awareness and not be led astray by false narratives or misperceptions. And if we had had that clarity in the 1960s, we might have been able to do more to avoid the nuclear arms race and seek diplomacy and disarmament instead of racing. Well, today we're on the brink of a potentially new catastrophic arms race between the United States and China on AI. And China had their own kind of Sputnik moment when DeepSeek was launched in January of this year, showing that their AI technology was nearly on par with frontier American AI companies. And now you're hearing a lot of top voices in the U.S. 
government and technology use the same familiar rhetoric of the past, the idea that if we don't build extremely capable AI, then China will. And we must win at all costs. So in this episode, we want to get to clarity on what the state of AI actually looks like in China. Do they see the AI race like we do? Are we racing towards the same things? Are we in a race at all? And what kind of concerns does the Chinese government and tech community have about AI in terms of the risk versus rewards? Today's guests are both experts on AI and China. Selena Xu is a technology analyst who's written extensively about the state of AI in China and co-authored a powerful op-ed with Eric Schmidt in the New York Times. Matt Sheehan is a senior fellow at the Carnegie Endowment for International Peace, where his research covers global technology issues with a focus on China. Selena and Matt, welcome to Your Undivided Attention. Thank you for having us. Thanks. Great to be here. So I want to start by asking you both a pretty broad question: what do you each see as the most persistent misconception that Americans have about China and AI? For me, the biggest misconception is the idea that Xi Jinping is personally dictating China's AI policies, the trajectory of Chinese AI companies, that he has his hands very directly on all the key sort of decisions that are being made in this space. And, you know, Xi Jinping is the most powerful leader since Mao. He runs an authoritarian single-party political system, so he clearly has a lot of power. But just on a very practical basis, most of this is happening at levels of detail that he's just not involved with and that even senior officials within the Chinese Communist Party are not involved with. There's a huge diverse array of actors across China within the companies, within research labs, within academia, the bureaucracy, that all have a major influence on China's AI trajectory, how they see risks, how they see the technology developing. 
And those people are constantly feeding into the political system. They're shaping how the government thinks about the technology. They're developing the technology themselves without really hands-on guidance from officials in some cases, in many cases. And understanding that diversity of actors and the role that they play in the ecosystem is critical to being able to understand where China's going and in some cases maybe affect where they're going on this. And just to briefly elaborate on that, because there is just this narrative that China is run by the Chinese Communist Party and Xi runs the Chinese Communist Party. So it feels from external views that he really is running things. How do we know that things are coming from these different places? What's sort of the epistemology we use? One of the main focuses of my research is to essentially reverse engineer Chinese AI regulations. So to take a Chinese AI regulation like their regulation on generative AI and say, where did all the ideas in this regulation come from? Can we trace them backwards through time and find, oh, this idea originated with this scholar at this university who essentially popularized this concept? And I'll just give like one very practical example of this. Their second major regulation on AI was called the deep synthesis regulation. And specifically what they were trying to do is they were trying to regulate deep fakes. And so for a long time, the conversation in China is how we're going to regulate deep fakes. And then Tencent, one of the biggest technology companies in China, the creator of WeChat, which has a ton of money invested in entertainment, video games, digital products, all things that use generative AI. They started thinking, like, everyone talking about deep fakes all the time isn't so great. We need to just kind of pivot this conversation a little bit. So essentially they did what a lot of American companies do. 
They did corporate thought leadership, where they started releasing reports on deep synthesis, how that's really the better term for this technology, and we should really understand all the benefits of it. And we see just very directly that term, it originated from inside of them, it made its way into official discussions and it became the title of a regulation and affected how that regulation was made. And that's happening at a bunch of different levels across companies, across academics, think tanks. So it's, yeah, it's a diverse ecosystem. I think the way to think about Xi Jinping in relation to it, or just say senior leaders, they're kind of the ultimate backstop. You know, if they are directly opposed to an idea, and they're aware that that thing is happening, they're going to be able to put a stop to it. But in most cases, they don't have an opinion on the details of AI regulation. They don't have an opinion on what is the most viable architecture for large models going forward. And so those things originate elsewhere. That's super helpful. Selena, how about you? What are some of the most powerful misconceptions about AI in China? I think this is one that increasingly more people have started talking about, which is that we've heard a lot about AGI and the U.S. and China being in a race towards artificial general intelligence, which is AI that is human level intelligence. And I think if you look at what's really happening at the policy level, and in a lot of companies outside of some of the few frontier labs like DeepSeek, most of these companies are thinking very much about AI applications, AI enabled hardware, or thinking about, oh, if you're a local government official, how do you integrate AI into traditional sectors, into things like manufacturing? So I think this is kind of the thing you're seeing on the ground in China right now, instead of this very scaling-law-motivated economy, heavily leveraged on deep learning. 
Okay, so Selena, ostensibly both the US and China, you know, the US at least thinks that we're racing to this sort of super god in a box, you know, AGI racing towards super intelligence. And that's what this whole race is about. Because if I have that, I get this permanent dominating runaway advantage. And you're saying that China does not necessarily see AGI as the same prize. Could you just elaborate on this? Let's really get to ground on this because it is the central thing that's driving the kind of U.S. approach to AI right now. Yeah, I think the caveat here first and foremost is that it's hard to know exactly what China's top leaders are thinking. But we can look at what has been happening on the ground in the industry and also in policies. So if you're looking at the AI Plus plan, for instance, which is this major national strategy that was released, you don't really see it; there is no mention of AGI. Secondly, when you look at what they're actually championing, it's very much embedding AI into traditional sectors like manufacturing and industrial transformation and also emerging sectors like science and innovation or even like governance. So it's very much application focused. And all of the stuff that they're trying to push for is very much, how do we use AI, massively deploy it, so as to actually see a real productivity boost and improve our economy. So that is kind of the way people are thinking about AI. It is a bit instrumentalist. They aren't trying to build AGI. They're trying to like make a profit. There isn't this kind of anthropomorphic machine god or like the lingo that you see here in the Bay Area. And that might be because of China's history with other kinds of technologies, which is kind of interesting philosophically. 
But I think also at the same time, it's very much because they don't have the cultural context in the past that a lot of people in Silicon Valley have been educated on, like from The Matrix to Her, and thinking about AI in, like, the Turing test way. Yeah, let's break that down a little bit more, because so much of this comes down to the philosophy or religion almost or the kind of historical roots of where your conceptions of AI come from. And would you both just comment a little bit more on kind of the roots of the AI philosophy in Silicon Valley versus the roots of what are the philosophical or even sci-fi or just other sort of cultural lineages or ideas that inform what AI is for both cultures? You know, the leading labs in the United States, they were founded very much on the belief, and at the time, I would say it very much was a belief, that we were going to get to artificial general intelligence. And then that was going to rapidly transform into superintelligence. And this could have essentially infinite benefits or it could wipe out the human race entirely. Like, that is baked into the DNA of OpenAI, Anthropic and some other leaders, a lot of leading researchers in this space. Ilya and Sam Altman were writing about this in the 2014, 2015 kind of days, or people talking about AGI, Shane Legg at DeepMind, you know, talking about this in the early 2000s on internet forums. This is like a very deep, sort of almost transhumanist-influenced cultural idea. And yeah, you know, it builds on a legacy of the Terminator movies. It builds on a legacy of science fiction. That's not to say this is all siloed in the United States. Like, Chinese people also read international science fiction. Many people in China sort of share some of these beliefs. But I'd say when you think about the DNA of the leading companies, it's very unique in the United States. 
When it comes to the Chinese companies, you know, again, we kind of have to disaggregate the different actors here and even just individuals. I think the way Selena characterized the Chinese government's position on this is exactly correct. They are very focused on application. They're saying, how can this technology help me achieve my political, economic, social goals? How can it upgrade my economy? How can it jump over the middle income trap? How can it empower the party to have greater control? That's their focus. But you also do have some people like the founder of DeepSeek, who is himself, you know, as we'd say in the US, AGI-pilled. He does believe that sometime in the perhaps not too distant future, we will achieve something like artificial general intelligence. This will probably have a lot to do with how much computing power we put into the models. Pretty similar, I think, from what we can tell from the public statements he's made to the way that people like Sam Altman view this. You know, he's operating within an ecosystem. He has limits on the compute that he can access. He has limits on the government that he's dealing with, the talent that he has at his disposal. So it's not to say that because the founder of DeepSeek believes in AGI, that means, you know, that's where China is heading. But there is this diversity of actors, government, sort of influential policy people, entrepreneurs, engineers. Selena, do you have any parsing of that on top of what Matt shared? Yeah, but I would say in response, I think the main thing here is DeepSeek has been pursuing a slightly different path than some of the U.S. frontier labs, possibly because of compute constraints. They are very much more efficiency focused. And that's why I think they've poured so much like technical resources and attention into basically achieving highly efficient models. And that is kind of the goal he's going towards. 
So that's why in January, when people kind of woke up to DeepSeek, part of the surprise was just how good it was, bearing in mind, you know, the kind of cost and compute, even though that's kind of vague and murky, but it's definitely, you know, at least an order of magnitude lower than some of the training costs in U.S. frontier labs. So I think that's kind of a different approach that they're pursuing. They are AGI-pilled, but even then, I think what they're doing is not like, oh, scaling and building ever bigger data centers that can compare with Anthropic and OpenAI. And that's just not the reality in China. May I build on that a little bit? Yeah, please, go ahead. You know, one way to think about this is like, where is the government putting its resources? And do companies need the cooperation of government resources in order to achieve their goals? I think in the United States, especially over the last year or two, the way OpenAI has been operating, not just with the U.S., but with governments around the world, is this belief that fundamentally this is going to be a large-scale energy, computation, huge financial costs, you know, striking deals around the world to build out these data centers that they believe are going to be essential. And so, if we're sort of thinking about it through that lens and we look over at China and we say, okay, where is the Chinese government putting its bets down? And I think the AI plus plan that Selena described earlier is a pretty clear signal that where they are putting their money down and their sort of bureaucratic resources down is on applications. The AI plus plan, you know, it sounds a little weird to our ears. It basically means AI plus manufacturing, AI plus healthcare. Essentially, we want to use AI to empower all these other sectors. And that's where they are telling their local officials, saying, you know, if you're gonna subsidize an AI company, subsidize an application that makes sense in your area. 
Subsidize these things. They're not saying, hey, let's consolidate all our computing resources and devote them just to DeepSeek so that they can push their one sort of mission. Well, this is very interesting. And Matt, you said earlier in a different interview that the Chinese Communist Party is like a big HR department, that it's kind of run like there are these performance reviews, and they set these top-level goals as a nation, and they say, our goal is to make sure we're applying AI to all these different industries, and we measure the performance of each local official in each province and then down to each city according to how good they are at doing that. And what you're saying is they're not saying to all those officials, we're going to judge you based on how good you are at creating a superintelligent God-in-a-Box Manhattan Project. We're judging you based on the application of AI. Still, there might be some who are listening to this and saying, yes, but how would we know? What if China is secretly pouring a Manhattan Project-sized amount of money into DeepSeek? Because it's important to recognize that they did recently start locking down and tracking the passports of employees of DeepSeek. They're sort of treating them kind of like the nuclear scientists. One could sort of view it that way. I'm trying to steelman these different perspectives because there's sort of this, as we talked about in the opening with this missile gap idea, there is this deep fear that if we get this wrong and they are building a Manhattan Project, and that is the defining thing, then we could lose here. So how would you sort of further square those pictures? Yeah, I think it's very important to steelman these and to also acknowledge how much we don't know and can't know about what's going on inside China. 
And I do not rule out the possibility that sort of somewhere deep in a bunker in Western China, they are slowly trying to accumulate some level of chips that would, you know, power a supersized data center. Like, we cannot rule that out. I hope our intelligence agencies are very much on this and would have awareness of it before anything came to fruition. But I think, again, to just kind of, where are they putting their money and their bets down, like if that's what you're trying to do, we know that China as a country on the whole is compute constrained. They have a limit on how much computational power, how many chips they have in the country, largely due to U.S. export controls. And just explain that for a moment just for people who may not be tracking. So the U.S. started these chip controls in, what year was it, when we stopped basically giving China these advanced AI chips? Yeah, so the big restriction came in 2022 and has been updated every year since then, 2022, 2023, 2024. And I guess the sort of simplest way to understand it is that in order to train and deploy the best AI models, you need a lot of computing power. And you want that computing power in the form of very advanced chips that are called GPUs made by NVIDIA, a super hot company right now. And basically what these different executive orders have said is we will not sell the most advanced chips to China and we will not sell the equipment needed to make the most advanced chips to China. We're going to ban the export of these things. Now, these export controls are very imperfect. They have a lot of holes in them. There's smuggling. Essentially, they've needed to update it because the companies, NVIDIA specifically, are constantly sort of working their way around it. But despite all those sort of, you know, holes in the export controls, they have imposed large-scale compute limits on China. The United States and U.S. companies, if they want to access maximal compute, they can do that. 
And Chinese companies just have less, Chinese companies and government. And so if you're in that situation, just say that you have, you know, 5 million leading chips. That's probably more than they actually have. If you have 5 million leading chips and you want to lead this kind of Manhattan Project thing, you're probably not going to tell your local officials all around the country to be deploying AI for healthcare and manufacturing in all these local scenarios. Because they'd be using up all the chips. So you're saying if they succeed in this AI plus plan, then it would take away from their success as a Manhattan project. They couldn't do both realistically given the finite number of chips that are currently available to them because of these controls. Yeah, a lot depends on how many chips you end up needing for the quote-unquote Manhattan project. But just in terms of signaling, the signaling that they're sending to their own officials is focus on applications, and they're deploying resources in that direction. Yes, Selena, do you want to add to that? Completely agree. And also, I think the TLDR is just that, like, if they're trying to build a Manhattan project for AGI in China, the sheer amount of chips required for that, if that's being smuggled in, I think there's no way that any intelligence agency or NVIDIA itself would be unaware. Selena, you recently attended the World Artificial Intelligence Conference in Shanghai, and we'd just love to take listeners on kind of a felt sense for what AI feels like as it's deployed, because I think the physical environment of AI reaching your senses as a human is very different in China than in the U.S. currently. So could you just take us on a tour? Like, viscerally, what was that like? Yeah. And there are a lot of different kinds of AI, I would say. And I don't know whether you, Tristan, have been to China, but pre-generative AI and LLMs and chatbots, there were already digital payments. 
People paid with their palm or facial recognition while you're entering the subway. Those are other kinds of AI that's already very visceral and kind of all around you. This time around in July for the World AI Conference, on top of all of that, I think one of the biggest things that really struck me was how just like pervasive robots were. They were everywhere. So it was basically in this huge expo center. And I think about like 30,000 people were there. All the tickets were sold out. A lot of like young children, families, even some grandparents. It was like whole of society kind of thing. And it was like a fun weekend hangout. And everybody was just like milling around the exhibition booths, shaking hands with robots, like watching them like fight each other MMA style. There were also robots just like walking around. Some of those were like mostly remote controlled by people. There were a lot of AI enabled hardware stuff like glasses or like wearables, including some like AI plus education, like dolls, you know, so all kinds of innovative applications of AI in like consumer oriented ways. And you just see people interacting with AI in a very physical, visceral way that you don't really see here in the US. Like here, people talk about AI as this like, oh, far away machine God thing. But like in China, it was very palpable. It was extremely integrated into the real world environment. Some of it is hype. Like a lot of the humanoids and robotic stuff is still very nascent and not very mature. And you can see some of the limits of that when like robots fell down or didn't really react in the right way. But I think that the enthusiasm and like the optimism really was very, very interesting. Like people were actively like excited about AI, right? Versus here, it's more like the Terminator or something. 
Yeah, I wanted to ask about that because I feel like if you went to a physical conference like that, and given there are far fewer robots and robot companies in the US, although we do have some leading ones, I still feel like the US attitude is more like, this is bad. Like, a lot of the feeling is just, this is creepy, this is weird, I don't really like this. But the thing that I keep hearing is that when you're there walking the grounds, everyone is just pumped and excited and optimistic about AI. And I'd like to develop that theme a little bit more here about why one country seems to be much more pessimistic about AI and the other, you know, China's largely optimistic. But Matt, just curious here to add on to Selena's picture here, you also were, I believe, in China in the 2010s as the mobile internet was kind of coming online. And that kind of has a role, I think, in how China sort of sees technology optimistically versus more pessimistically here. Absolutely. And I think maybe first touching on the sort of optimism, pessimism towards technology more broadly, and then we can bring it into AI. I think, you know, there's a lot of questions about exactly what the survey results show, are these good survey results? You know, how do we know this? It tends to rely a lot on anecdotes and sort of, you know, vibes. But I think maybe the most important factor here is the rise of information technology, eventually the internet, now AI, the way it's come into people's lives in the last 45 years, since say 1980. And if you look at what happened to China since 1980 versus what happened in the United States since 1980, it's very different. This has been essentially the biggest, longest economic boom in Chinese history. And normal people have seen their incomes multiply by factors of, you know, 10 or even like 20 over that period of time. Basically, since information technology came into the world, Chinese people's lives have been getting better. 
In the United States, it's very hard to say, you know, are Americans' lives better? But a lot of people associate technology with impacts on labor, with more dysfunction at a political level, misinformation, the damaging effects of social media on kids. And this has just been a period of time when the United States has largely turned more pessimistic about our society and our prospects, at a national level and, I think, at an individual level. Or, you know, you could take it to the last 10 or 15 years since the rise of the mobile internet. This has been, you know, one of the most fractious times in American political history. And it's been, with some exceptions, a pretty good time in China, at least from the perspective of someone who's just trying to earn more, live better, have more convenience in their lives. So that's a very, you know, 30- or 40,000-foot-level take on the optimism and pessimism, but I think it is pretty foundational to how people look at these things. Yeah, I lived in China from 2010 to 2016, and this was really the explosion of the mobile internet in China. Obviously, in the U.S., you know, the mobile internet was expanding rapidly too, but this is when China was very rapidly catching up to and then surpassing the global frontier of mobile internet technologies. What is the mobile internet doing for ordinary people? And to me, some of the visceral memories from that time are from around 2014, 2015, when mobile payments kicked into high gear and you suddenly had this explosion of different real-world services that were being empowered by the mobile internet. So here in the United States, obviously we have Uber and Lyft. These are, you know, real-world services empowered by the mobile internet. In China, they had their own Uber and Lyft, but they also had just a huge diversity of local services. You know, as of 2013, 2014, someone would come to your house and do your nails for you with just, like, four clicks.
The guy who's literally selling baked potatoes out of an oil drum had a QR code up there in 2014 so you could pay via that. It was this very visceral feeling that technology is integrating into every facet of our lives. And in large part, it's making things way more convenient. When I got to China in 2010, if you wanted to buy a train ticket, especially during Chinese New Year, it meant you got up really early and waited in a super long line for a very slow, bureaucratic, in-person ticket vendor to sell you the ticket. When WeChat, mobile payments, all of that got integrated into government services, including ticket selling, suddenly it became way more convenient, way easier to do these things. And of course, the mobile internet has led to convenience in both places. But having, you know, lived at sort of the center of this in both countries, I just think it had a much more tangible feeling in China, and a feeling that it's genuinely making our lives better at this point in time. Just to add to that, I mean, the thing that I hear from people who visit China, or even Americans who lived in China for a while and then come back to the U.S.: when you visit China, it feels like going into the future. And everything just works, like you're 10 or 20 years further into the future than in the U.S. Then when people who have actually been in China for a while come back to the U.S., it feels like you're going back in time, and things feel less functional and less integrated. I'm not trying to criticize one country or another. I think it's actually based on kind of leapfrogging, right, where the U.S. had already built up a different infrastructure stack, and they didn't jump straight into this 21st-century gig economy with immediate mobile payments built into everything, whereas China really did do that. Yeah.
And just on our earlier conversation about China in the 2010s, I should note that simultaneous with this mobile internet transformation was a huge rise in AI-powered surveillance of citizens. You know, facial recognition everywhere. You want to literally enter your gated community, and in China, gated communities are much more common and don't indicate wealth. To just enter your little housing community, you might need to scan your face. And so, you know, at the same time that we're pointing to all the conveniences of this, this also has very much a dark side that is just important to note here. Absolutely. I think it is really important to note that there is, obviously, the surveillance-based approach, which we would never want here in the West. The other side of it is the sheer fluency of convenience, where everywhere you walk, you're already identified, which obviously creates conveniences that are hard to replicate if you don't do that. And that's one of the hard trades, obviously. Yeah, absolutely. A recent Pew study showed that 50% of Americans are more concerned than excited about the impact of AI on daily life, and a recent Reuters poll showed that 71% of Americans fear AI causing permanent job loss. What is the public mood in China versus the US on AI and job loss, actually? Because I think this is one of the most interesting trade-offs that these countries are going to have to make: the more jobs you automate, the more you boost GDP through automation, but also the more civil strife you're dealing with if people don't have other jobs they can go to, unlike in other industrial revolutions. I think it's definitely something on people's minds, but not necessarily related to AI. In the past few years, youth unemployment has been a very serious issue, before the government stopped releasing the statistic. I think at least 20 to 25 percent of youths are basically unemployed in China.
So that's, I think, something that the society has been grappling with and something policymakers are obviously concerned about. Did you say 20 to 25 percent? Of youths, yeah. Wow, seems high. Yeah, it's quite crazy. And because it was so high, they stopped releasing the statistics. So we can only speculate how high it is. I expect it to be around the same range. But if you're talking to, you know, young people in China now who are trying to funnel into STEM fields or AI vocations, there is a huge pool of AI engineers and an increasingly limited number of jobs. So I think this is something young people are definitely facing, and there's real anxiety. But on the other hand, when you're talking to, you know, policymakers and experts in China, the sense I've gotten is that they're strangely mostly positive about AI, and they're kind of slightly blasé about the effects of unemployment. One person I spoke to who basically advises the government talked about an example where they went to do field research in Wuhan, which is a city in China that has a huge penetration of autonomous vehicles. And they talked to some taxi drivers about, hey, how concerned are you about self-driving cars? And the taxi drivers generally told them that they are excited to work fewer hours and excited about the improvement in labor conditions. And I'm like, okay, that is the kind of sentiment that they're trying to use as justification, I think. As for how people are feeling about it, they're probably slightly concerned, but the main thing is to upskill them, and in general, this is a better thing for society. Obviously, the tune could change. I think in China, a lot of the time the pendulum just swings based on how policymakers think. Right now, it seems to me they're pretty positive on AI as more of a productivity booster rather than a drag on labor. But obviously that might change down the road.
And in terms of just everyday people, I think youth unemployment is just something that they're really thinking about and everyone knows and acknowledges. I don't know how much they tie it to AI, but I've heard from friends who work in the AI industry about just how cutthroat it is to get a good job, and the sheer number of PhD graduates who are trying to get the right number of citations in the right journals so as to secure a job at a place like Tencent or Alibaba. May I chime in on that? Yeah, please. Yeah, the picture I have of this is slightly different, or at least I think it's evolved substantially in the last, say, six months to a year. I agree that if you go back maybe a year or two years, both Chinese policy scholars, you know, the people advising the government, and, it would seem, the Chinese government were very blasé about the unemployment concerns around AI. One of the things I do in my job is facilitate dialogue between Chinese AI policy people and American AI policy people. And in one of our first dialogues, we had everyone from the two countries rank a series of risks in terms of how worried are you about this risk: existential risk, military applications of AI, privacy, seven or eight different things. And in that risk ranking, which I think took place in early 2024, the Chinese scholars ranked the unemployment concerns second to last out of, I think, eight risks. It was really low. And when I was thinking about, you know, why is this, at the time my sort of shorthand for it was: China has undergone just incredible economic disruption and transformation in the last 30 years. And it's basically come out OK. You know, in the 1990s, they dismantled a huge portion of their state-owned enterprise system. Millions of people became unemployed because of reforms to the economic system. And they're like, basically, if we grow fast enough, this will all come out in the wash.
And of course, there are, you know, long-term costs to that. But they seem to have this faith that if you can just keep growing at this extremely high rate, then the job stuff will figure itself out. I think that has changed a bit over the last six months to a year. Again, this is partly anecdotal, from speaking to people over there and kind of reading between the lines of some policy documents. But I have heard people saying that this is rising in salience as a concern for the government. And in some ways, the signals they're sending are somewhat conflicting. On the one hand, it's essentially all engines go on applying AI in manufacturing and robotics. So they're pushing the automation as fast as they can at the same time that their concerns about the labor impacts are also rising. You know, we might say that that's not a totally coherent strategy, but government policy is not always 100% coherent. They're still feeling out these two things, but people have been suggesting that essentially this is rising in salience and it might end up affecting AI policy going forward. But it's speculative. That's fascinating, Matt, that the economic disruption from the past, and the fact that they were able to navigate that successfully, means that people see that maybe their job's going to get disrupted, but no big deal: well, we did that once before, we'll retrain. Of course, what's different about AI, especially if you're building toward general intelligence, is that it's unlike any industrial revolution before, because the point is that the AI will be able to do every kind of job, if that's what you're building. So there actually is a secondary benefit of approaching narrow AI systems, this sort of applied, narrow, practical AI, because you're not actually trying to fully replace jobs. You're maybe augmenting more jobs, but you're not having the AI eat every other job.
And then when you kind of zoom out, The metaphor in my mind for this visually is something like the U.S. and China, to the degree they're in a race for AI, they're in a race to take these steroids to boost the kind of muscles of GDP, economic growth, military might. But at the cost of getting kind of internal organ failures, like you're hyping up the attention economy addiction doom scrolling thing, you're hyping up joblessness because people's jobs are getting automated at the cost of boosting the steroids level. And so both countries are going to have to navigate this. But it's interesting that if you do approach more narrow AI systems, you don't end up with as many of those problems because people can keep moving to do other things. I think that's a great metaphor. I've never heard that before, but steroids is about right. On the sort of, you know, we've been through disruption before, we can deal with it. I would say I would differentiate a little bit between the Chinese government, which is thinking in a 100% macro perspective from an individual person. I think if you told an individual Chinese person, your job is going to be automated. They might have something to say about that. I guess the question is, it's similar to the US question for UBI. If let's say we live in a completely automated society, people don't have to work, but is AI going to be able to generate enough revenue to support literally billions of people on universal basic income? Like, is that the math, as far as I've heard in the West, is that that math doesn't work out? Yeah. I mean, does the math math in this situation? I don't know. I think it's mostly, in many cases, it's going to be a political decision. And, you know, I think at a very high level, we might think, OK, China, one party system, communism, like they should be all good with just, you know, massive redistribution. And I think that's possible that it does pan out that way. 
But quite interestingly, you know, Xi Jinping, who's a very dedicated Marxist in terms of ideology, or Leninist in a lot of ways, he personally, from the best we can tell from good reporting on this, is actually quite opposed to redistributive welfare. He thinks it makes people lazy. And, you know, China, despite being nominally a socialist country on its way to communism, has a terrible social safety net. You know, people are largely on their own, much less of a social safety net than in the U.S. And so... Really? Than the U.S.? Yeah, yeah. I mean, they essentially have welfare that is paid to people who cannot work or are disabled. It's extremely low. There's nothing like Obamacare over there. Maybe a lot of people have health insurance in some form, but access to actually good medical care is really not great. And yeah, it's one of these contradictions of modern China. They are simultaneously a communist party, deeply committed to certain aspects of communism, while at the same time being more cutthroat in terms of individual responsibility than even the United States. That's so interesting. It's definitely not, I think, the common view; from the outside, knowing that it's a communist country, you would think the opposite. Well, let's just add one more really important piece of color here that I think speaks to a long-term issue that China's having to face, which is that China's population is aging very rapidly, and they're facing a really steep demographic cliff. Peter Zeihan, the author, has written extensively about this. There's a sort of view of demographic collapse. To just cite some statistics here: China's had three consecutive years of population decline, down 1.4 million since 2023. They're on track to be a super-aged society by 2035, with one retiree for every two earners. And that would be among the first in the world. And so how can you have economic growth if you have this demographic collapse issue?
And this has led a lot of people in the national security world to say that China is not this strong, you know, rising thing. Maybe it looks that way now, but it's actually very fragile, and demographic collapse is one of the reasons. Now, some people look at this and they say, but then AI is the perfect answer to this, because as you are aging out your working population, you now have AI to supplement all of that. And I'm just curious how this is seen in China, because this is one of the core things that has been named as a weakness long term. I think one of the reasons that the Chinese government, and also a lot of the companies, have been in a frenzy about humanoid robots and other kinds of industrial robots is precisely this. If you're thinking in terms of the demographic decline and the shrinking workforce, of course a lot of the gap has to be filled in by automation. And that's in the form of industrial robots. If you're looking at installations, I think China has outstripped the rest of the world over the past few years. But if you're thinking about elderly care, companionship, how do you help the elderly, and how does the growing silver economy continue to expand? You kind of do need AI, not just in terms of AI companions, but also humanoids in some elderly homes, which I think some local governments have already started to push forward in pilot programs. So I think that's how people have been grappling with that. But apart from that, whether AI would really be able to help elderly people through things like brain-machine interfaces, that's still something that people are just starting to research. And I don't think there is a very clear sign of, you know, how close we are to that. Yeah, just building on that, I think, you know, the dynamic you described is right on all the fundamentals. And there's this idea that essentially we have all these problems.
This isn't unique to China or just aging. We have all these problems. They're getting worse. We don't have any solution for them. But is AI going to be this rabbit that we pull out of a hat that's going to resolve them? And I would call that a little bit of magical thinking or at least wishful thinking. It's important to put the aging stuff in the context of their sort of broader population policies. China for decades had the one-child policy, which was the greatest sort of population limiting policy that you can have, even though it was never exactly one child per family. It took them a long time to realize the damage that this was going to have on their economy long term. But they did realize it. When I was living there working as a reporter was when they put an end to the one-child policy. And since about 2015, they've actually been saying to people, actually, have more children. Have more children. Here's subsidies to have children. And it's just not having the effect that they want. And it's a very sticky and intractable problem. And it's not just China. It's across a lot of countries in East Asia as well as other societies that aren't bringing in that many immigrants. Which is another issue for China is that they're not actually bringing in lots of immigrants from all around the world because they value their – yeah. Yeah, absolutely. So is AI going to be the sort of magic wand that gets waved and resolves all these problems? I can see why people in government and society want to believe that and it could end up being true. But probably not something that you should bank on if you're the leader of hundreds of millions of people. So now switching gears yet again, in the US, there's a deep sense that we're in a major AI bubble, the amount of money that's been invested and, you know, the sort of circular deals that are going on between NVIDIA and OpenAI and Oracle. And this is just a big house of cards. 
I'm just curious, is there a view that there's a big bubble in AI in China? From my sense, not yet. Maybe in terms of robotics: I've heard from several, you know, VC people that, hey, there's totally a robotics bubble right now in China in terms of the sheer amount of funding and new companies. If you're looking at some AI-adjacent stuff, like self-driving cars, there was a bit of that previously. But now, if you're thinking about LLMs, a lot of consolidation has happened. And right now in the AI space, I think a lot of the funding has dried up for frontier model training, and most of the funding has gone into AI applications. So I think in LLMs, or frontier AI stuff, there isn't really a bubble in China. Yeah, you know, to have a bubble, you need to have huge amounts of money flowing into something and inflating the valuations. And the very ironic, or difficult to grasp, thing in China today is that despite the headlines, despite how well a lot of leading Chinese models are doing when you compare them on performance, the Chinese AI ecosystem is actually very cash-strapped. They're very short of funding. That's one of the biggest obstacles, especially for startups, but also for big companies. And there are a lot of reasons behind that. I'd say, you know, the venture capital community in China is very new. It started around 2010, so it's only 15 years old. And around the year 2022, that venture capital industry basically collapsed due to a bunch of things: COVID, the Chinese tech crackdown of that period, when they were sort of beating up all their information tech companies, and just the fact that a lot of the first-wave VC investments didn't pay off. So when you look at the actual total amount of venture capital that's being deployed in China, it's been going down every year since 2021.
And even in AI, which is almost hard to wrap our heads around, the venture capital being deployed is actually going down in China. Now, there are companies that can get around this. Essentially, DeepSeek started as a quantitative trading firm, so they can sort of print their own money and don't have to take on as much venture capital. And some of the big companies, Tencent, Alibaba, have huge profit-making arms that they can funnel money from. And so it's not to say that everybody is broke, but the investment is low. Then people might say, well, what about the government? Isn't the government just flooding them with resources? The government is putting a substantial amount of money into this. But the government is actually also much more cash-strapped today than it has been at any point in the last 20-plus years. This is in large part due to the collapse of the real estate bubble in China. The one real bubble over there has led to huge shortfalls in local government money, which means the central government has to give money to local governments. It's, you know, a complex system, but I'd say the shorthand is just: the U.S. seems to have money flooding into it from a bunch of different directions, while in China it's very cash-constrained. Let me just double-tap on what Selena said about robotics. Robotics is one area where there probably is a bubble. You have a bunch of these startups that, you know, shot to huge valuations and are trying to list very quickly. And they might have good technology, but it's basically demonstration technology at this point. It's not actually being used to make money in factories. And I think many people would say those companies are due for a correction. So we might have our LLM bubble burst and their robotics bubble burst. And then, you know, where do we go from there?
And actually, just to add one more thing, I think instead of hearing bubble, the word I hear the most in China the past few years is involution, which in Chinese is neijuan, which essentially just means excessive competition that's self-defeating because there's just ever-diminishing returns no matter how much more effort you put in. And that's been something that has spread from electric vehicles to like AI chatbots to solar panels to everything. Essentially, all these companies grind on like ever slimming profit margins and don't really see a way to get their profit back. And there's kind of no way out because of the list of reasons Matt has listed. You know, it's hard to exit. It's hard to IPO. They want to go overseas, but there's just so much competition and there's some pushback in other Western countries. So I think that's the phenomenon that's being seen in China right now, like involution. And how does that match with this sort of view from national security people in the West that China is deliberately making these unbelievably cheap products to undercut all the Western makers of solar panels and electric cars and robots and things like that. And this is part of some kind of diabolical grand strategy to, I'm not saying one way or the other, I'm reporting out things that I hear when I'm around those kinds of people. How do you sort of mix those two pictures together? I think essentially both things are true. Like the involution, which basically it means price wars. It means there's way too many companies that have flooded into the new hot sector and they're forced to compete on price. And they essentially sell their products for less than it costs to make them. And it leads to long-term consequences. And that happened in solar panels when I was living there in the 2010s. But it's one of those things where you can have a sort of a price war, a collapse of the industry, and then what emerges at the end is actually still a quite strong industry. 
That's what happened in solar panels. The government, I think, at a very high level does have a strategy of essentially if you undercut international markets on price, you can dominate the market and then you can hold it permanently. It's what's called like dumping in international trade law. You sell something for cheaper, you destroy your competitors. And then, you know, yeah, some might say that was what companies like Uber might have done to taxis domestically. So it's both like a self-destructive practice that bankrupts tons of companies in China. And it also might be something that the government is okay with on some level. They're currently having an anti-involution campaign sort of policy-wise. They think that this is at this point more destructive than helpful. So they're trying to limit the damage of this, but it's a complicated system. One of the main things that we often talk about in this podcast is how do we balance the risk of building AI with the risk of not building AI, aka the risk of building AI is the catastrophes and dystopias that emerge as you scale to more and more powerful AI systems that either through misuse or loss of control or biology risks or flooding deepfakes, the more you sort of progress AI, the more risks there are. And at the other hand, the risk of not building AI is the more you don't build AI, the more you get out-competed by those who do. And so the thing we have to do is straddle this narrow path between the risk of building AI and the risk of not building AI. And all the way at the far side of that is the risk of, you know, these really extreme existential or catastrophic scenarios, which it seems like both the U.S. and China would want to prevent. And yet open sourcing AI has lots of risks associated with it, and China is pursuing that. And one of the sort of key things that comes up in this conversation all the time is, as unlikely as it seems that the U.S. 
and China would ever do something like what the Nuclear Non-Proliferation Treaty was for nuclear arms, you know, would something like that, negotiating some kind of agreement, ever be possible between the U.S. and China, given shared views of the risks? I think it's very possible. It's just that there's a long list of stuff that, you know, the two presidents have to talk about. And obviously it doesn't have to happen in this administration. A lot can change with the technology. I think there is general consensus among experts and policymakers on both sides when they talk in some of these track two dialogues, which are basically non-government to non-government, as opposed to track one, which is government to government. In these track two dialogues, people generally can agree on a lot of things. These can include very basic areas of technical research, like interpretability: how do you understand what's actually going on in an AI model, like under the hood? There are also things like general safety, guardrails, evaluations, monitoring, things like that. And then some other stuff that was agreed on during Xi and Biden's track one dialogue, like, you know, keeping a human in the loop when you're talking about nuclear weapons. So I think there's a lot of stuff that's possible. I think it's more a matter of mutual trust. And that's something that's quite lacking today in our political climate. Trying to say we need to cooperate with China on anything seems quite poisonous. But I think if we can expand our imagination a bit and really just grapple with the sheer necessity and the gravity of the situation, there's a lot that can be done that's, you know, low risk and an easy lift. And I would just say it can start from people to people instead of just government to government. It can be from companies to companies, experts to experts, and stuff like that.
And just to elaborate this in a visceral sense for listeners, did you attend the International Dialogues on AI Safety? Yeah, I did. So could you just take us inside the room? As an observer, but yeah. Yeah, just take us inside the room. So for listeners who don't know, there are these dialogues where, just like during the Cold War, when American nuclear scientists met with Russian nuclear scientists, there actually was the invention of something called permissive action links, which was a way of making nuclear weapons not fire in some accidental way. There was a control system. And there was a history of collaboration like that. Could you just take us inside the room, Selena? You know, what does it feel like? Do you hear Chinese AI safety researchers working with American researchers? Are they agreeing to specific measures? Yeah, it's a great dialogue. This year was the first time I actually was in the room for it, in Shanghai. So this happened on the sidelines of the World AI Conference, when you had people like Nobel Prize winner Geoffrey Hinton visit China and participate in both this dialogue and the conference. And then you also had other people from the Chinese side, like Andrew Yao, Ya-Qin Zhang, and people like that. So it's a group of very leading AI scientists, and they get together to basically talk about what are the risks and red lines that they most agree upon. And they issue a consensus statement. So for anyone who's curious, you can read the Shanghai consensus afterwards. But essentially, whenever you're in any of the sessions, there was always a lot of convergence. People always agreed on fundamental things like, you know, loss of control. All of these are very well known. But I think the real issue today is that you need the companies who are building the technology to agree to these things.
And right now, the race dynamic, profit incentives, all of that is just not converging in a way that allows them to take these risks very seriously. And even if you have the best scientists agree on these things, the current landscape is basically that the companies are the ones who are building the technology. And it's very different from, you know, the Asilomar conference, when that was very much held in the hands of universities and those labs. So Matt, with that said, when you ask what it would take at the political level, if it's not going to happen at the researcher level, what do you see as possible here? I think it's helpful to think of a spectrum of worlds or outcomes. On one end is the most binding regulatory approach, where the U.S. and China agree at a very high level on a very top-down system where we're both not going to build dangerous superintelligence. And then that international agreement gets filtered down into the two systems. We regulate domestically and everything is safe. On the other end is just total unbridled competition, in which we think the other side is racing as fast as they can, they don't have any guardrails, and so we need to race as fast as we can and sacrifice the guardrails in the interest of winning. And I think, you know, the first one, of an international agreement that trickles down, is at this point quite unrealistic, at least in the short term. In the halls of power, there is such, such deep distrust between the countries. That might not apply to the president himself or, you know, individuals. But when you look at the entire national security apparatus in the two countries, they tend to see each other as fundamentally in a rivalry. Any promise would just be bad faith. You're just saying that to slow me down, and you're still going to keep building it in a black project somewhere. And so I've got to keep racing. Exactly.
And given that, I think my hypothesis is something in the middle, and what it fundamentally rests on is the idea that the most important thing is going to be how the U.S. regulates AI domestically, for itself and for its own reasons, and how China regulates AI domestically, for itself and for its own reasons. China actually has a lot more regulations on AI. There are a lot more compliance requirements, mostly centered around content control, but now expanding beyond that. And essentially, I think both countries are going to be moving in parallel here. They're both going to be advancing the technology, and they're both going to be seeing new risks come up. My thesis here is that we have safety in parallel, where both countries are moving forward and regulating because the risks are not acceptable to them, and there can be this sort of light-touch coordination, or maybe just communication, between the two sides. We're not going to have any binding agreement; I'm not going to hold off on something in the United States because I 100% believe you're doing the same thing over there. But, you know, we have a best practice over here, we have something that we've learned, like you gave the example of permissive action links. We think this is a method by which you can better control AI models. We're going to do it domestically, and maybe we're going to open source it, or we're going to share it; we're going to have a conversation with our Chinese counterparts about it. It's not relying on trusting one another, but it's building touch points and sharing information about how to better control the technology as it advances.
And then maybe if we get to a point where both countries have developed really powerful AI systems, and they've also in some sense learned how to regulate them domestically, or at least they're trying to, then maybe we're already in pretty similar places and we can choose to have an international binding treaty around this. There's also the parallel of getting to the point where we had so many nuclear weapons pointed at each other that the risk was so enormous it was existential for both parties. And even Dario Amodei from Anthropic has said, don't worry about DeepSeek, because we still have more compute and we're going to do the recursive self-improvement. When you signal that publicly, you're telling the other side: oh, if you're going to take that risk, then I'm going to take that risk. But that collective risk can be existential for both parties. And I heard you also say we need, basically, red phones, like the communication channels we had between the two sides, and also red lines: how do we agree on what we're not willing to do? You can imagine there being, at the very least, some agreement on not building superintelligence that we can't control, or not passing the line of recursive self-improvement. Or another one I've heard is not shifting to what's called neuralese. Right now the models are reasoning in their own chain of thought, which is expressed in language, so the models are learning from their own thinking in words. But what happens when you move from thinking to yourself in words to thinking to yourself in raw neural activations? When you have that, you're in some new kind of danger. So anyway, this has been such a fantastic conversation. I'm so grateful to both of you.
And I think this has hopefully given listeners both a lot of clarity around how these countries are pursuing this technology and where they differ, and also a sense of the possibility of doing this in a slightly safer way than we currently are. Anything else you want to share before we close? This has been great. I've loved talking through this stuff with both of you. And I'd encourage people to try to read some of the good work that's being put out there about what's happening in China on AI. I'm not expecting anybody, or everybody, to become an expert on this topic. But the thing to know is that the Chinese are much more aware of what's happening in the U.S. than we are aware of what's happening in China. They're much more interested in learning from what's happening in the United States than the U.S. is in learning from China. We have this mentality that theirs is an authoritarian system, therefore we can't learn anything from the way they regulate technology. You know, they're a rival; we can't learn from them. China doesn't see it that way. They say: if there's a good idea in the United States, let's adopt it and adapt it to our own ends. And that's a huge advantage for them, being willing to learn from the United States. If we can break down some of those mental walls, actually take seriously what's happening over there, and see if there are lessons for the United States, I think that would be a huge boost. I 100% agree. And I just think, if there is more mutual understanding, if people visit China when they can, or read some of the interesting research and pieces that are coming out, including Matt's Substack, a gentle plug here, I think that makes for a better world. So if you're listening to this and thinking, oh, what can I do? Understanding is the first part. Matt and Selina, thank you so much for coming on Your Undivided Attention.
This has been one of my favorite conversations. Thanks so much. This has been really great. Thank you for the great questions and for having us. Thank you to the team for making this podcast possible. You can find show notes, transcripts, and much more at humanetech.com. And if you like the podcast, we'd be grateful if you could rate it on Apple Podcasts, because it helps other people find the show. And if you made it all the way here, let me give one more thank-you to you for giving us your undivided attention.