Who is really shaping the future of AI?
51 min
Dec 19, 2025
Summary
This episode examines who is shaping the future of AI by exploring competing visions between the U.S. and China. Scholar Alvin Wang Graylin argues the narrative of an AI arms race is partly hype used to justify corporate dominance, while NPR's John Ruwitch reports on how China's open-source approach contrasts with America's proprietary model. OpenAI CEO Sam Altman discusses safety concerns and the path toward artificial general intelligence.
Insights
- The U.S.-China AI competition narrative may be overstated by American tech companies seeking regulatory leniency and government contracts, masking corporate competition for AGI dominance
- China's open-source AI strategy (releasing models publicly) appears more globally beneficial than U.S. closed-source approaches, though it also serves China's soft power and influence-building goals
- Three plausible futures exist: Elysium (extreme inequality), Mad Max (nuclear conflict), or Star Trek (abundance through shared AI benefits), with current policy trajectories favoring the first two
- A CERN-for-AI model—pooling global resources and sharing outputs—is technically feasible but politically unlikely without a crisis forcing U.S.-China cooperation
- Taiwan's dominance in semiconductor manufacturing (90% of advanced chips) makes it the most likely flashpoint for U.S.-China conflict, potentially more destabilizing than ideological AI competition
Trends
- Geopolitical framing of AI as a national security issue driving regulatory capture and government subsidies rather than genuine technological necessity
- China's practical, distributed AI deployment strategy (education, healthcare, manufacturing) versus the U.S. consumer-focused, proprietary model
- Open-source AI models from China gaining adoption in universities and startups globally, challenging the U.S. technological dominance narrative
- Semiconductor supply chain concentration in Taiwan creating strategic vulnerability for both the U.S. and China, driving chip manufacturing diversification
- AI safety concerns (bioterrorism, cybersecurity, loss of control) becoming secondary to geopolitical competition in policy discussions
- Corporate consolidation of AI development around few players (OpenAI, DeepSeek) limiting democratic input on technology governance
- Shift from an AI safety-first approach to a deployment-first learning model, with real-world feedback replacing precautionary development
- Global AI governance initiatives (China's Global AI Governance Plan) positioning developing nations as stakeholders rather than consumers
- Agentic AI systems (autonomous agents) emerging as the next frontier, requiring new safety frameworks and regulatory approaches
- Strategic ambiguity in U.S. policy (Taiwan, chip exports) creating unpredictability in tech competition outcomes
Topics
- Artificial General Intelligence (AGI) development and strategic advantage
- U.S.-China AI competition and geopolitical implications
- Open-source versus proprietary AI models and distribution strategies
- AI safety frameworks and preparedness for advanced systems
- Semiconductor supply chain and Taiwan's strategic importance
- AI governance and international cooperation models (CERN-for-AI concept)
- Agentic AI systems and autonomous agent capabilities
- AI education integration in K-12 curriculum (China's approach)
- Corporate consolidation in AI industry and market dominance
- Regulatory capture and government subsidies for AI development
- AI-driven job displacement and economic inequality
- Bioterrorism and cybersecurity risks from advanced AI
- Soft power and influence-building through technology
- Data privacy and censorship in AI model training
- Strategic ambiguity in U.S. foreign policy on technology
Companies
OpenAI
CEO Sam Altman discusses AGI development, safety frameworks, and company's shift from open to proprietary models
NVIDIA
Mentioned for stock performance and recent Trump administration decision to allow chip sales to China
Intel
Alvin Graylin worked on early GPU chip development in late 1980s-1990s that became foundation for modern AI
IBM
Alvin Graylin helped develop early AI chips for IBM in late 1980s-early 1990s
DeepSeek
Chinese AI company released R1 model in January 2025, circumventing chip embargo and triggering U.S. national securit...
TSMC
Taiwan Semiconductor Manufacturing Company produces 90% of world's advanced chips needed for AI, creating geopolitica...
Apple
Referenced as example of TSMC customer; chips used in iPhones and other consumer products
People
Alvin Wang Graylin
Scholar, engineer, entrepreneur with 35 years in AI/tech; advocates for CERN-for-AI model and warns of three possible...
Sam Altman
OpenAI CEO discusses AGI development, safety concerns, agentic systems, and company's responsibility in AI distribution
John Ruwitch
NPR tech correspondent based in Silicon Valley; covered tech in China and Taiwan; reports on U.S.-China AI competitio...
Manoush Zomorodi
Host of TED Radio Hour; frames episode around question of who shapes AI's future and explores competing narratives
Chris Anderson
TED curator; conducts on-stage interview with Sam Altman about AI risks, AGI, and agentic systems
Yoshua Bengio
AI godfather advocating for CERN-for-AI model and global cooperation on AI development
Demis Hassabis
AI godfather advocating for CERN-for-AI model and global cooperation on AI development
Geoffrey Hinton
AI godfather advocating for CERN-for-AI model and global cooperation on AI development
Peter Thiel
Venture capitalist whose monopoly-focused philosophy contrasts with Graylin's collaborative approach to AI development
Xi Jinping
Chinese President; released Global AI Governance Initiative promoting open-source, collaborative AI approach
Quotes
"The direction that we take it and the policies we put behind it is going to affect the trajectory of our civilization for the next decades, if not centuries."
Alvin Wang Graylin•Early in episode
"We need to change our mindset and to agree that the world is not zero-sum and that actually we can all win together."
Alvin Wang Graylin•Mid-episode
"The way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low."
Sam Altman•TED stage interview
"It's almost the equivalent of folks who are working on the Manhattan Project to develop the first nuclear bomb, just handing blueprints over."
John Ruwitch•Discussing chip sales to China
"In China, society's locked down, but open source software, whereas in the U.S. it's the other way around. They're locking down their tech, but it's sort of open source humans."
John Ruwitch•Late in episode
Full Transcript
Hey, it's Manoush Zomorodi here, host of NPR's TED Radio Hour. Can you believe that 2025 is almost over? There were times when it felt like this year would never end, right? 2025 was a seriously tough year for NPR and local stations. Despite the loss of federal funding for public media, despite attacks on the free press, we're still here. NPR won't shy away from exercising its critical right to editorial independence guaranteed by the First Amendment. And with your support, NPR will keep bringing you the news without fear or favor. Here at TED Radio Hour, that includes bringing you science, technology, human behavior, neuroscience, and nature, topics that help you navigate and find meaning in a world that is changing so fast. If you are already an NPR Plus supporter, thank you so much. We are so grateful. If not, please join the community of public radio supporters right now before the end of the year at plus.npr.org. Signing up means you are directly supporting public media and you also get a bunch of perks like bonus episodes from some of NPR's podcasts. End your year on a high note. Invest in a public service that matters to you. Please visit plus.npr.org today and thank you. This is the TED Radio Hour. Each week, groundbreaking TED Talks. Our job now is to dream big. Delivered at TED conferences. To bring about the future we want to see. Around the world. To understand who we are. From those talks, we bring you speakers and ideas that will surprise you. You just don't know what you're going to find. Challenge you. We truly have to ask ourselves, like, why is it noteworthy? And even change you. I literally feel like I'm a different person. Yes. Do you feel that way? Ideas worth spreading. From TED and NPR. I'm Manoush Zomorodi. In 2025, it felt like so many conversations at work, at home, online, eventually turned to talking about AI. I know. ChatGPT is transforming the way we live and we work. 
There's a growing debate about whether AI should have a place in the classroom. AI is the key to reshaping American politics. And the questions that we all still have. Is this a bubble inflated by hype and speculative investments? OpenAI has a valuation of $500 billion. Nvidia shares hitting another record high. Or are we living through the opening seconds of a true transformation of our society? Are we heading towards AI with human-level intelligence that could change how we work, live and learn? I think there's a little bit of truth to both. This is Alvin Wang Graylin. I've been studying and building solutions in AI and immersive tech, cybersecurity, semiconductors for 35 years. Alvin is a scholar, entrepreneur, and engineer. His book, Our Next Reality, examines the good and bad potential outcomes of what he says will eventually be a technological revolution. Yeah, well, I mean, we're definitely at a very important fork in the road. You know, the last 10 years has made AI an amazing technology and it's about to really mature and get to the next phase. And like many experts right now, Alvin says we can't talk about where AI is going without talking about the two big players in the field, the U.S. and China, and how their goals and values differ. Because in the short term, we may not see politics affect our tech. In the sense of the technology is going to take a few years for it to really mature and to then filter into our lives and economy. But over the next few years, global affairs will be crucial to getting the world we want with AI. Because the direction that we take it and the policies we put behind it is going to affect the trajectory of our civilization for the next decades, if not centuries. So today on the show, making sense of this moment in tech, the various directions that AI might go, how visions of artificial general intelligence, AI that's as smart as us, are shaping political strategies, and whether the AI arms race between China and the U.S. 
is about global dominance or more just a narrative being pushed by American tech entrepreneurs. We'll hear from a reporter who's covered the tech worlds of both Beijing and Silicon Valley, and from OpenAI's Sam Altman in conversation with TED's Chris Anderson. But first, we return to Alvin Wang Graylin, who has an unusual perspective on AI. Yeah, my background having worked in China, in the U.S., in Taiwan, really gives a perspective that might be a little bit different than what you normally hear. That perspective has roots in his family history. Alvin was born in China during the Cultural Revolution. But his grandmother was American, a journalist. My grandmother was a reporter for the New York Tribune back during the Sino-Japanese War, and she went to China in 1937 to report on what was happening and had to escape China in 1941 when the Japanese bombed Pearl Harbor. And she left my mom in China. And that's kind of how I came to be. When Alvin was nine, the family moved to the U.S. They became naturalized a few years later. So we are definitely proud Americans. And in fact, my brother was the first Chinese-American to work on a U.S. nuclear submarine. Meanwhile, Alvin went on to MIT. He got one master's degree in computer science, another in business, and ended up being a pioneer in developing the tech needed for AI today. I helped IBM and Intel develop their early chips back in the late 80s and early 90s. And in fact, I worked on a project for Intel back in 1993. And it was what became the technology that all GPUs are based on today. The next year, he helped Intel open its first office in China. And, you know, at that point, there was really no consumer PC market. And I essentially told the GM of China, I said, hey, I think we should try to build this market. And here's a 20-page plan of how we can do it. And he said, OK, well, go make that happen. And I was torn to— Bring PCs to China. Yeah. And at that time, PCs were $2,000 or $3,000. 
And the average income in China was around $200 or $300 per year. So for that market to exist, we had to make a lot of changes. Since then, Alvin has gone back and forth, working between Silicon Valley and China. But mostly today, he wants to talk about the impact of all this new tech and his ideas for getting heads of state aligned on what is a global situation. Because the way Alvin Graylin sees it, there are three possible paths for the future of AI. And being a total sci-fi nerd, Alvin has a movie reference for each one of these futures. First up, the dystopian thriller Elysium, starring Matt Damon. Where the year is 2154, most humans live on an overpopulated, disease-ridden planet Earth, while the uber-wealthy live on a luxury space station called Elysium. Undocumented ships are approaching Elysium airspace. Yeah, so the first future that is the most likely is you essentially have a few trillionaires and then you have the have-nots. And it is going to be ultra-stratified even more than we are today. So the Elysium version is this sort of future where there's extreme social and economic inequality and the wealthy techies sort of rule everything and the rest are kind of left behind. Yeah, yeah, pretty much. And really with no upward mobility, right? I think in the past, what made America great was the American dream that if you work hard, good things can happen. And the fact that right now our governments are essentially captured by the oligarchs and by the industry leaders, it is going to just make that even go more extreme as these technologies become smarter and more capable. The second path is even worse. It's what Alvin calls the Mad Max future, recalling the 1979 Australian dystopian action film with Mel Gibson. In the not-too-distant future, there will be no civilization. There will be no heroes. There will only be madmen. We right now have this AI race, technology race, between U.S. and China to get to AGI. 
and that AI race gets to become an AI war and then somehow it escalates into a kinetic war over maybe something related to Taiwan and then goes out of control and becomes a nuclear war because the ultimate destination of that future is escalation into a nuclear war and a post-apocalypse and then taking essentially centuries to maybe regain the modernity that we have today. What is path number three? It's got to be something better than those two, please. Yes. The third future is actually what I call the Star Trek future. And it is something that essentially models after what happened with Star Trek. So in the Star Trek lore, essentially the Vulcans came to Earth. They were a rational, advanced species with advanced technologies. Live long and prosper. And they gave us these technologies, allow us to have a world of abundance. And then we started to discover the stars and pursue a self-actualization versus greed and hoarding. I suspect our future is there waiting for us. We became a world of plenty and the technology enabled us. And so instead of relying on the kindness of an alien species, we actually now are creating AI, which is a rational, advanced technology that could bring us solutions to our problems. So it's actually a more liberating perspective because we get to control when this technology comes. We get to control how we use it. And so where are we now when it comes to which of these scenarios is most likely? Unfortunately right now, we're not necessarily looking at it with the right perspective that this should be a public good that should be shared. It's seen as a weapon of domination or a tool for self gain or wealth. And those perspectives are going to lead us down a very dark path. So from your perspective, they're not at all thinking about the public good. All these companies are just racing to beat each other to develop artificial general intelligence, AGI. These are machines with human level intelligence. 
Well, I think it's less about the AIs becoming sentient and doing things for itself. It's really more this race to AI. There's something called the strategic decisive advantage. It's an idea that whoever gets to AGI first will then use that power to self-improve and create what's called ASI, so artificial superintelligence. And when you get to ASI, then you have the power to dominate the world. And that's unfortunately what right now a lot of people in D.C. thinks is winning, is that they think it's getting to AGI first and then using it to, in some people's words, send China back to the Stone Age. And that's very scary when that is the intent, not just to make yourself progress, but then to hold other people back. And what I asked these people when they said this is, like, so do you think they would just let you do that? And they go, well, no, they'd probably fight back. And then I said, well, what happens when they fight back? Well, then we'll probably fight back some more. And then I said, what is that going to lead to? And he says, well, it means war, but, you know, war is inevitable. And when I hear things like this from people who are very influential in the decision-making process in America, it feels a little bit ignorant and it feels very scary. And, you know, I was in another conversation recently where there was senior people that said, we can't let, you know, China cure cancer because that means we've lost. And, you know, to have that type of a mentality that, you know, curing cancer is something that only we can do, only America can do, and that anybody else doing good things for the world is seen as a bad thing, that is a very, I think, misguided view of the world. In a minute, the plan that Alvin and other scientists are mapping out to veer us away from these dark paths, and why he believes the AI arms race between China and the U.S. is a huge distraction. 
If you listen to people like Peter Thiel, he says, you know, the only way to be successful is to create a monopoly and then to control everything. But the reality is that we need to change our mindset and to agree that the world is not zero-sum and that actually we can all win together. On the show today, who's really shaping the future of AI? I'm Manoush Zomorodi, and you're listening to the TED Radio Hour from NPR. Stay with us. It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. Today on the show, we're asking, who is really shaping the future of AI? We were just talking to scholar, engineer, and entrepreneur Alvin Graylin. Alvin is an American technologist with roots in China. And for the last 35 years, he has been working in both places. And he believes it's a convenient strategy for American AI companies to say they need more data centers, more chips, and more money to beat China. A lot of companies are using that same refrain because it works, right? Because it's what the military industrial complex have been using for nearly a century in terms of creating fear within the government and then helping then get more lenient regulations, more government support, more contracts, and more support from the politicians. And, you know, it's framed as a national competition when the reality is I think most of them realize that it is a competition right now between companies. And all of them are trying to figure out how can I create a moat for myself? How can I create an advantage where I get there first? I want to be the guy who's known for who invented AGI. And I want the fame and the glory and the monetary rewards associated with this. And in multiple interviews, you've heard people like Sam Altman say, hey, this is my responsibility to decide how this is going to be distributed. And to hear him say things like that, I feel who gave him that right to be the person who decides this? 
Well, I guess the argument is that it's a national security issue that starts by working with China to a certain extent, but always staying ahead. The U.S. model, and if you look at the America's AI Action Plan, which is what came out of the White House kind of middle of this year, it said we are going to race our way to the most advanced AI. and saying that by doing whatever we can to build the biggest data centers and powering them with whatever means possible. And then we are going to take that AI and spread it to our allies. And anybody who is going to be our friends has to use our stack, our chips, and our models. And that's how we win, by creating a dependency where everybody in the world depends on us. And it's a very one-sided perspective in terms of what winning means and how to win. But if you look at how China is looking at the AI space, they have a very different approach. Oh, interesting. Tell me. There's actually something called the AI Plus Plan. And the AI Plus plan is a 10-point plan to say, hey, how can we get AI into medicine? How can we get AI into manufacturing? How do we get AI into education? How do we get AI into agriculture, et cetera, et cetera? It's essentially saying, how do we deploy this technology so that we can make each of them more efficient and seeing the benefits into our population and our economy where it's really looking for spreading the benefits of the technology to the larger community. But I feel like the story that we're hearing is, oh, the U.S. has to win at AI because this is not just a competition of a marketplace, but it's a competition between two ideas of what the world should be, either democratic or authoritarian. Yeah. From a narrative perspective, it sounds really good. And, you know, America, democracy, liberty, you want us to win. 
But if you look at what's being, I guess, propagated in terms of what the Chinese model is, three days after the U.S. AI Action Plan, the Chinese published something called the Global AI Governance Plan. And their idea was that, hey, why don't we work together with the rest of the world to build an advanced AI that then incorporates everybody's data? And so we now have a more capable AI that we share to the world and we make it open source rather than making it a tool for any one country or any one company to be successful. So if you look right now, the leading five open source models in the world are all made by Chinese AI labs, right? You would think if China wants to dominate the world, they would be saying, hey, I want to create these big data centers and I'm going to make the big model and I'm going to keep it to myself so that I can beat America and, you know, beat democracy. And that's not what they're saying. They're saying, here's what I invented. And what you'll find right now is that universities and startups are using Chinese models for their research because it's open source. It's open license. You don't have to credit them for anything. You can use it commercially. You can use it for research. You can retrain on it. I mean, that to me doesn't sound like it's an evil plan to do bad things. It sounds like what America probably should also be doing as well. I mean, we have to acknowledge that obviously the United States is not a perfect nation, hardly. But in China, there is regular surveillance of citizens. There have been questionable human rights approaches to certain minority groups. They tightly control information, speech, sometimes behavior. So why should we think that China is like, oh, we're going open source, and we should all benefit from this? Yeah, I mean, I am not saying that everything that the Chinese government does is perfect. 
And they have definitely a lot of their own faults and issues, and they're dealing with it with their own methods. But in terms of AI, and I've been studying this for 35 years, their approach from a technical perspective and from an innovation perspective actually makes a lot of sense. And the versions of these models that are available to their local population, you know, definitely has some level of internal censorship, you know, in terms of certain topics or certain facts. But the things that they are posting onto the downloadable sites for other people to download, those are unconstrained. And they're just the raw models. And you can then fine tune it to whatever, you know, methods or perspectives that you have. So from that perspective, I feel like, you know, even though you may not agree with everything in their politics, the way they're approaching AI is actually making sense from a global good perspective. But by China going open source with its AI models, does it just look like they're taking the higher road with their tech, but actually this is a way of expanding their influence with all the other countries that want to catch up with AI, of integrating other research into their own models? Well, I think what you're saying actually is probably part of the intention of the government in terms of saying, how can we increase our soft power around the world, right? Just like America has used Hollywood over the last hundred years as a way of instilling American values and kind of ideals to the world. So with all this in mind, how do we get on a better path? How do we avoid the Elysium and Mad Max futures that you described were possible earlier and get to something more fruitful? It doesn't even have to be utopian. Let's survive. Yeah. If you listen to people like Peter Thiel, he says, you know, the only way to be successful is to create a monopoly and then to control everything. 
But the reality is that we need to change our mindset to agree that the world is not zero-sum and that actually we can all win together. We get further, we use less resources, and we decide that the benefits that come from AI should be shared and is not seen as a weapon or as a tool for self-interest. Yeah, so take us through, though. Help us envision what it could even look like. So the first part is that we essentially need to pool our resources. Instead of building trillions of dollars of new data centers, we can actually take the trillions of dollars we've already put into data centers and just say, hey, let's link them together. Let's create federated networks that we can then train large models together and create what I call a CERN for AI. Actually, it's not a new idea. It's been talked about for probably five, six years now of doing what we did for particle physics or what we did for fusion reactors with ITER or space science with the International Space Station. To have essentially groups of countries or the world come together and invest in a large capital expenditure, but then to take whatever comes out of it to share with the world. Is there a precedent for this? Like you mentioned the space station and CERN, which was, it's the European Organization for Nuclear Research. This is, you know, a global coming together of scientists in a way that everyone agreed. But is that what you're looking at? Yeah. If you look at the World Wide Web, actually the concept of the World Wide Web came from CERN. And we also did this from another natural disaster that happened to us, which was the ozone layer degradation. And over the 80s and 90s, we essentially came together with the Montreal Protocol and said, hey, let's all decide to stop making CFCs. And now the ozone layer has started to come back and become much more healthy and normal for us. So we can work together. Countries can work together when we realize that it is for our collective good. 
And to make this CERN for AI possible, we also need to create something I call a global AI data pool or data cloud. Essentially taking everybody's language, their culture, their science, their history, and put it into one source so that we can train this AI to understand the world wholly, not to understand one single perspective. Is there a conversation going on behind the scenes amongst scientists about how to actually do this? Because the creators of these companies are all about maximizing profits, and presumably they don't want to be a part of it. Well, so actually, there are these conversations, and it's people like Yoshua Bengio or Demis Hassabis or Geoffrey Hinton, who are some of the godfathers of AI who have seen this technology grow and have seen and understand the capabilities of these technologies. And they're advocating for a CERN-for-AI type model. But unfortunately, a lot of the politicians aren't listening, right? I mean, we need to get past this politicization of this technology and we need to see AI and the benefits that it provides as a public good. I mean, it sounds like idealistic utopia, but we are actually at a point where it's possible. Before, we lived in a world of scarcity, and hoarding and fighting made a difference. But if we are at the one-yard line of getting to a place where we can have a technology that brings us abundance, we don't need to fight anymore, and we need to realize that and let that happen. That was author and entrepreneur Alvin Graylin. His book is called Our Next Reality, How the AI-powered Metaverse Will Reshape the World. You can watch his conversation with me on the TED stage at TED.com. So Alvin Wang Graylin thinks the U.S. and China should be cooperating on AI a lot more than they are. What is the likelihood of that? And is AI competition really just an American thing? I decided to follow up on my conversation with Alvin by talking to NPR's John Ruwitch. John, you were based in China for years. 
You covered politics and tech there, among many things. And now you're back in the U.S. covering politics and tech from Silicon Valley. So give us your perspective on this AI arms race. Is it mostly a story that American tech CEOs are telling to lock down their tech, to get more investment, avoid regulation, as Alvin explains it? Yeah. From my perspective, like, yes, to a certain extent. But right. Yes, that's capitalism. The corporate competition is real. Absolutely. But there is definitely more to this arms race mentality, I think, than just business. Um, I was, uh, a few months ago, I had a conversation with a venture capitalist and, um, you know, he was telling me that there had been this debate and discussion around the safety of AI and that companies in the United States had potentially discussed banding together, maybe sort of hitting pause on sort of the breakneck speed at which everybody was racing towards the AGI, if that's where they were headed, to sort of set some rules. But then DeepSeek R1 happened. That was the release in January of a Chinese AI model. And it was kind of a Sputnik moment. Like DeepSeek did things that people did not think China was capable of doing because of the restrictions on high-end microchips going to China. They did things, but they kind of circumvented the embargo on chips to put forward an AI model, an AI product that was pretty darn good and pretty close to what American companies were doing. So the discussion shifted to national security. Everybody was like, okay, China's for real, and we need to go at it again. So tell us more about China's approach to AI. Because I think in the U.S., AI is mostly making people kind of nervous for all sorts of reasons, their jobs, the amount of energy it uses, the slop it can produce. But what have you learned about how regular people feel about AI there? Yeah. So in China, the government is obviously very focused and strategic and it's thinking about AI. 
And so I went on a trip to China in October and did some reporting around AI in society, basically. And one of the places we went was an elementary school in Beijing. This elementary school, and in fact, at all K through 12 schools in Beijing starting this fall semester, they started to fold education about AI and education on sort of how AI is created and how you can use it into the computer literacy programs that they run. So we went and we talked to a couple of 10-year-olds, fourth graders, talked to their parents, talked to their teacher about what this AI education was like. And as you can imagine, like it's not in-depth, but they are being consciously and proactively familiarized with artificial intelligence, with how to use it. They're using it to do drawings, to do images, for instance. The parents were very clear-eyed about the project, which I thought was interesting. They basically were all sort of all in on it because they see it as, you know, it is this technology that's coming down the pike. The government backs it. It's coming for us. And the only way for our kids to have a future is to start to understand it. So we have the Trump administration with its American AI Action Plan, focusing on maintaining America's dominance, countering China. And then a couple of years ago, President Xi released his global AI governance initiative. And it's completely different. It's an open source, sharing AI models, data, security, bridging the digital divide. What's your sense about how politics and interests are fueling these different approaches? And are they sort of veering away from each other or coming to a head? Yeah. Context is sort of key here for China. There's been this program, it's sort of part of the party's DNA now to build influence and clout around the world, make the world safe for, you know, the Chinese Communist Party and the Chinese way of politics. And I think the global AI governance initiative at least has two sides to it. 
One is the altruistic part, right? Let's work on this together. Let's do it together. It's also sort of a textbook example of how China operates, how China builds power and helps itself in the world. Just sort of like, look at us. We're available. What's the big fuss about? That kind of thing? Exactly, right. Look, we're being big about this. We're trying to fix the problem. We're trying to get everybody together. Let's all work together. And it's easier for an underdog to say that than somebody who's got the top AI companies, right? And as an underdog, why not, you know, try to get as much cooperation and collaboration as you can, because that can advance your own technology, right? It's very practical for China, which doesn't have the data centers or the compute firepower that the U.S. has, right? It can help accelerate development even. It's also practical for what we were just talking about, China, in terms of garnering friends and allies and building influence. In a minute, more with John Ruwitch on what all this AI competition really means for the rest of us. On the show today, who is shaping the future of AI? I'm Manoush Zomorodi, and you're listening to the TED Radio Hour from NPR. We'll be right back. It's the TED Radio Hour from NPR. I'm Manoush Zomorodi. On the show today, we are asking who is really shaping the future of AI? We were just talking to NPR tech correspondent John Ruwitch about the latest twists in U.S.-China tech relations, including the Trump administration's recent move to let NVIDIA sell advanced AI chips to China. So what does a move like this actually signal? A thaw, a strategic gesture, or something more complicated? I mean, the people I've talked to, from a technical perspective, are scratching their heads, like chip experts or China experts. They're saying, well, you know, this is perhaps the one advantage that we have over China. Like China has more and cheaper electricity than we have, which is critical for data centers. 
China, I read a stat the other day that said over 40% of Chinese college graduates are in STEM. That's twice the rate in the United States, right? So like they've got the people, they've got the electricity, they've got the political will for sure. They can't get their hands on the chips, and now they're sending these chips over. One guy I spoke with said it's almost the equivalent of folks who were working on the Manhattan Project to develop the first nuclear bomb just handing the blueprints over. You did just get back from Taiwan, where the factories are that produce 90% of the world's semiconductor chips, the ones that are needed to power AI. Can you just tell us about what you saw there? Yeah, I was in China for about two weeks, and on the way out, I stopped in Taiwan and took a train down to a town called Hsinchu, which is on the west coast. It's like 100 miles due east of the Chinese coast. And I went to visit TSMC. There's a science park there where Taiwan Semiconductor Manufacturing Company, TSMC, was established in 1987. In the nearly 40 years since, this company has become the dominant player in contract chip manufacturing. And they're in everything. They're in your iPhone. They're in your car. They're in satellites. They're in F-35 fighter jets, on and on and on. And I talked to the CFO, and, you know, it was really interesting. The politics around microchips, the politics of Taiwan and China, was kind of a black hole. Every time I raised it, he avoided answering it. And he would talk about the customers and what the customers want. And the interesting thing about that is that the customers are being pushed very strongly by political winds. And that's the reason why in 2020, TSMC decided they would set up a chipmaking facility in Arizona. They've got plans to expand their operations in Japan, and Germany also, for the auto industry. And he basically said they're there. 
He didn't say it in so many words, but he said they're outgrowing Taiwan. So I was at a conference a couple of months ago, off the record, where many economists and former government officials were warning that Taiwan is the reason the U.S. and China will inevitably go to war. So based on what you're saying, is that overblown? Because it sounds like chip production is going to be spread well beyond Taiwan. Yeah, that's a good point. Chip production is expanding beyond Taiwan. I will say TSMC's CFO told me they will continue to invest in Taiwan. But I do think it's fair to say that Taiwan is probably the most likely reason that China and the U.S. would have a fight. I mean, the South China Sea might be another possibility if some sort of accident were to happen that spiraled out of control. But the most dire consequences would come from a war over Taiwan. Beijing considers it part of China, as you know. The U.S. government is bound by law to help Taiwan arm itself, to prepare it to deter and repel a Chinese invasion, basically. And the U.S. has had a policy of strategic ambiguity when it comes to Taiwan. It's a tenuous situation, but it has been for 80 years, right? So that brings us back to Alvin Graylin. I mean, when I talk to him, you know, in some ways, I'm like, oh, this China-U.S. thing, it's not so intense. And then you hear him describe the Mad Max future and you think, this is a disaster. I don't even know what to make of it. Well, there's a couple of thoughts I have. One is, I'm glad he's out there saying these things, because if there's a way to find some sort of middle ground, some small academic committee that can then grow over time, or expand to include government, and then be a way for Beijing and Washington to talk about this stuff, maybe start to think about moving forward in ways that they can cooperate, great. 
But the other thought I had was that he seems to be a product of Silicon Valley. And this is the place where people come up with audacious ideas and pitch them, and they're pretty good at pitching them. At this point, I don't see what would push the U.S. and China to cooperate on AI, or even to talk about cooperating on AI, without a crisis or a near crisis. You know, it makes me want to ask, are we missing a trick? In the United States, tech companies are so focused on getting consumers to buy subscriptions to these products, and they're being so protective of their tech. Meanwhile, many people are terrified that they're going to lose their jobs because AI will replace them. But then there's China, trying to diffuse AI throughout their whole society and make it available open source to other countries. They seem to have a more practical and strategic rollout in some ways. I think each country has its advantages, right? I think one thing about the U.S. system is that the Wild West mentality sort of applies. It's like, this is the cutting edge of capitalism. This is hardcore capitalism. There are winners and there are losers. And I think that there are reasons why China does it the Chinese way. We don't know what's going to happen with AI yet, but they've done a lot to make life more comfortable in, for instance, their cities or across the country. It also fits into that broader context of China building influence, building clout, making itself indispensable in the world, if it's the one driving this plan, if it's the one that can get buy-in from the ally countries that it works with, that perhaps are, you know, more open to using open source, or for whom closed source is too expensive, right? I've got sort of a hot take, which is: Chinese society is locked down, but the software is open source, whereas in the U.S. it's the other way around. They're locking down their tech, but it's sort of open source humans. 
Do you know what I mean? They kind of let us loose and let us figure it out. And then wherever the marketplace takes us, well, that's where we go, for better or worse. That works. The open source thing about Chinese tech, though, too, it's open source for the code, for figuring out how to do it. But if you're in China, you can't ask DeepSeek, hey, tell me about what happened on June 4th, 1989 in Tiananmen Square, and get a straight answer. Right, right. And that is open to a degree and then not open. And believe me, I think if China had this technology locked down and the U.S. was following, I don't think they'd be open source with it. So you're saying watch this space. We might be seeing a lockdown on these models in the future if they catch up. Maybe, yeah. If they catch up, if they surpass, if they've got something, like, yeah, off the wall, perhaps. The DeepSeek guys, DeepSeek is a private company. Actually, I'm not exactly sure where they got their funding. But, you know, after the model came out in January and blew everybody's minds, they were not available. You could not talk to them. You could not get anywhere near them. Was that because the Cyberspace Administration said to them, yeah, we need to talk, or you should not talk to the press, or were these guys just savvy? We don't know. 2026, John. I'm excited. Pumped. Getting fired up for it. Excellent. I mean, the saga is just going to get more interesting. Yeah. Let's hope it doesn't veer into those terrifying territories just yet. Let's cross our fingers. That was John Ruwitch. He's NPR's tech correspondent, and he is based in San Francisco. So we've talked a lot about American tech CEOs this hour. So now let's hear directly from one. Earlier this year, OpenAI's Sam Altman sat down for a grilling by TED's Chris Anderson. Sam took pains to stress that he's thinking less about geopolitics and more about us, the users. 
Here he is with Chris on the TED stage in Vancouver in April. Sam, welcome to TED. Thank you so much for coming. Thank you. It's an honor. Your company has been releasing crazy, insane new models pretty much every other week, it feels like. But talk about the scariest thing that you've seen. Because outside, a lot of people picture you as, you know, you have access to this stuff, and we hear all these rumors coming out of AI, and it's like, oh my God, they've seen consciousness, or they've seen AGI, or they've seen some kind of apocalypse coming. There have been moments of awe, and I think with that is always, like, how far is this going to go? What is this going to be? But we don't secretly have, you know, we're not secretly sitting on a conscious model or something that's capable of self-improvement or anything like that. I continue to believe there will come very powerful models that people can misuse in big ways. People talk a lot about the potential for new kinds of bioterror, models that can present a real cybersecurity challenge, models that are capable of self-improvement in a way that leads to some sort of loss of control. So I think there are big risks there. Do you check for that internally before release? Of course, yeah. So we have this preparedness framework that outlines how we do that, and we are very proud of the safety track record. But we're talking about an exponentially growing power where we fear that we may wake up one day and the world is ending. So it's really not about track record. It's about plausibly saying that the pieces are in place to shut things down quickly if we see a danger. Oh, yeah, yeah. No, of course. Of course that's important. But the way we learn how to build safe systems is this iterative process of deploying them to the world, getting feedback while the stakes are relatively low, learning about, like, hey, this is something we have to address. 
And I think as we move into these agentic systems, there's a whole big category of new things we have to learn to address. So let's talk about agentic systems and the relation between that and AGI. So artificial general intelligence, it feels like ChatGPT is already a general intelligence. I can ask it about anything and it comes back with an intelligent answer. Why isn't that AGI? It doesn't continuously learn and improve. It can't go get better at something that it's currently weak at. It can't go discover new science and update its understanding and do that. And it can't just sort of do any knowledge work you could do in front of a computer. You can't say, like, hey, go do this task for my job, and it goes off and clicks around the internet and calls someone and looks at your files and does it. And without that, it feels definitely short of it. But in a world where agency is out there, and say that, you know, maybe open models are widely distributed, are there red lines that you have clearly drawn internally, where you know we cannot put out something that could go beyond this? Yeah, so this is the purpose of our preparedness framework. And we'll update that over time. But we've tried to outline where we think the most important danger moments are. I can tell from the conversation that you're not a big AI fan. Actually, on the contrary, I use it every day. I'm awed by it. I think it's essential to hold a passionate belief in the possibility, but not be over-seduced by it, because things could go horribly wrong. I totally understand that. I totally understand looking at this and saying, this is an unbelievable change coming to the world, and maybe I don't want this. Or maybe I love talking to ChatGPT, but I worry about what's going to happen to art, and I worry about the pace of change. And maybe on balance, I wish this weren't happening, or maybe I wish it were happening a little slower. 
I think the fear is totally rational, but A, there will be tremendous upside. B, I really believe that society figures out over time, with some big mistakes along the way, how to get technology right. And C, this is going to happen. This is like a discovery of fundamental physics that the world now knows about, and it's going to be part of our world. We have to embrace this with caution, but not fear. There are two narratives about you out there. One is, you know, you are this incredible visionary who's done the impossible. But the other narrative is that you have shifted ground, that you've shifted from being OpenAI, this open thing, to the allure of building something super powerful. What are your core values, Sam, that can give us, the world, confidence that someone with so much power here is entitled to it? Look, I think like anyone else, I'm a nuanced character that doesn't reduce well to one dimension here. So, you know, probably some of the good things are true and probably some of the criticism is true. In terms of OpenAI, our goal is to make AGI and distribute it, make it safe, for the broad benefit of humanity. I think by all accounts, we have done a lot in that direction. Clearly, our tactics have shifted over time. I do think it's fair that we should be open-sourcing more. I think it was reasonable, as we weren't sure about the impact these systems were going to have and how to make them safe, that we acted with precaution. But now I think we have a better understanding as a world, and it is time for us to put very capable open systems out into the world. But, you know, there's trade-offs in everything we do. And we are one player, one voice in this AI revolution, trying to do the best we can and kind of steward this technology into the world in a responsible way. This was a beautiful thing you posted. Your son. I mean, that last thing you said, that I've never felt love like this. 
I think any parent in the room so knows that feeling, that wild biological feeling that humans have and AIs never will, of holding your kid. What kind of world do you believe, all things considered, your son will grow up into? I remember when the first iPad came out, it's like 15 years ago, something like that, watching a YouTube video at the time of a little toddler sitting in a doctor's office waiting room or something. And there was a magazine, like one of those old glossy cover magazines, and the toddler had his hand on it and was going like this, and kind of angry. And to that toddler, it was like a broken iPad. And he never thought of a world that didn't have, you know, touchscreens in it. And to all the adults watching this, it was this amazing thing, because it was like, it's so new. It's so amazing. It's a miracle. Like, of course, you know, magazines are the way the world works. My kids, hopefully, will never be smarter than AI. They will never grow up in a world where products and services are not incredibly smart, incredibly capable. They will never grow up in a world where computers don't just kind of understand you. And it'll be a world where, like, individual ability, impact, whatever, is just so far beyond what a person can do today. I think that's great. Sam, it's incredible what you've built. Thank you very much. Thank you for coming. That was OpenAI's CEO, Sam Altman, on stage with TED's Chris Anderson. You can watch the full conversation at TED.com. Thank you so much for listening to our show this week, and Happy New Year. This episode was produced by Katie Monteleone and edited by Sanaz Meshkinpour and me. Our production staff at NPR also includes James Delahoussaye, Matthew Cloutier, Harsha Nahata, Fiona Geiran, Phoebe Lett, and Rachel Faulkner White. Our executive producer is Irene Noguchi. Our audio engineers were Stacey Abbott, Becky Brown, and Zoe Vangenhoven. Our theme music was written by Ramtin Arablouei. 
Our partners at TED are Chris Anderson, Roxanne Hai Lash, and Daniela Balarezo. I'm Manoush Zomorodi, and you've been listening to the TED Radio Hour from NPR.