Digital Disruption with Geoff Nielson

AI, Power, and the New Global Order with Nina Schick

57 min
Jan 26, 2026
Summary

Nina Schick discusses how AI and AGI represent the most consequential technological shift in human history, with profound implications for geopolitics, national security, and the global economy. She explores the concentration of AI power among US tech giants, the infrastructure race between superpowers, and how businesses and democracies must adapt to a world where intelligence becomes a cheap, abundant utility.

Insights
  • AI infrastructure dominance is becoming a form of hard power; US tech companies' control over compute infrastructure (80% in Europe) translates directly to geopolitical influence and national security advantages
  • The real economic value of AI won't emerge until intelligence is industrialized as a utility with near-zero inference costs; current consumer AI applications are rudimentary compared to what's coming
  • Democracies face an existential challenge: they must leverage public-private partnerships to compete in AI development, or risk ceding technological sovereignty to authoritarian regimes with top-down control
  • The labor market will bifurcate—highly skilled workers using AI as a force multiplier will thrive, while average performers will face automation; asset ownership and entrepreneurship become critical survival strategies
  • The geopolitical order is shifting from 30 years of US hegemony back to a multipolar world where hard power and technological superiority determine national influence
Trends
  • AI scaling laws and efficiency gains are accelerating; inference costs dropping while model capabilities improve exponentially
  • Massive CAPEX investment in AI infrastructure ($500B+ in US in 2025, $600B+ in 2026) signals this is an infrastructure play, not a bubble
  • Geopolitical bifurcation: US-led Western alliance vs. China-Russia bloc competing for AI/technology dominance as proxy for global power
  • Public-private partnerships emerging as the US model for competing in AI (echoing Manhattan Project, Apollo, Silicon Valley origins)
  • Labor market transformation: shift from job security to asset ownership; entrepreneurship and risk-taking becoming more valuable than traditional employment
  • Energy and infrastructure sectors booming due to AI compute demands; nuclear power experiencing renaissance
  • Autonomous weapons systems and intelligentized military capabilities becoming central to national defense strategies
  • Deepfakes and AI-generated disinformation weaponization accelerating; information ecosystem corruption becoming a national security threat
  • Compute infrastructure consolidation: vertically integrated tech giants (Google, xAI) outcompeting non-integrated players (OpenAI) long-term
  • Global supply chain redrawing around critical AI/semiconductor resources; US-Gulf partnerships and LATAM resource deals emerging
Topics
  • Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI) timelines and feasibility
  • AI as geopolitical hard power and national security determinant
  • US-China technology competition and AI dominance race
  • AI infrastructure investment and CAPEX requirements for scaling
  • Public-private partnerships for AI development and deployment
  • AI scaling laws and efficiency improvements in model training
  • Deepfakes and AI-generated disinformation as security threats
  • Labor market disruption and workforce transformation from AI automation
  • AI in scientific discovery and research (AlphaFold, protein folding)
  • Autonomous weapons systems and military AI applications
  • Democracy and governance challenges in the AI era
  • Asset ownership and wealth distribution in an AI-driven economy
  • Energy infrastructure and nuclear power for AI compute demands
  • Vertically integrated tech companies vs. specialized AI startups
  • AI deployment across industries and organizational change management
Companies
Google DeepMind
Pioneered breakthrough AI research including AlphaGo (2016) and AlphaFold; represents frontier of AI capability devel...
OpenAI
Created ChatGPT, the viral moment that made AI mainstream; discussed as non-vertically integrated player facing long-...
xAI
Elon Musk's AI company; predicted to lead in frontier model breakthroughs in 2026 alongside Google
Google
Vertically integrated tech giant with infrastructure dominance; CEO Sundar Pichai quoted on AI investment necessity
Meta
Mark Zuckerberg's vision for consumer AI criticized as trivial distraction; represents consumer-focused AI approach
Lockheed Martin
Defense contractor; CTO quoted as skeptical of AI hype and emphasizing practical organizational application
NATO
Military alliance discussed as facing challenges adapting to AI-driven warfare and geopolitical power shifts
People
Nina Schick
Guest expert on AI geopolitics; worked with NATO, Biden White House, MIT; discusses AI's impact on global power
Geoff Nielson
Podcast host; conducts interview exploring AI disruption, geopolitics, and business implications
Demis Hassabis
Founder of Google DeepMind; pioneered breakthrough AI research including AlphaGo
Sundar Pichai
CEO of Google; quoted on AI investment necessity: 'biggest risk is under-investing, not over-investing'
Mark Zuckerberg
Meta CEO; vision for consumer AI criticized as trivial compared to scientific and security applications
Xi Jinping
Chinese leader; displayed intelligentized PLA capabilities at 2025 Tiananmen Square military parade
Donald Trump
US President; conducting tech diplomacy with AI company leaders; represents renewed US government interest in tech do...
Elon Musk
Founder of xAI; company predicted to lead frontier AI breakthroughs in 2026
Kim Jong-un
North Korean leader; appeared with Xi Jinping and Putin at 2025 military parade showcasing AI capabilities
Vladimir Putin
Russian leader; appeared with Xi Jinping and Kim Jong-un at 2025 military parade; part of emerging bloc
Quotes
"I think this is potentially the most consequential moment in human history, right? Because the quest for AI has always been, can we create a non-biological general intelligence?"
Nina Schick
"The biggest risk for us is not over-investing. It's actually under-investing, right? If we're actually in a race to create a non-biological intelligence, which will be run as a utility throughout the economy, this is an infrastructure play."
Sundar Pichai (quoted by Nina Schick)
"What are you going to do in a world where the price of intelligence is almost zero? If these capabilities keep improving and the cost of inference keeps dropping, how will you apply that within your organization?"
Nina Schick
"You can't just be a world where you say, I'm going to survive and support myself and my family on the fruits of my labor because I just, you know, I think that's fundamentally going to change."
Nina Schick
"The biggest threat to democracy is actually if you don't rise to the occasion. We're creating non-biological intelligence and the biggest risk for democracies is that they don't use these technologies to rebuild the base of sovereignty and prosperity for the next century."
Nina Schick
Full Transcript
Hey everyone, I'm super excited to be sitting down with Nina Schick. She's a leading voice not just on AI, but on its intersection with geopolitics and power. She's worked with NATO, the Joe Biden White House, and organizations like MIT, TED, Wired, and Bluebird, on how AI is reshaping global power in the 21st century. I want to ask her about her forecast on the level of disruption this technology is going to bring to our lives, our countries, and our work. Who will be the winners and the losers? And what should leaders be thinking about if they're going to harness the next generation of technology and build prosperity for their citizens and their employees? Let's find out. Nina, thanks so much for being here. Super excited to have you on the show. Maybe just to kick things off: tell me a little bit about your outlook for AI, for AGI. What impact do you see them having in the next handful of years? And what sort of level of disruption do you think is most likely? I think this is potentially the most consequential moment in human history, right? Because the quest for AI has always been, can we create a non-biological general intelligence? And for decades, that was just theory, but what has been happening in particular over the past decade, thanks to a new paradigm in accelerated computing power, is that we are entering the foothills of actually being able to create a non-biological general intelligence. And the progress, I mean, when you talk to people at the frontier, it's crazy, right? What's happened in the last five, six, seven, eight years? And what we have been seeing emerging is that there's a new power law that's kind of dictating this progress, the AI scaling law. And then you couple that with efficiency and just the sheer amount of competition, not only amongst the frontier labs to crack this nut, but amongst nation states as well. 
And I think that it's no hyperbole to say that, you know, AGI, if you want to call it that, a general intelligence that's non-biological, that's better than human intelligence, is, you know, probably on the horizon, maybe even something we'll see in our lifetime. So again, if you look at this from a historical perspective, is there anything in the history of human civilization (we've only been around as a species for 200,000 years) that's more powerful than that? And it's worth remembering that even if we do get to some point like AGI or ASI, that's not the end, right? Who knows how much more intelligent a non-biological system can become. So for me, I think it's literally the most fascinating time to be alive. And, you know, it's going to change everything as far as I'm concerned with society, but also politics. And yes, it's amazingly interesting for the frontier of knowledge, but it's going to be really disruptive too. I mean, based on your answer, it's hard to overstate the amount of disruption, it sounds like, that it's going to create for us. And so, you know, as you, from your perspective, and with some of the people you've spoken with, you know, stare down the barrel of this change that's coming, you know, what's your level of, you know, sort of excitement for us versus, you know, fear or concern about the risk? Because obviously if we're talking about this level of change, it's extremely difficult to predict. It could go in any direction. How do you, you know, what's your kind of sentiment looking out over the horizon? So I've sat on both sides of that debate. When I initially came into the world of AI, I mean, my background is in geopolitics and policy. 
And my first kind of lightning-in-a-bottle moment, when I began to understand that what was happening at the frontier of deep learning was actually different from kind of the theoretical debate we had been having for many decades, that, you know, there was actually real progress starting to happen with regards to this ambition of creating a general intelligence, is around 2016, 2017. And a part of that was informed by the fact that I was based in London, you know, this is where kind of my political career started, and it was around that time that Google DeepMind, you know, the company pioneered by Demis Hassabis, was really starting to make incredible breakthroughs, right? 2016 was the year when AlphaGo beat Lee Sedol. So I was in the right place at the right time when some incredible researchers were making these breakthroughs in deep learning. And initially, the first kind of, I say the first viral use case, when these capabilities started leaking out of the lab into the real world, what was the first application? Okay, so Google DeepMind kind of pioneered what was possible, beginning to build some building blocks of a general intelligence through video games. And when that increasing capability started to escape out of the lab in 2017, what was the first thing that people made? Well, they made non-consensual pornography, right? Deepfakes were the first kind of viral manifestation of AI's new capability leaking out of the research lab. So given at the time I was working in geopolitics and really thinking about how everything to do with exponential technology was changing the information ecosystem, the balance of power; we're thinking about social media platforms, we're thinking about a corroding information ecosystem. And then I see deepfakes. I was like, oh my god, this is going to be weaponized. This is going to be extremely dangerous. 
And already now, you know, less than 10 years down the line, those very concerns that I had in 2017 about how the information ecosystem could be corrupted, how bad actors might use these increasingly capable systems to wreak havoc, initially by creating non-consensual pornography and then onward, all of that is playing out. But I've also now for the past few years been on the other side, where you understand that actually the ability to solve intelligence, right? That's what the pursuit of artificial intelligence is all about, is so exciting because it raises the ceiling in terms of human knowledge, right? And I think the killer app for AI might actually be scientific discovery. So if you follow the scientific method and then begin to understand that with these computational systems and with these incredibly capable and, again, non-biological intelligences, which we're only just at the very foothills of, you know, what is it possible to uncover? This is why I think the most exciting applications of AI happen to be at the frontier of where computer science meets the hard sciences. And again, we can talk about, I just recently watched The Thinking Game, which is the documentary going into how DeepMind built AlphaFold, which was another program to uncover the structure of proteins, one of the biggest challenges in biology, unsolved for 50 years. And it was able to uncover the structures of 200 million proteins, essentially every protein known to exist, thanks to an AI program. So you begin to understand, you know, how much scientific knowledge may come from these AI applications. So it's really, in the end, it's both, right? It's this technology which is extremely powerful. It's really a story that's as old as the story of human nature itself, or, you know, is a human inherently good or bad. And it's going to be both. 
But I think the thing that is different is just how quickly it's happening, just how capable it's becoming. I mean, the way that I think about it now is that the race that's on amongst the frontier companies and amongst the nation states is not only how can we create an intelligence that's superior, but also how can we scale it? How can we industrialize it? So it's a utility, an industrial-scale utility. And if you look at what's happening right now, the AI scaling laws have been pretty consistent. But along with that, the cost of inference, right, the price of actually running AI, is dropping. Meanwhile, efficiency (how much intelligence can you get per watt or per flop, so per unit of electricity or per unit of compute) is accelerating as well. So if you create this incredibly capable non-biological intelligence, but at the same time it's becoming so cheap and efficient, the speed of diffusion throughout the economy and the society is probably going to be faster than anything we've seen before. And that inevitably will come with huge disruption. This is why I say this is going to become the biggest political story of our time, not only because at the macro-geopolitical level you see this competition between the superpowers, namely the United States and China, to gain technology dominance as a way to have clout in the world, as a way to shore up their sovereignty, because ultimately everything that's related to economic prosperity and national security is downstream of advanced technology. So you have this macro-geopolitical competition going on, but then even at the level of society, everything that matters, right? Am I going to have a job? Will I be economically prosperous? 
Issues like the environment, issues like the distribution of wealth, issues like the relationship between labor and capital: on every single vector, you know, everything that's contentious in society or everything that we discuss and debate right now, there's an AI angle to it. So not only is it shaping this macroeconomic geopolitical race, but on our day-to-day lives, every issue that is contentious in our society right now, I mean, this is going to all be bubbling around this issue of AI, and I think that's going to accelerate. We haven't even seen anything yet. I think we're just starting, you know, in the very early phases. And you already see that when you see politicians, well, across the world, but also, I'm based in the US now, across the partisan divide, right? On both the conservative side and the left side, raising questions about AI and culpability and accountability, because ultimately it deeply impacts what is power, right? And our relationship to power. So yeah, we're in for a wild ride. There's so much there that I want to unpack. But I'm glad you ended on a note about power, because power is where I wanted to go next. And I don't mean compute power. I mean power at, you know, a world stage, at a geopolitical level, whether it's nation states, whether it's enterprises. And when we think about this technology, I mean, right now, it's no coincidence that you mentioned Google DeepMind, you know, several times there. 
We've got a technology where there's only a handful of major players right now. And so I'm curious, as you look at the implications, whether it's in enterprises, whether it's in nation states: is this concentration of power, you know, a risk? Is it something that we need to be mindful of? And how are the big players looking at making sure that they can be competitive here, that they can gain power here versus lose it, in this kind of world where it's being increasingly concentrated? I think, look, technology has always been directly related to power, right? If you look at kind of a history of civilization, the civilizations or the organizations or the groups of people who had control over the most advanced technologies became powerful. They became economically prosperous. They had an advantage when it came to their defense and security. And it just so happens that for kind of the past few decades, again, if you look at the long arc of history, and you look at maybe a Chinese perspective of, you know, China's place in the world historically, it will be seen as an anomaly that for kind of the last few hundred years, the Western nation states have been kind of the most powerful. And a lot of that had to do, by the way, with the industrial revolution, right? Before the industrial revolution, for much of civilized history, it was actually China and India that accounted for most of global GDP. So there is a historical precedent that shows that those civilizations that own the most powerful technologies become the most economically prosperous. It was actually the reason why, again, European civilizations became richer and more powerful, from, again, the perspective of a country like China, thanks to their technology. Now, what's been happening more recently is the emergence of these tech giants, right? They were the monoliths that built the platforms and the technology of the information age, if you want to call it that. 
But I think when you look back at the end of the 20th century and the beginning of the 21st century, what we now know as the information age is just going to be a continuation, if you will, a stepping stone to what is now becoming the age of intelligence, right? It is all of those technologies that laid the groundwork for what is able to happen now, this idea that we can scale this non-biological intelligence. It is because of the advances in hardware, and Moore's law dictated the progress for the last 30, 40 years: the digitization of everything, how the computer chip, silicon, became almost like the central beating heart of our economy, but also our existence. I mean, it's pretty difficult to imagine living your day-to-day life without all these devices and all the technology that has become totally integrated into who we are. But it was also the internet and the fact that all this data, everything known, the entire corpus of human knowledge to this point, is basically on the internet. That's what allowed the training of these early versions of AI models to be successful, right? The hardware and the training. But what's also becoming clear now is that to scale this non-biological intelligence, it isn't only about data and hardware; you need to have industrial capacity. And that's what's happening right now. You see this, again, this is where the geopolitical contest comes in. If you think about running intelligence as a utility that's on 24/7, you don't only need this huge industrial base to build the models' capability, but you actually need it more to run inference, right, to have this switched on as a utility 24/7. So that's why the CAPEX is so phenomenally vast. That's why you hear that, I mean, in the US in 2025, the hyperscaler CAPEX, just on building out this AI infrastructure to build intelligence as a utility, is in excess of $500 billion. In 2026, it's going to be in excess of $600 billion. 
Who's got the money to do that kind of thing? It's not governments. And again, you see the comparison between the United States versus Europe, where you have the EU announce a scheme like, oh, here's one billion euros, it's our Apply AI scheme. And meanwhile, the hyperscalers, the majority of which are American in terms of their influence across the world, are able to commit this resource, historic, unprecedented resources, to build out this infrastructure. And then the question is why. Why are they doing this? And there's so much fear about the AI bubble. But I think that Sundar Pichai, the CEO of Google, said it best when he's like, the biggest risk for us is not over-investing. It's actually under-investing, right? If we're actually in a race to create a non-biological intelligence, which will be run as a utility throughout the economy, this is an infrastructure play. So who owns the infrastructure for the utility that everyone is going to need, that's going to be diffused through every part of the economy? And of course, what you see happening, and this is a very long answer to your question, is I think that those advantages that accrued to the US tech companies over the past 20, 30 years, in the early days of the information age, mean that they are placed extremely well to compound their infrastructure and their power. And those regions of the world that can't compete in terms of having these infrastructure and technology companies, well, it just means that everybody else has to build on top of this infrastructure that is now being developed, I would argue, mostly in the United States. 
So it becomes not only a question like we always debate, and we have been debating for the past 10 years in particular, ever since all the controversies around social media and the internet, and understanding that there's this dark underbelly to these technologies, that it isn't only that we're going to be in this utopian age of information, that there's going to be deep societal disruption. We talk about democracy and accountability and the tech platforms. On the other hand, the fact that these tech platforms are American companies is also a huge testament, if you will, to American power in the world. And you can think about that in very concrete terms, about even just computational architecture and the fact that any kind of national security or defense system still needs to run on compute and compute infrastructure. For instance, in Europe, 80% of the compute infrastructure belongs to American companies. So it's not only a question about democracy and accountability, which is going to become such a toxic political debate, because there is no doubt that these companies are more powerful than the nation states. These companies are the ones that are building the infrastructure that everyone's going to be dependent on. So there'll be a lot of political controversy around that. But on the other hand, it is also a projection of hard power in the world. And I think that the current president, Donald Trump, understands that, which is why there have been multiple kind of traveling embassies of Donald Trump flanked with America's tech leadership, where they go to different parts of the world and promote the full stack of American technology, signing multi-billion dollar deals, because it is a projection of geopolitical power. So very long answer to your question there. No, it's great. It's great. And it gives us a lot to think about as we look at the direction the world is going and how some of this might play out. 
Now, you've done some work with NATO around the use of AI as a hard power. And obviously, NATO encompasses more than just America, but America is playing such a dominant role now, with these big tech companies owning and building the infrastructure here. As you talk to leaders at NATO and other nation-state or transnational organizations, what are they thinking about? What's on their radar? And you use the term hard power, AI as a hard power. How are they looking at adapting to this new world? And what are they worried about getting right or getting wrong? I think there's an understanding that perhaps the anomaly in history has been almost the last 30 years of American hegemony, where you talk about this liberal rules-based democratic order. Of course, it was never very liberal nor very rules-based. But I think the key point was you had a single hegemon, which was America and its Western allies. And there was this belief, I mean, I'm a child of the 80s and the 90s, that this was the end of history, that everyone is marching towards the natural end state of a liberal democracy. And of course, what's happened over the past decade and a half is that that utopian kind of ideal, which coincided with the birth of the internet and the advent of all these technologies, that that is not so. Right? So there is an understanding that we are heading back into a world where hard power speaks. And if you again look at it from a historical lens, that's the way things have always been throughout history. The past kind of 30, 40 years, that's been the anomaly. And along with that, there is an understanding that that hard power needs to be backed by technology, because it is so relevant to national security and defense. And at the same time, an understanding that conventional means of warfare are radically changing. Right? 
If we are creating a non-biological intelligence, and at the same time the kinetic manifestations of warfare, so drones, missiles, the physical weapons you use to wage warfare, are fundamentally being changed by being looped into intelligent systems, then not only are we heading back into a world where disruption is happening, where hard power matters, where there is this unbelievable technology competition, but the tools and the way that we wage warfare are also changing; this is increasingly going to be led by autonomous systems. Well, then that's a pretty radical reset. Right? And right now, I mean, what is interesting from the perspective of NATO is the fact that the kind of transatlantic relationship is super strained. And that has a lot to do with Trump's presidency, but also the fact that the balance of power is shifting to the point where America isn't just the hegemon anymore. Its focus is going to the east. And I think what is clear from what's been happening, again, over the past 30, 40 years is that, from the perspective of the CCP, technology is the chosen instrument of the CCP to regain its rightful place in history on the global stage. So at the same time as all this disruption is happening, the relationship between the Western allies is fracturing. And the US feels that its hegemony is threatened by China rising in the east. And this is playing out primarily through these technology battles. But in order to secure that kind of technology superiority, we also see new kinds of battles happening when it comes to trade wars or supply chains. So new relationships are being built. Notably, I mean, if you look at the kind of deals that are being done between the US and the Gulf, this is really interesting, where they're selling the full kind of stack of American technology capabilities. 
But also this is emerging as a kind of military alliance or military partnership, or the redrawing of critical supply chains in the region, you know, in LATAM, where there's an understanding that the kind of resources that we need for advanced defense supply chains, we can't source those only from China. So the structure of global power is radically shifting as we speak. And I think the predominant reason that is happening is because the era of American hegemony is over. We're entering into this period of hard power. And it'll be interesting to see whether or not the Western alliance, what we kind of took for granted growing up in the 80s and 90s, is going to be one of the casualties of that. There's a particular aspect of that that caught my attention lately. And so when we talk about, you know, the East and the West, or certainly, you know, America and a lot of these Western powers, and then the CCP, China, one of the fundamental differences, societally, but also in terms of the approach around AI, is the governance structure or the system of governance. And so, you know, China is a one-party nation under the CCP versus, you know, these more, you know, democratic countries in the West. And you can see it manifesting itself in the approaches around AI, but also in some ways, I think, in the speed and the urgency with which, you know, there's investment and education around the AI technologies. And so I'm curious, you know, from your perspective, Nina, one of the questions of the day is whether one of those systems is better than the other for dealing with these technologies, and frankly, whether AI is actually a threat to democracy and whether we're going to start to see it reshape these political systems. So I recently did a speech on this where I said, you know, the biggest threat to democracy is actually if you don't rise to the occasion, right? We're creating non-biological intelligence. 
I'm increasingly bullish that the capabilities are going to be there, that the point of whatever you want to call it, let's say you call it ASI or AGI, very powerful computational intelligence, is going to be a reality in the next, you know, decade. So given that the applications are so profound, both for economic prosperity but also within military and security applications, it seems to me that the biggest risk for democracies is that they don't use these technologies to rebuild the base of sovereignty and prosperity for the next century, right? That we focus too much on things like trivial consumer apps. You know, one of the things that I dread, I have young children, is Mark Zuckerberg's vision for consumer AI where every American will have five AI friends. So do we just want to enter into a world where we dull ourselves with distraction, literally being entertained to death? Or are we going to use this kind of non-biological capability to rebuild the base of prosperity, and think about, you know, how that wealth is going to be distributed throughout society, and security? So it comes down to prosperity and security and not just some trivial consumer apps, because a lot of what's been happening over the past few decades is that some of the brightest minds, the best people, you know, that's what they've been doing. They've been building kind of trivial consumer apps, like food delivery services. So I think, in addition to that, you see this competition between capabilities, right? So who can build the best models? And, to be honest with you, I think there's been a lot of debate about, oh, China's catching up on the frontier capabilities, but I don't know if that's true, because I think the contest is between the American frontier labs, in part because China is so compute-constrained. 
And yes, we're unlocking incredible new architectures to make the models more efficient, but my bet for 2026 is that the biggest breakthroughs in terms of model capabilities are probably going to come from Google and XAI. So I don't think it's going to come from a Chinese frontier lab. But then the second competition you're engaged in is deploying broadly across society, right? Actually getting the capability within a system is only one part of this equation of industrializing intelligence. The second part of the equation is: the societies that are going to see the most transformation are those who actually take this utility of intelligence and deploy it widely across society, and, importantly, coming back to this question about security, in military applications as well. And there, I mean, I'm not an expert on how the CCP is deploying AI, but what is really interesting is that as soon as AlphaGo came out in 2016, they took it really seriously. In 2017, the CCP launched its Next Generation AI Development Plan, making it explicit policy to be the global leader in AI by 2030. And by 2019, they had also laid out their policy position on how to intelligentize the PLA, the People's Liberation Army. And in summer 2025, you had a pretty historic military parade in Tiananmen Square, where Xi Jinping was flanked by Kim Jong-un as well as Putin, the first time the three leaders of North Korea, China and Russia had been seen together since the Cold War, at a military parade where a big part of it was displaying the intelligentized new capabilities of the PLA. So in the US, if you don't have the top-down command-and-control system that you have with the CCP, what's the model that works?
Well, I can tell you what doesn't work, because I moved to the US from Europe, and my early career was in geopolitics and working in EU policy. Seeing just how fractured the 27 states of the European Union are, there is no cohesive model: there is no top-down, first of all, but there's no bottom-up either. And you see that now with strategic vulnerabilities in Europe on defense, on energy sovereignty, on economic policy, on migration, you name it, the gamut. So that model doesn't seem to work. But what I see happening in the US, and again, there's a historical precedent for this, where it can actually work, is the spirit of public-private partnership. Right? And people see the US government now taking an interest in these issues, because there's an understanding that, yes, technological superiority is fundamental to our national security. And there's a lot of dismay, because the messenger is Trump and he obviously evokes very partisan reactions, but that has always been the spirit of great American innovation in the 20th century, right? The Apollo program, the Manhattan Project, even the history of Silicon Valley comes down to this public-private partnership. People have kind of written that out of history recently, but Silicon Valley actually starts in partnership with the US military. Even semiconductors, silicon itself, the thing the entire world runs on, come from this great tradition of public-private partnership. So you're really starting to see that amping up here in the United States. I think that's going to be the question of the 21st century, right? The European model, I don't think it's going to work; I don't think they're going to be a contender in this race. Then you have China, where yes, they might not have the frontier model capability, but in terms of deployment and mission, there is a mission, right?
There is a sense of: we want to restore our place in history on the global stage. And now you have this renewed sense of national purpose in the United States as well, where it is about more than let's build a consumer app or five AI friends for people. It's about, hey, how do we actually protect sovereignty, democracy? How do we ensure the ideals of freedom and prosperity and security? And I think that's going to be the most interesting geopolitical contest of the 21st century. And there are two players in the race. I want to come back to the notion of the public-private partnership. You talked earlier about a big component of this being deployment: how you get this technology out into the hands of people, into the hands of organizations. So I want to talk a little bit about that for a minute. What does that look like? When you're talking to business leaders, presenting to business leaders, hearing their concerns, certainly we're at a moment in history, as we said, where there's a lot of concentration of this technology with a few big companies who own a lot of the infrastructure and who are way ahead of everybody else in terms of capabilities and research. What does it look like for everybody else? If you're running an organization in whatever non-tech sector of the economy, how should you be thinking about AI, deploying it, and using it to be more competitive in your own business? So the first thing is that the huge infrastructure giants, the tech giants, the monoliths, are playing a different game from everybody else. There's no out-competing them, and it'll be very interesting to see what happens with OpenAI. Because in terms of actual sheer capability at creating intelligence, they were the bottled-lightning moment for the world to start realizing that this AI thing was a big deal.
Thanks to ChatGPT, which, I don't think there was any idea it would be as wildly successful as it was. You know, OpenAI didn't pioneer LLMs in the first place, but it will be interesting to see whether or not they can prevail, because they're not a fully vertically integrated infrastructure and technology company in the same way that XAI is, or Google is, right? So if OpenAI fails, it kind of shows you why, in the long run, nobody can play that game of building intelligence as a utility unless they're a vertically integrated infrastructure and technology company in the way that I see Google to be. But everyone else, we're not doing that. You're not playing the game of building intelligence; you're not in the game of creating it as a utility. Maybe there's a whole cottage industry providing the picks and shovels to industrialize intelligence: a great time to be in the energy sector, a great time to be in the networking sector. It seems to be a new dawn for the age of nuclear as well. But for everybody else in the broader economy, the question is this. I go to lots of meetings where you talk to business leaders, and everyone's obsessed with the latest capability, or how do I apply AI in my business, or what's the ROI, or what are the use cases? And my message is still: we're way early, right? We're way early. When you look back, the tools we have now, whether it's these agentic workflows or the LLMs, are going to seem like extremely rudimentary, clumsy tools, probably within the next six to twelve months. So as a business leader, I think what's far more important is to understand the direction of where we're going. This is why I always talk about AI not being a tool. It's a capability, this non-biological general intelligence, which the race is on now to industrialize as a utility.
So you have to think: what are you going to do in a world where the price of intelligence is almost zero? If these capabilities keep improving and the cost of inference keeps dropping, how will you apply that within your organization? That's far more interesting to me in the medium to long term than how you're using a chatbot right now within your organization. And yes, you're starting to see some really interesting early successful use cases of AI, but I think the real economic gains, the real use cases, the real value of this isn't going to be evenly distributed, or even start to emerge at scale, until we actually crack the nut of industrializing the intelligence itself. So then I think what matters is, again, true leadership, in the sense that your company might not change overnight. You're not going to have AI as a magical panacea to all ills. I loved it when I recently spoke to the CTO of Lockheed Martin. He's pretty skeptical on AI, or at least he hates how the debate on AI either presents it as a magical panacea or as everything or nothing, and he said AI is not magic pixie dust. And it's true: AI is not magic pixie dust. You still have to look at your organization. What's the capability gap? What's the thing you're trying to solve? And then you think about how you apply intelligence to that afterwards. And then there's this very real thing about your workforce, which is maybe even the most important thing. How are you going to manage your team? How are you going to organize your hierarchy? Because you're already starting to see it. I know a lot of people are blaming layoffs on AI. That's not it, right? In the olden days you'd call in McKinsey and everyone got laid off, and now AI has kind of emerged as the excuse. So I don't think AI is actually leading to massive layoffs yet.
But I think that almost inevitably will be the case, especially when it comes to knowledge work. So as a leader, it's more: how do you build the team? What's your vision? What's your capability gap? And what are you going to build in a world where the price of intelligence is zero? I think that's far more important than the latest tool that's come out, because those are going to evolve very quickly. Let's stay on the workforce piece for a minute, because there's so much interesting stuff to unpack there. And I love your perspective on the AI layoffs. By the way, I completely agree with you. I think it's just sort of cover fire for where we are in the economic cycle, which sucks in some ways, because I think it creates a consumer and employee backlash against AI: oh, AI is this thing that's taking my job, when really it's just an excuse. But there's a really interesting question, which is: if these layoffs are happening because of the point in the economic cycle, then historically there's an upswing later in the cycle as it starts to rebound, and we rehire a lot of this workforce that's been laid off. Do you see that happening? Or are you concerned that we're going to be in a world in the next few years where, as you said, the price of intelligence is so close to zero that the workforce you'll need is completely different? As you take out your crystal ball, is it fewer jobs? Is it different jobs? What's the impact going to be, and what do we need to do to be ready? Really difficult to say, but we are heading to a world where the price of intelligence is going to be close to zero, right? This is what the whole infrastructure race is about. This is what some of the best minds in the world are building.
They're not only building an intelligence that's increasingly capable, they're trying to make sure that intelligence is cheap and abundant and can be applied to any industry or any use case, whether that's cracking the hardest problems of science or running your own agentic workforce. It seems to me that the relationship between labor and capital is going to be pretty fundamentally transformed if the price of intelligence goes to zero. So first of all, there's a huge need for people, a huge need for people on this buildout. Are you a plumber? Are you an electrician? Do you have any kind of engineering expertise? Part of the reason I moved from Europe to America was my conviction that this is the most important geopolitical race, that the US is kind of ground zero, the US is a contender in this race, so I wanted to be close to that. And I'm literally close to it because I'm in Texas, where part of this infrastructure buildout is actually happening. Why? Because you have cheap and abundant energy here, and because it's easier to get the permits to build this vast infrastructure. And there are not enough people, right? That is a huge problem. There are not enough people. So if you're an engineer, if you can build, if you're an electrician: I think it was Google trying to train up 8,000 electricians, because the right skills just weren't there. And you see the same story in the defense sector, where you're thinking about building the next generation of defense capabilities, actually building industrial capacity, and you just don't have the skills to build it. So it's a good time to be a certain type of employee. But broader than that, what's going to happen, and you already see it happening, is that AI, even something as rudimentary as an LLM, is raising the floor, right?
Something that used to be good enough isn't good enough anymore. You can't just get by with average. If you want to be excellent, you can really be excellent. Perhaps the best manifestation of that is AI as a tool of scientific research, unlocking some of the greatest mysteries in science. That's human and machine together. So if you are somebody who's got this intense curiosity about understanding biology, or you want to build the best company, why would you not use these capabilities? It's going to supercharge you. And yet, if you're somebody who's just been coasting, skating, who maybe isn't that good, and you can be automated, I think you probably will be automated. So again, this comes back to this philosophical question I think about: is AI going to make us smarter or dumber? In a way, it's probably going to be both. I think that will be felt throughout the labor market, and I wouldn't say there'll be plenty of jobs for everyone. Maybe net-net there will be more jobs and much more prosperity, but there will be a period of disruption, no doubt, which is why I think it's so important to become an asset owner. Again, that's one of the things that's so different in the United States as opposed to Europe. There's much more of a culture of investing in assets, and it's much easier to get a stake in these publicly traded companies that are basically building this infrastructure, which I think is going to become the most valuable infrastructure in the world. So I think it's really important, at the same time that you think about jobs and automation and labor and capital, that you start thinking about becoming an asset owner, and about how we distribute these vast potential economic gains among society. Part of that has to do with investing and financial literacy.
It can't just be a world where you say, I'm going to survive and support myself and my family on the fruits of my labor, because I think that's fundamentally going to change. There's an interesting tension there that I want to ask you about and call out explicitly, which is: on the one hand, it feels like we've got fewer, larger organizations that are way out ahead here. And then there's also the notion that for a lot of these organizations, aside from the physical buildout, because the price of intelligence is going down so rapidly, maybe they don't need to be as large as they were historically. And you mentioned that asset ownership, and being able to increase your abilities as a laborer, especially in America but really everywhere, is becoming important too. So I'm curious, when you look at the economies of the future, do you see them being more diversified, more entrepreneurial, I guess I'd call it? In this world of close-to-free intelligence, does that lead to a need for more creative types, more entrepreneurs, more smaller businesses? Or is it winner-take-all, and it'll be completely concentrated? I don't think it's winner-take-all. I think the behemoths will obviously be extremely powerful, because they run the infrastructure and the capability for this most valuable utility, and you can do more with less. However, and again, this is something I've experienced in my own life in a very dramatic way when you think about the forebears of AI and everything that's happening now, if you go back to the internet and the information age. I'm half Nepalese. I grew up in Nepal. My mother grew up in a village where there was no electricity, no access to infrastructure; she pretty much lived the life that Himalayan mountain farmers had been living for centuries, hundreds and hundreds of years.
Yet in one generation, my generation, we were the first children of the internet. Everything changed. Everything changed. The entire society changed, economic opportunity changed. The entire cultural fabric of my country was transformed thanks to the age of information. So you have lots of entrepreneurs, lots of young people creating their own businesses, lots of people using it as a way to access opportunity and education, which is completely unprecedented, right? It didn't even exist 30, 40 years ago. A 180 in a single generation. So you see how this technology, when it's widely dispersed, is also this tool of empowerment, but yes, also of societal upheaval and disruption. And I think it really depends on your perspective, ultimately. Are you coming at it from the perspective where you think: well, I want to go into a company, I want a job for life, I want security, and I don't want any disruption? That type of world, I would say, is going to become far less likely. Whereas if you're an entrepreneur, you want to build for yourself, you're creative, and you're willing to take some risks, those types of people might be rewarded far more handsomely. So even now, when you look at these big corporations that are doing layoffs, I think it's inevitable as they streamline and become more efficient. And yes, as intelligence becomes like software, as you get intelligent automated agents working within organizations, is there going to be headcount loss? Yes. However, as an individual, as an entrepreneur, can you use those same capabilities for yourself? Also, yes.
So it cuts both ways. And this touches on super philosophical themes about education and standardized testing and intelligence itself. What's the point of putting your children through this rigorous system of education, which is all about achievement in standardized tests, to get those jobs which were so lucrative and sought after for the past few decades, like being a lawyer or a banker or getting a job in a big tech company, if there are going to be fewer and fewer of those jobs, more competition for those jobs, and you're actually competing against non-biological intelligence? I think people will start working differently. They'll have to become more entrepreneurial, and part of that will be driven by need and opportunity. What do you think are the most important skills for the next five, ten years? Maybe for the duration of the 21st century? You know, I'm a historian by training. I love history, I love politics. It fascinates me to contemplate how brief our stint as a species on this planet has really been, and then, when you think about what's happening now with the technology we're creating, what a radical departure point this is. I think that perspective matters: a sense of what human nature really is, of how history has gone through these periods of huge transformational change, how society changes with it, how it can be very dangerous and disruptive, but also that you have this human spirit that is able to endure, that human ingenuity always comes through.
That kind of makes me positive. So I guess an important skill is perspective: read, understand history, and have a real belief in human ingenuity and capability. I would say being able to take a risk is also really important. This idea that everything should always be the way it's been, this sense of fear and anxiety because things are changing, and they are changing, and I don't think anyone's going to be able to stop that, you need to grapple with that a little bit. You need to be able to deal with change, somehow be resilient, and not lose your belief in humanity. And maybe that's why you could also become very mission-driven: to understand that, actually, if you think about the best manifestations of this non-biological intelligence, how it has the potential to expand the frontier of human knowledge in a way that's just completely unprecedented historically, there is so much cause for optimism. So that's all to say: don't be too anxious, don't be too scared, be able to lean into some risk, and somehow be able to manage the inevitable reality that not everything is going to stay the same, that change is happening, and that change is natural, by the way, even when it comes down to the very basic laws of physics. I think that mindset is probably the really important one. And the second thing I think is really important is being human: connecting, talking to people, actually seeing people in real life. It's so ironic, because we're obviously doing this virtually, but that human connection is going to matter more than ever, really. And you already see this now in business transactions, right? The most important currency is trust. What are your values?
How do you espouse those values in your organization and amongst the people you work with, and how do you maintain that trust amongst your peers, your colleagues, but also your clients? I think those are going to be the enduring features: being able to deal with change, being able to take a bit of risk, being resilient, staying optimistic, cultivating trust, being human, leaning into that even more than ever before. I love that. Nina, I want to say a big thank you for joining me today. This has been really, really interesting, really insightful, and I super appreciate your perspective. Thank you so much.