Reid, it is a delight to be back here with you again. Thanks for being on. We are once again going to be talking about AI, and specifically about chips. According to Reuters, data shows that Chinese firms now control about 40% of their domestic AI chip market, up from roughly 35% just a year ago. So not huge changes, but NVIDIA's share has fallen to the mid-50s, and analysts expect Chinese homegrown chip production to keep increasing. Obviously, US export controls have accelerated the Chinese push for self-sufficiency. Companies like Huawei, Alibaba, and Baidu are building their own chips and creating entire AI ecosystems around them. Chinese firms are already optimizing models to run efficiently on domestic hardware, even if the raw performance sometimes still lags by several years at the very cutting edge. So when people are thinking about AI hardware chips, how should they be thinking about this? Is this just a supply chain issue, or is this the foundational layer of geopolitical power? How important are chips right now?

Well, the true answer is that chips are very important, although not decisive in the end. People have this tendency to be extremist: either chips don't matter at all because China is going to build its own chips, or chips are everything and nothing else matters. As with all of these important things, it's a very important factor, but not the only factor that matters. And there are a lot of reasons why the leading chips, which currently means NVIDIA as the key leading chip, really matter. It's compute density. As you move to larger and larger clusters on less efficient chip ecosystems, you get more probabilistic failure in the system, you get longer time horizons for training, and you may even lose certain capabilities. And that's why the leading chips do, in fact, matter.
And that's the reason why, for example, all of the US players who are leading this do have, at minimum, intense training clusters with NVIDIA. So that really matters. Now, there's a set of things that come off of this. For example, China has had some leading innovations in efficiency of compute, and we are learning from that within the Silicon Valley ecosystem. It's one of the reasons why the Chinese government has, generally speaking, started imposing certain kinds of checks and controls, because they don't want Chinese leading-edge IP leaking to the West. So you've got some challenges there. You've also got the questions around how open-source models work, and the questions around distillation. As far as we can tell with Qwen, Kimi, and other leading Chinese models, there's actually a lot of distillation that happens from OpenAI, Anthropic, Gemini, Copilot, et cetera. So you've got that playing out as well. The short answer is: chips matter. The TSMC-fabricated chips, which include NVIDIA's, include TPUs, include other leading-edge chips, really do matter. But they're only one major factor among a set in how the AI future plays out. Now, the geopolitics of it are beginning to move from talk of AI and compute fabric as geopolitical power to the reality of it. This year and next year will begin to surface some really substantive questions, whether it's anything from, obviously, the dust-up we saw around the Pentagon and how compute matters there with Anthropic and others, to what this means for industry adoption and what's actually happening. Because as the coding revolution kicks off, the enterprise and work transformation, with what we call in the industry the jagged edge, will also be picking up speed this year. Now, I don't think you're going to see layoffs this year because of it.
I think you'll see people claiming layoffs because of it. But what we've already seen in software engineering is much more Jevons paradox: as these tools have gotten efficient, demand for software engineering has just increased apace. Absolutely. So going back to the geopolitics a little more: as China and the US become these poles, are people going to have to take sides? Are companies throughout Europe, throughout South America, throughout Asia going to have to decide where they're getting their chips? And does that lead to a more fractured world, as opposed to closer collaboration, because you're going to have to choose?

I don't think you're necessarily going to have to choose. When it gets to the rest of the world, the fact that they can bid the two sides against each other is good for them. And this is part of the reason why an international trade policy of tariffs and threats and retaliation is terrible, because it offsets some natural advantages we had: higher trust in the US ecosystem relative to the Chinese one, the US setting the global platform, the US having the companies who are the providers of these things. And I think that the last year of alienating friends and partners and allies as a general strategy, whether it's tariff threats or tariff actualities, threats on Greenland, speeches that say "piss off," et cetera, all of that means we move much more rapidly to a bipolar or multipolar set of providers. And I think that's going to be true for chips, true for data center architecture, and true for software, all of which we would want to be as close to a US technological ecosystem as we can, for the economic prosperity of the US and for the economic prosperity of US companies.
It feels like someone needs to send the White House a copy of How to Win Friends and Influence People, and maybe they can take some hard-won lessons. That's if they read. Yeah, exactly. Yeah, yeah, yeah, we're starting slow. All right, so moving from geopolitics to another concern that people talk about with AI, which is cybersecurity. Anyone who's been reading the news, or at least my Twitter feed, has seen it full of hot takes about what's happening in cybersecurity recently, because we've seen a wave of breaches and leaks involving some big names in the space. We saw that Mercor got hacked, and the attackers claim they stole four terabytes of data. They said the breach was tied to a compromise of LightLLM, an open-source tool in the AI stack, meaning the company was hit through its supply chain rather than a direct attack on its own perimeter. At the same time, we also saw recently that Anthropic accidentally leaked more than 500,000 lines of Claude Code source code through a bad npm release, exposing internal implementation details and unreleased features. So a lot of people are talking about what being secure even means in this new world. How should founders and operators right now be thinking about security in a world where the attack vectors seem endless? We always talk about how AI helps with cybersecurity defense, but it also increases the ability to mount these attacks.

So I think this is just the beginning. And part of the question is that there are a couple of reasons why cybersecurity opens up much more intensely in the age of AI. One simple one is that things are moving so much faster that there isn't as much time for the previous kinds of iterations on how one did security.
So the fact that there are all these tools being deployed in production and everything else means that, actually, the security intensity of red teaming, multiple attacks, et cetera, hasn't happened yet. So there's just the speed. The second thing is, obviously, on a set of AI things, including how one deploys LLMs, even if one is deploying open-source LLMs in one's own production environment: these LLMs are inherently insecure because they're probabilistic systems that we don't understand that well. And even though we do a whole bunch of alignment training, part of what happens with even the frontier models, which get an intense amount of alignment training through their chat interfaces, is that we still get them doing odd things, which can include, of course, cyber attacks or other kinds of behavior. So there are, in fact, some new surfaces in the software stack where it's unclear how you secure them. And this is, of course, within the standard condition of cybersecurity, which is that nothing is ever fully secure. The way you make a fully secure system is you air gap it, which is to say you disconnect it from the network, and there's a very limited set of systems for which you can do that. So all of this opens up intense new waves of insecurity. And given the speed and iteration, even though you say, OK, well, we secured that last thing, here's a set of new things that have changed and are now insecure, which, of course, means we're going to have to evolve how we play this cybersecurity game. Are there going to be agents and tools that work not only on the pure code-hacking front, but on phishing? Because AI generative tools are some of the best phishing amplifiers. But then what about phishing defense? And how does that play out?
So there's going to be a whole stack of new approaches to security and new needs for defense that haven't existed yet. Now, in the venture business, I was very happy because Greylock was at the beginning of companies at the cutting edge of enterprise security, like Palo Alto Networks and a whole stack of others, and there are a couple of other venture firms that are equally enterprise-focused. Cybersecurity tends to be an evergreen category, with really important new companies being started every couple of years. And obviously, that's one of the things that Asheem and Saam and the other folks at Greylock are really focused on.

And on the other side, how would you think about it as a consumer? I feel like every week you're getting an email that says, oh, your data was breached and your passwords were leaked, et cetera. And at the same time, we're heading up to April 15th, and you see a million vibe-coded apps telling you, here's how you can do your tax returns with this new vibe-coded app. And you don't necessarily want to put your W-2 into some vibe-coded app that hasn't really addressed the security side. How do you think that will affect consumer behavior? Or, if you were a consumer, what would you do? Would you trust these new things that come along?

Well, fundamentally, consumers are not terrifically informed in these areas. It's one of the reasons why, generally speaking, if there's a Ferrari sitting in a mall and a five-page form asking for all of your personal information, your children's names, every single place you've lived in your whole life, and all the rest, people go, okay, I'll fill it all out because I want a chance at the Ferrari. And you're like, sure, but what are they doing with all of this data? Because it's an economic model in which the Ferrari is the expense against which they sell all this data.
And consumers broadly, maybe 10 or 20% of them, understand this, but the majority do not. Totally. So what they generally look for is companies to keep them safe, and for whatever they encounter to be trusted. Some of this may end up becoming government regulation; that's part of the reason why you've had government regulation of finance and other areas, and it may need to be there in some ways. But look at the number of, call it, train wrecks in cybersecurity and data that you see even from high-quality companies, like Mercor and Anthropic; then you've got all the vibe-coded apps that people are putting up, and dating apps where all the information has suddenly been leaked and posted, and all the rest of this stuff. So I actually think part of what's going to happen is that people are going to say, all right, I might trust the startup, but the startup is going to have to work harder to establish its trustworthiness credentials. Because I think it will grow over time that people will be much less inclined to say, sure, whatever I encounter on the internet or on my mobile app will be fine, and my data will be fine. Now I think there will be a growing concern, and we may even need to put some extra juice into growing that concern because of how insecure the environment is becoming.

Yeah, it'll be interesting to see. We obviously think that startups, AI-native companies, are going to be the ones who win everything here. And yet I do think brand trust and security will become more of an issue as we hear more of these horror stories. And to your point, maybe people should be more worried than they are, but we will see. So stepping back from security, it's actually really interesting if you look at the divide between consumer and enterprise.
On the consumer side, ChatGPT in particular, but all of the AI chatbots, have been adopted at an enormous pace, faster than social media, faster than all the other technologies we've seen. But I think people expected this spread to happen faster in the enterprise space; we expected companies to take it on more quickly. In practice, it's been slower and more uneven. You see pockets of people going crazy and using it intensely, but other companies are unchanged from five years ago. And people probably underestimate how long it takes for these new technologies to work through enterprise systems. So what do you think people underestimate about how new technologies diffuse through large organizations? And will this pace speed up, slow down, or continue as it has been?

Well, one of the things that I think is kind of funny is that obviously in the Valley, everyone says "network effects," and relatively few people understand it in depth, even in the canonical cases of network effects, which are everything from fax machines to messaging clients to social networks, et cetera, and marketplaces. And they don't track other kinds of network effects. For example, perhaps the first and most important economic amplifier network effect is cities. You build the technologies, you can make villages and cities, and now not only does that allow all the different things like specialization, but it also allows a massive amount of economic productivity, not just because of the Adam Smith specialization, but because all this trade and knowledge and information can spread. And that's of course the reason why we've just gone through our recent set of idiots declaring peak Silicon Valley, Silicon Valley over, whether it's for Florida or Texas or whatever else. It's like, no, you don't understand network effects.
Silicon Valley has network effects. Well, the same thing is true of companies. Companies have network effects. Now, generally speaking, some of these network effects are very positive, but network effects are also a form of lock-in. And part of the reason why work transformation tends to come more slowly to established companies is that the way work locks in a company and makes it effective, even back in the day with Ford and the Model T, is that it locked in a set of network effects in how the company operates, which means transformation is difficult. So it's unsurprising, when you look at the transformation of work right now, that it happens most intensely in startups, and most intensely in small groups inside big companies that are kind of doing their own thing. They've hopped on their AI ATVs and they're going over the dunes on their cognitive-industrial-revolution buggies. And the temptation for companies to say, well, I'll do a proof of concept and have three people doing it: that's not it; it only works in a coherent work group. Now, that being said, part of what I think is interesting about network effects in transformations like this is that it's slow and then fast. The fast part is when we move entirely to the new network paradigm and abandon the old one. I think it will happen, but it happens more slowly than people predict, because it's first slow, then fast. And that's the pattern we should be thinking about. A couple of years ago I was saying, hey, everyone's going to have their own coding assistant for doing their work, and that will happen. Obviously the coding assistant is an amplifier.
So when you have coding assistants that apply long-duration thinking and compute to your particular problem, which might be law or accounting or finance or anything else, that's part of how it operates. But anyway, I think the slow-then-fast pattern is a feature of network effects.

I also think it's really interesting, because one thing that some people don't realize is that one of the reasons Silicon Valley was successful was that non-competes weren't allowed. So you had this diffusion of folks to other companies, or starting their own, bringing their knowledge with them. And still, in so many parts of the country, it's those non-competes that slow down innovation. Not that those places are going to topple Silicon Valley, but we actually could see greater innovation in other cities and states if we got rid of those non-competes so that people could take their knowledge with them, whether it's on AI or something else.

Well, 100%. And that was some of the good stuff that Lina Khan was doing. It redefines the network: as opposed to having non-competes keep you in a structural lock-in per company, it opens things up to the local ecosystem. AnnaLee Saxenian's book, Regional Advantage, covers this really well. Venture capital was invented in Boston, which had a bunch of technical universities, venture capital, et cetera, so why did Silicon Valley outstrip Boston? It was because Silicon Valley loosened companies' ability to lock in network effects through non-competes and spread those effects to the region: the regional advantage. And just to add to my earlier point about the idiots describing peak Silicon Valley: my own point of view has been that we want as many Silicon Valleys as we can have, both within the US and within the Western world.
So I'm supportive when Mike Bloomberg comes out and says, how do I do Silicon Valley in New York, or when various governors do the same. That notion of trying to help as many Silicon Valley-style areas start as possible is, I think, actually a really good one. It's just that you've got to account for the density of the network effect of the existing Silicon Valley in your strategy. Absolutely. Reid, thank you so much, appreciate it. Always a pleasure.

Possible is produced by Palette Media. It's hosted by Aria Finger and me, Reid Hoffman. Our showrunner is Sean Young. Possible is produced by Thanasi Dilos, Katie Sanders, Spencer Strasmore, Imou Zou, Trent Barboza, and Tafadzwa Nemarundwe. Special thanks to Surya Yalamanchili, Saida Sapieva, Ian Alas, Greg Beato, Parth Patil, and Ben Relles.