The a16z Show

Marc Andreessen's 2026 Outlook: AI Timelines, US vs. China, and The Price of AI

82 min
Jan 7, 2026
Summary

Marc Andreessen discusses the current state of the AI revolution, comparing it to the biggest technological shifts in history like electricity and the steam engine. He covers AI business models, the US-China AI competition, regulatory challenges at state and federal levels, and venture capital strategies for navigating multiple competing AI paradigms simultaneously.

Insights
  • AI represents the culmination of an 80-year journey from neural network theory to practical implementation, making it the biggest technological revolution of our time
  • The AI industry benefits from unprecedented democratization through cloud infrastructure, allowing startups to access cutting-edge AI capabilities without massive upfront costs
  • Usage-based pricing models are driving rapid adoption, but successful AI companies are experimenting with value-based pricing that can reach $200-300 per month for consumers
  • The gap between leading AI models and open-source alternatives is shrinking rapidly, with Chinese models like Kimi replicating GPT-5 capabilities in smaller form factors
  • State-level AI regulation poses a greater threat to US competitiveness than federal oversight, with over 1,200 bills being tracked across 50 states
Trends
  • Rapid commoditization of AI capabilities through open-source models
  • Shift from seat-based to usage-based pricing in AI applications
  • Convergence of big tech cloud providers offering AI-as-a-service
  • Emergence of China as a major competitor in AI development
  • Backward integration by AI application companies building their own models
  • Proliferation of AI chips beyond Nvidia's GPU dominance
  • State-level regulatory fragmentation threatening national AI competitiveness
  • Growing demand for physical infrastructure and energy to support AI
  • Acceleration of AI model capability improvements beyond Moore's Law
  • Democratization of AI development knowledge through open-source releases
Quotes
"This new wave of AI companies is growing revenue like, just like actual customer revenue, actual demand translated through to dollars showing up in bank accounts. Like an absolutely unprecedented takeoff rate."
Marc Andreessen
"These are trillion dollar questions, not answers. But once somebody proves that it's capable, it seems to not be that hard for other people to be able to catch up. Even people with far less resources."
Marc Andreessen
"If you run a survey or a poll of what, for example, American voters think about AI, it's just like they're all in a total panic. It's like, oh my God, this is terrible, this is awful. It's going to kill all the jobs, it's going to ruin everything. If you watch the revealed preferences, they're all using AI."
Marc Andreessen
"In venture, we can bet on multiple strategies at the same time. We are aggressively investing behind every strategy that we've identified that we think has a plausible chance of working."
Marc Andreessen
"High prices are often a favorite of the customer. It's actually really funny. A lot of like the naive view on pricing is the lower the price, the better it is for the customer. The more sophisticated way of looking at it is higher prices are often good for the customer because the higher price means that the vendor can make the product better faster."
Marc Andreessen
Full Transcript
2 Speakers
Speaker A

This new wave of AI companies is growing revenue, like, just like actual customer revenue, actual demand translated through to dollars showing up in bank accounts. Like an absolutely unprecedented takeoff rate. We're seeing companies grow much faster. I'm very skeptical that the form and shape of the products that people are using today is what they're going to be using in five or 10 years. I think things are going to get much more sophisticated from here. And so I think we probably have a long way to go. These are trillion dollar questions, not answers. But once somebody proves that it's capable, it seems to not be that hard for other people to be able to catch up. Even people with far less resources. When a company is confronted with fundamentally open strategic or economic questions, it's often a big problem. Companies need to answer these questions, and if they get the answers wrong, they're really in trouble. In venture, we can bet on multiple strategies at the same time. We are aggressively investing behind every strategy that we've identified that we think has a plausible chance of working. If you want to understand people, there's basically two ways to understand what people are doing and thinking. One is to ask them, and the other is to watch them. And what you often see in many areas of human activity, including politics and many different aspects of society, is that the answers that you get when you ask people are very different than the answers that you get when you watch. If you run a survey or a poll of what, for example, American voters think about AI, it's just like they're all in a total panic. It's like, oh my God, this is terrible, this is awful. It's going to kill all the jobs, it's going to ruin everything. If you watch the revealed preferences, they're all using AI. AI is moving faster than any technology wave before it, and the rules are being written in real time. For decades, new platforms followed a familiar arc. 
Build infrastructure, attract developers, capture the value. AI is breaking that pattern. Models are improving weekly, costs are collapsing, and entire markets are being rebuilt before incumbents can react. What looks stable today may not exist a year from now. No one has seen more technology cycles up close than Marc Andreessen. From the early Internet to mobile, cloud, and now AI, he's watched multiple eras reset the economy, and he believes this one is larger than all the rest. In this broad AMA, Marc joins the conversation to unpack why AI still feels early despite the hype, how model economics are reshaping software, and why usage-based pricing and open competition are accelerating adoption at unprecedented speed. He also dives into the hard questions: big versus small models, open versus closed ecosystems, the role of startups versus incumbents, and how China and geopolitics factor into the future of AI. Marc explains why this moment feels different from past cycles, why venture portfolios are uniquely positioned to bet across conflicting futures, and why new opportunities may emerge where technology becomes cheap, abundant, and embedded everywhere. We hope you enjoy.

0:00

Speaker B

A lot of folks sent questions ahead of time, and what I've done is kind of curated them into a few different sections in an AMA this morning with Marc. So what we thought we'd do is cover four big topics: AI and what's happening in the markets, policy and regulation, all things a16z, and then we've got a fun catch-all, which we're calling the sandbox of things, if we get to it. So starting first, maybe with the biggest question: we're sitting in the middle of the AI revolution, Marc, what inning do you think we're in, and what are you most excited about?

2:35

Speaker A

First of all, I would say this is the biggest technological revolution of my life, and hopefully I'll see more like this in the next, whatever, 30 years. But this is the big one. And just in terms of order of magnitude, like, this is clearly bigger than the Internet. The comps on this are things like the microprocessor and the steam engine and electricity and the wheel. So this is a really big one. The reason this is so big, I mean, maybe obvious to folks at this point, but I'll just go through it quickly. So if you kind of trace all the way back to the 1930s, there's a great book called Rise of the Machines that kind of goes through this. If you trace all the way back to the 1930s, there was actually a debate among the people who actually invented the computer. They kind of understood the theory of computation before they actually built the things, and they had this big debate over whether the computer should be basically built in the image of what at the time were called adding machines or calculating machines, which we'd think of as essentially cash registers. IBM is actually the successor company to the National Cash Register Company of America. And that was, of course, the path that the industry took, which was building these kind of hyper-literal mathematical machines that could execute mathematical operations billions of times per second, but of course had no ability to deal with human beings the way humans like to be dealt with. And so they couldn't understand human speech, human language, and so forth. And that's the computer industry that got built over the last 80 years. And that's the computer industry that's built all the wealth and financial returns of the computer industry over the last 80 years, across all the generations of computers, from mainframes through to smartphones. 
But they knew at the time, they knew in the 30s, actually, they understood the basic structure of the human brain. They had a theory of sort of human cognition, and actually they had the theory of neural networks. The first neural network academic paper was published in 1943, which was over 80 years ago, which is extremely amazing. The authors were McCulloch and Pitts, and you can watch an interview, I think with McCulloch, on YouTube from, I don't know, 1946 or something. He was like on TV in the ancient past. And it's an amazing interview, because it's him in his beach house, and for some reason he's not wearing a shirt, and he's talking about this future in which computers are going to be built on the model of the human brain through neural networks. And that was the path not taken. Basically what happened was the computer industry got built in the image of the adding machine, and the neural network basically didn't happen. But the neural network as an idea continued to be explored in academia and in sort of advanced research by a rump movement that was originally called cybernetics and then became known as artificial intelligence, basically for the last 80 years. And essentially it didn't work. It was basically decade after decade after decade of excessive optimism followed by disappointment. When I was in college in the 80s, there had been a famous kind of AI boom-bust cycle, in the 80s, in venture in Silicon Valley. I mean, it was tiny by modern standards, but at the time it was a big deal. And by the time I got to college in '89, in computer science departments, AI was kind of a backwater field, and everybody kind of assumed that it was never going to happen. But the scientists kept working on it, to their credit. They built up this kind of enormous reservoir of concepts and ideas. 
And then basically we all saw what happened with the ChatGPT moment. All of a sudden it sort of crystallized, and it was like, oh my God, right? It turns out it works. And so that's the moment we're in now. And then, really significantly, that was less than three years ago, right? That was the Christmas of '22. And so we're sort of three years into what is technically an 80-year revolution of actually being able to deliver on all the promise that the people on the alternate path, the cognition-model path, kind of saw from the very beginning. And then the great news with this technology is it's already kind of ultra-democratized. The best AI in the world is available on ChatGPT or Grok or Gemini or these other products that you can just use, and you can just kind of see how they work. And same thing for video: with Sora you can see kind of the state of the art with that. You can see with music, with Suno and Udio and so forth. And so we're basically seeing that happen. And now Silicon Valley is responding with this just incredible rush of enthusiasm. And really critically, this gets to the magic of Silicon Valley, which is that Silicon Valley long since ceased to be a place where people make silicon; that moved out of California long ago and then ultimately out of the U.S., although we're trying to bring it back now. But the great virtue of Silicon Valley over the last 80 years of its existence is its ability to recycle talent from previous waves of technology into new waves of technology, and then inspire an entire new generation of talent to basically come join the project. And so Silicon Valley has this recurring pattern of being able to reallocate capital and talent and build enthusiasm and build critical mass and build funding support and build human capital for each new wave of technology. And so that's what's happening with AI. 
I think probably the biggest thing I could say is I'm surprised, essentially on a daily basis, by what I'm seeing. And we're in the fortunate position to get to see it from two angles. One is we track the underlying science and research work very carefully. And so I would say, like, every day I see a new AI research paper that just completely floors me, some new capability or some new discovery or some new development that I would have never anticipated, where I'm just like, wow, I can't believe this is happening. And then on the other side, of course, we see the flow of all of the new products and all the new startups. And I would say we're routinely seeing things that, again, kind of have my jaw on the floor. And so it feels like this giant vista. I do think it's going to come in fits and starts. These things are messy processes. This is an industry that routinely gets out over its skis and overpromises. And so there will certainly be points where it's like, wow, this isn't working as well as people thought, or wow, this turns out to be too expensive and the economics don't work, or whatever. But against that, I would just say the capabilities are truly magical. And by the way, I think that's the experience that consumers are having when they use it. And I think that's the experience that businesses are having, for the most part, when they're working on their pilots and looking at adoption. And then it translates to the underlying numbers. I mean, this new wave of AI companies is growing revenue, just like actual customer revenue, actual demand translated through to dollars showing up in bank accounts, at like an absolutely unprecedented takeoff rate. We're seeing companies grow much faster. 
The key leading AI companies, the companies that have real breakthroughs and have real, very compelling products, are growing revenues faster than any wave I've certainly ever seen before. And so just from all that, it kind of feels like it has to be early. It's kind of hard to imagine that we've topped out in any way. It feels like everything is still developing. I mean, quite frankly, to me it feels like the products are still super early. I'm very skeptical that the form and shape of the products that people are using today is what they're going to be using in five or 10 years. I think things are going to get much more sophisticated from here. And so I think we probably have.

3:01

Speaker B

A long way to go, maybe, on that topic. So one of the big knocks is, yes, the revenue is immense, but the expenses seem to also be keeping pace. So what are people missing as a part of that discussion?

8:54

Speaker A

Yeah, so just start with the business models, right? And so you're right, this industry basically has two core business models: the consumer business model and the quote-unquote enterprise or infrastructure business model. Look, on the consumer side, we just live in a very interesting world now where the Internet exists and is fully deployed, right? And so I'll give you an example. Sometimes people ask, is AI like the Internet revolution? It's like, well, a little bit. But the thing with the Internet was we had to build the Internet. We actually had to build the network, and ultimately it involved an enormous amount of fiber in the ground, and it involved enormous numbers of mobile cell towers and enormous numbers of shipments of smartphones and tablets and laptops. In order to get people on the Internet, there was this just incredible physical lift. And by the way, people forget how long that took. The Internet itself is an invention of the 1960s and 1970s. The consumer Internet was a new phenomenon in the early 90s, but we didn't really get broadband to the home until the 2000s; that really didn't start rolling out until after the dot-com crash, which is fairly amazing. And then we didn't get mobile broadband until like 2010. And people forget the original iPhone dropped in 2007. It didn't have broadband; it was on a narrowband 2G network. It did not have anything resembling high-speed data. And so it wasn't really until about 15 years ago that we even had mobile broadband. So the Internet was this massive lift, but the Internet got built, right? And smartphones proliferated. And so the point is, now you have 5 billion people on planet Earth that are on some version of mobile broadband Internet, right? 
And smartphones all over the world are selling for as little as, like, 10 bucks. And you have these amazing projects like Jio in India that are bringing the remaining population of planet Earth that hasn't been online until now online. So we're talking 5 billion, 6 billion people. And the reason I go through that is the consumer AI products can basically deploy to all of those people basically as quickly as they want to adopt, right? And so the Internet's the carrier wave for AI, letting it proliferate at kind of light speed into the broad base of the global population. And let's just say that's a potential rate of proliferation of a new technology that's just far faster than has ever been possible before. Like, you couldn't download electricity, right? You couldn't download indoor plumbing, you couldn't download television, but you can download AI. And this is what we're seeing, which is the AI consumer killer applications are growing at an incredible rate, and then they're monetizing really well. And again, I mentioned this already, but generally speaking, the monetization is very good, by the way, including at higher price points. One of the things I like about watching the AI wave is that the AI companies, I think, are more creative on pricing than the SaaS companies and the consumer Internet companies were. And so it's, for example, now becoming routine to have $200 or $300 per month tiers for consumer AI, which I think is very positive, because I think a lot of companies cap their opportunity by capping the pricing too low. And I think the AI companies are more willing to push that, which I think is good. 
So anyway, I think that's reason for, I would say, considerable rational optimism for the scope of consumer revenues that we're going to be talking about here. And then on the enterprise side, the question is basically just: what is intelligence worth, right? If you have the ability to inject more intelligence into your business, and you have the ability to do even the most prosaic things, like raise your customer service scores, increase upsells, or reduce churn, or if you have the ability to run marketing campaigns more effectively, all of which AI is directly relevant to, these are direct business payoffs that people are seeing already. And then if you have the opportunity to infuse AI into new products, and all of a sudden your car talks to you and everything in the world kind of lights up and starts to get really smart, what's that worth? And again, there you just observe it and you're like, wow, the leading AI infrastructure companies are growing revenues incredibly quickly. The pull is really tremendous. And so again, it just feels like this incredible product-market fit. And the core business model is actually quite interesting. The core business model is basically tokens by the drink, right? Sort of tokens of intelligence per dollar. And oh, by the way, this is the other fun thing: if you look at what's happening with the price of AI, the price of AI is falling much faster than Moore's law. And I could go through that in great detail, but basically all of the inputs into AI on a per-unit basis, the costs are collapsing, and as a consequence there's kind of this hyper-deflation of per-unit cost. 
And then that is driving a more than corresponding level of demand growth through elasticity. And so even there, it feels like we're just at the very beginning of figuring out exactly how expensive or cheap this stuff is getting. I mean, look, there's just no question tokens by the drink are going to get a lot cheaper from here. That's just going to drive, I think, enormous demand, and then everything in the cost structure is going to get optimized, right? And so when people talk about the chips, or whatever the unit input costs for building AI are, the laws of supply and demand are going to kick in, right? In any market that has sort of commodity-like characteristics, the number one cause of a glut is a shortage, and the number one cause of a shortage is a glut, right? And so to the extent you have a shortage of, whatever, inference chips, or a shortage of data center space, if you look at just the history of humanity building things in response to demand, if there's a shortage of something that can be physically replicated, it does get replicated. And so there's going to be just an enormous build-out of all of this. I mean, there's just hundreds of billions, or at this point maybe trillions, of dollars going into the ground on all these things. And so the per-unit costs of the AI companies are going to drop like a rock over the course of the next decade. And so yeah, I mean, the economic questions of course are very real, and of course there are microeconomic questions around all these businesses, but the macro forces, at least here, I think are very strong. 
And yeah, just given the underlying value of this technology to both consumers and enterprise users, and given the just incredibly aggressive discovery that's happening of all the ways that people can use this in their lives and in their businesses, it's really hard for me to see how it doesn't both grow a lot and generate just enormous revenue.
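The pricing dynamic described above (per-unit costs collapsing faster than Moore's law while demand grows through elasticity) can be sketched with a toy constant-elasticity demand model. The price ratios and elasticity values below are illustrative assumptions, not figures from the episode; the point is simply that when demand is price-elastic (elasticity above 1), revenue can grow even as prices collapse.

```python
# Toy sketch: revenue under a price collapse with constant-elasticity
# demand Q ~ P^(-e). All numbers below are assumed, for illustration only.

def revenue_multiple(price_ratio: float, elasticity: float) -> float:
    """Revenue multiplier when price moves by `price_ratio`
    (e.g. 0.1 = a 10x price cut) under demand Q ~ P^(-elasticity)."""
    demand_ratio = price_ratio ** (-elasticity)
    return price_ratio * demand_ratio  # revenue = price * quantity

# 10x price collapse with inelastic demand (e = 0.8): revenue shrinks.
print(round(revenue_multiple(0.1, 0.8), 3))   # ~0.631
# 10x price collapse with elastic demand (e = 1.3): revenue ~doubles.
print(round(revenue_multiple(0.1, 1.3), 3))   # ~1.995
```

This is the "hyper-deflation drives more-than-corresponding demand growth" claim in miniature: whether falling token prices shrink or grow total revenue hinges entirely on whether the elasticity is below or above 1.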

9:05

Speaker B

Yeah, and actually, I think it was two or three weeks ago that AWS was saying the GPUs they've been using, they've been able to extend to even, like, seven-plus years. So the shelf life of the GPUs they're using is now extending in ways they can optimize better than perhaps in the last couple of cycles. Is that the right way to think about it as well?

15:36

Speaker A

Yeah, that's right. And that's one really important question and observation. And by the way, that also gets to this other kind of question where there are different theories, which is basically big models versus small models. A lot of the data center build is oriented around hosting, training, and serving the big models, for all the obvious reasons. But the small model revolution is happening at the same time. And if you just track the capability of the leading-edge models over time (the various research firms have these charts you can get), what you find is that after six or 12 months there's a small model that's just as capable. And so there's this kind of chase function that's happening, where the capabilities of the big models are being shrunk down and provided at smaller size, and therefore smaller cost, quite quickly. So I'll just give you the most recent example that hit over the last two weeks. And again, this is the thing that's just kind of shocking. There's this Chinese company, well, I forget the name of the company, but it's the company that produces the model called Kimi, just spelled K-I-M-I, which is one of the leading open-source models out of China. And the new version of Kimi is a reasoning model that, at least according to the benchmarks so far, is basically a replication of the reasoning capabilities of GPT-5, right? And the reasoning models of GPT-5 were a big advance over GPT-4. And of course GPT-5 cost a tremendous amount of money to develop and to serve. And all of a sudden, here we are, whatever, six months later, and you have an open-source model called Kimi. And I think it's shrunk down to be able to run on, it's either, like, one MacBook or two MacBooks, right? 
And so all of a sudden, if you have an application, if you're a business and you want a reasoning model with GPT-5-level capability, but you're not going to pay the GPT-5 cost, or you don't want it hosted and you want to run it locally, you can do that. And again, that's just another breakthrough. It's another Tuesday, another huge advance. It's like, oh my God. And then of course it's like, all right, well, what is OpenAI going to do? Well, obviously they're going to go to GPT-6, right? And so there's this kind of laddering that's happening where the entire industry is moving forward: the big models are getting more capable, the small models are catching up, and the small models provide a completely different way to deploy at very low price points. And we'll see what happens. I mean, there are some very smart people in the industry who think that ultimately everything only runs in the big models, because obviously the big models are always going to be the smartest, and so you're always going to want the most intelligent thing. Because why would you ever want something that's not the most intelligent thing for any application? The counterargument is that there's a huge number of tasks that take place in the economy and in the world that don't require Einstein, where a 120 IQ person is great. You don't need a 160 IQ PhD in string theory. You just have somebody who's competent and capable, and it's great. 
And so, and we've talked about this before, I tend to think the AI industry is going to be structured a lot like the computer industry ended up getting structured, which is you're going to have a small handful of basically the equivalent of supercomputers, these kind of giant, what we call God models, running in these giant data centers. And then, I'm not convinced of this, but my working assumption is that what happens is you then have this cascade down of smaller models, ultimately all the way down to very small models that run on embedded systems, on individual chips inside every physical item in the world. The smartest models will always be at the top, but the volume of models will actually be the smaller models that proliferate out. Right, that's what happened with computers, which became microchips, and it's what happened with operating systems and with a lot of everything else that we built in software. So I tend to think that's what will happen. Just quickly on the chip side: again, if you look at the entire history of the chip industry, shortages become gluts, and anytime there's a giant profit pool in a new chip category, somebody has a lead for a while and gets, let's say, the profits appropriate to what we'd call a robust market position. But in time what happens, right, is that that draws competition. And of course that's happening right now. So Nvidia is an absolutely fantastic company, fully deserves the position that they're in, fully deserves the profits that they're generating. But they're now so valuable, generating so many profits, that it's the bat signal of all time to the rest of the chip industry to figure out how to advance the state of the art in AI chips. 
And that's, by the way, already happening, right? And so you've got other major companies like AMD coming at them, and then, really significantly, you've got the hyperscalers building their own chips. So a bunch of those big tech companies are building their own chips, and the Chinese are building their own chips as well. And so it's pretty likely that in five years AI chips will be cheap and plentiful, at least in comparison to the situation today, which, again, I think will tend to be extremely positive for the economics of the kinds of companies that we invest in.
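The cascade argument above, that most work doesn't need the "God model," can be sketched as a simple routing cost model: send each task to the smallest model tier that can handle it. The tier names, capability scores, and per-token prices below are invented for illustration; they are not figures from the episode.

```python
# Toy sketch of "route each task to the smallest capable model."
# Tier names, capability scores, and $/1M-token prices are assumed.

TIERS = [
    # (name, capability score, dollars per 1M tokens), cheapest first
    ("edge",     60, 0.05),   # tiny model on an embedded device
    ("mid",      80, 0.50),   # mid-size hosted model
    ("frontier", 99, 5.00),   # big "God model" in a data center
]

def route(required: float) -> tuple[str, float]:
    """Cheapest tier whose capability clears the task's bar;
    fall back to the frontier model if nothing qualifies."""
    for name, cap, price in TIERS:
        if cap >= required:
            return name, price
    name, _, price = TIERS[-1]
    return name, price

# If most tasks are routine, routed spend is far below all-frontier spend.
tasks = [55, 70, 70, 75, 90, 99]          # capability each task needs
routed = sum(route(t)[1] for t in tasks)  # 0.05 + 0.50*3 + 5.00*2 = 11.55
all_frontier = TIERS[-1][2] * len(tasks)  # 5.00 * 6 = 30.00
print(routed, all_frontier)
```

Under these made-up numbers, routing cuts token spend by more than half even though the hardest tasks still go to the frontier model, which is the shape of the "volume lives in the small models" prediction.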

15:56

Speaker B

Yep. And the startups are also starting to go after new chip design as well, which is exciting.

20:46

Speaker A

Yeah, well, that's the other thing: you have these disruptive startups. And actually, just for a moment on the chips: we're not really big investors in chips, because it's kind of a big-company thing. But it's a little bit of historical happenstance that AI is running on quote-unquote GPUs, where GPU stands for graphics processing unit. And basically, just for people who haven't tracked this, there were basically two kinds of chips that made the personal computer happen. The so-called CPU, central processing unit, which classically was the Intel x86 chip, is kind of the brain of the computer. And then there was this other kind of chip called the GPU, or graphics processing unit, that was the sort of second chip in every PC that does all the graphics. And this is graphics like 3D graphics for gaming, or for CAD/CAM, or for Photoshop, or for anything that involves lots of visuals. And so the canonical architecture for a personal computer was a CPU and a GPU. Same thing for smartphones, by the way. And over time, these have kind of merged, and so a lot of CPUs now have GPU capability built in; actually, a lot of GPUs now have CPU capability built in. So this has gotten fuzzy over time. But that was the classic breakdown. And the fact that that was the classic breakdown meant that while Intel had a lock for a long time on CPUs, there was this other market of GPUs, where Nvidia basically fought the GPU wars for 30 years and came out the winner, as the best company in the space. But it was a hyper-competitive market for graphics processors. It was actually not that high-margin. It was actually not that big. 
And then it turned out that there were two other forms of computation that were incredibly valuable, that happened to be massively parallel in how they operate, and that therefore happened to be very good fits for the GPU architecture. Those two highly lucrative additional applications were cryptocurrency, starting about 15 years ago, and then AI, starting about four years ago. Nvidia, I would say, very cleverly set itself up with an architecture that works very well for this. But it's also a little bit of a twist of fate: it just turns out that if AI is the killer app, the GPU happens to be the best-suited legacy architecture for it. I go through all that to say: if you were designing AI chips from scratch today, you wouldn't build a full GPU. You would build dedicated AI chips that were much more specifically adapted to AI, and that I think would be much more economically efficient. And Jen, to your point, there are startups that are actually building entirely new kinds of chips oriented specifically for AI, and we'll have to see what happens there. It's hard to build a new chip company from scratch. It's possible that one or more of those startups makes it on their own, and some of them are doing very well. It's also possible, of course, that they get bought by big companies that have the ability to scale them. So we'll see exactly how that unfolds. And of course the Koreans are going to play here for sure, the Japanese are going to play, and then the Chinese in a major way as well; they have their own native chip ecosystem that they're building up. So there are going to be many choices of AI chips in the future, and it's going to be a giant battle.
A giant battle that we'll observe very carefully, and we'll make sure that our companies are able to take full advantage of it.
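The parallelism point above can be made concrete. The core operation inside a neural network is matrix multiplication, and a minimal sketch (illustrative only, not an example from the conversation) shows why it maps so well onto hardware with thousands of cores: every output cell can be computed independently of every other one.

```python
# Why AI workloads fit GPUs: matrix multiply is "embarrassingly parallel".
# Each output cell (i, j) depends only on row i of `a` and column j of `b`,
# never on any other output cell -- so a GPU can hand each cell (or tile of
# cells) to a different core and compute them all at once.

def matmul(a, b):
    """Multiply matrices a (m x k) and b (k x n), cell by cell."""
    m, k, n = len(a), len(b), len(b[0])
    return [
        [sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
        for i in range(m)
    ]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

This toy version runs the cells sequentially, of course; the point is only that nothing forces them to run in that order, which is the property both graphics and AI workloads share.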

20:50

Speaker B

While we're on the topic of international: you mentioned Kimi earlier, and it seems like some of the best open source models today are from China. Should this be worrisome to folks? How are you thinking and talking about this topic with folks in D.C.? I know you were just there last week. How much of this is a concern for US companies, particularly having seen China do unnatural things in solar markets and car markets? Are they flooding the ecosystem so that they can eventually take share and increasingly own the ecosystem?

23:59

Speaker A

Yeah, so a couple of things. One is, you want to start these discussions by saying: look, there's vigorous debate in the US and around the world about how much we're in a new cold war with China, and exactly how hostile we should consider them. It's very tempting, and I think there's a very good case to be made, that we're in a new cold war that in a lot of ways is like the US versus the USSR in the 20th century. The counterargument is that it's more complicated than that, because the US and the USSR were never really intertwined from a trade standpoint. A big part of that, quite frankly, was that the USSR never really made anything that anybody else needed, other than weapons; the USSR's primary exports were literally wheat and oil. Whereas of course China exports a tremendous number of physical things, including a huge part of the entire supply chain of parts that go into everything American manufacturers make. By the time an American company brings a toy to market, or a car, or a computer, or a smartphone, it's got a lot of componentry in it that was made in China. So there is a much tighter interlinkage between the American and Chinese economies than there ever was between the American and Soviet economies. And maybe Adam Smith would say that's good news for peace, that both countries need each other.
By the way, the other part of that argument is that the Chinese governance model is based on high employment. At least all the geopolitical people say that if China ended up with 25 or 50% unemployment, that would cause civil unrest, which is the one thing the CCP doesn't want. So the corresponding part of the trade pressure is that China needs the American export market; the American consumer is something like a third of the global economy, a third of global consumer demand. Without the US export market, a lot of Chinese factories would go almost instantly bankrupt, which would cause mass unemployment and unrest in China. So anyway, it's a complicated, intertwined relationship. Having said that, the mood in D.C. for the last 10 years, on a bipartisan basis, has been that we, the US, need to take China more seriously as a geopolitical foe. Under that school of thought, there's the military dimension, the risk of some kind of war in the South China Sea or around Taiwan, which has everybody in Washington on high alert. There's also the economic question around the deindustrialization of the US, the potential reindustrialization, and what that means about dependence on China. And then there's the AI question, which is an economic question but also a geopolitical question: basically, AI is essentially only being built in the US and in China; the rest of the world either can't build it or doesn't want to, which we could talk about. So it's basically US versus China.
And then AI is going to proliferate all over the world, and the question is whether it's going to be American AI or Chinese AI that proliferates. I would say that generally, across party lines in D.C., the things I just went through are how they look at it. And the Chinese are in the game, for sure. On software, DeepSeek kind of fired the starting gun in the software race. And now I think you've got four main players. There's DeepSeek, which is an AI model from, of all things, a hedge fund in China; it took a lot of people by surprise. Then Qwen is the model from Alibaba, and Kimi is from another startup called Moonshot. And then there's also Tencent, Baidu, and ByteDance, which are all major companies doing a lot of work in AI. So there are somewhere between three and six primary AI companies, plus tremendous numbers of startups. So they're in the race on software. They are working to catch up on chips; they're not there yet, but they're working incredibly hard at it. Just as an example of that, at least the common understanding in the US is that the reason you haven't seen the new version of DeepSeek yet is that the Chinese government has instructed them to build it only on Chinese chips, as a motivator to get the Chinese chip ecosystem up and running. The main chip company there is Huawei, although there could be more in the future.
And then there's everything to follow, which is basically AI in robotic form. There's this global technological and economic robotics competition that's kicking off, and China kind of starts out ahead on robotics because they're just ahead on so many of the components that go into robots; like I said, the entire supply chain of electromechanical things basically moved from the US to China 30 years ago and has never come back. So that's the D.C. lens on it, and I would say D.C. is watching it quite carefully. The big supernova moment this year was the DeepSeek release, which was surprising on a number of fronts. One was just how good it was. Again, along this same line, it took the capability set that we're running in large models in the cloud and shrunk it into a smaller version with roughly equivalent capabilities that you could run on small amounts of local hardware. Then it was also a surprise that it was released as open source, and particularly open source from China, because China does not have a long history of open source. And then it was a surprise that it actually came from a hedge fund. It didn't come from a big R&D or university research lab; it didn't come from a big tech company. It came from a hedge fund. As far as we can tell, it's this somewhat idiosyncratic situation where you have this incredibly successful quant hedge fund with all these super geniuses, and the founder of that hedge fund basically decided to build AI.
And at least the external indications are that this was a surprise even to the Chinese government. It's impossible to prove what the Chinese government was or wasn't surprised by, but the atmospherics are that this was not exactly planned; this was not a national champion tech company at the time DeepSeek was released. It came out of left field, which, by the way, is very encouraging for the field, that it was possible for somebody unknown to do that, because it means maybe you don't need all these superstar researchers; maybe smart kids can just build this stuff, which I think is the direction things are headed. And that kicked off, I would say, a trend; copycat is the wrong word, but it feels like the success of DeepSeek, as open source from China, kicked off this trend of Chinese labs releasing open source models. Look, the cynics in D.C. would say: yeah, they're dumping, obviously. They see that the West has this opportunity to build a giant industry, and they're trying to commoditize it right out of the gate. There's probably something to that; the Chinese industrial economy does have a history of, let's say, subsidized production that leads to selling things below cost in some cases. But I think that's almost too cynical a view, because the other reading is just: wow, they're really in the race, open source, closed source, whatever. We've talked in the past, I think on LP calls, about the policy fights we've been having in D.C. for the last two years. And there was a pretty big push within the U.S.
government two years ago to basically restrict, or outright ban, a lot of AI. It's very easy for a country that's the only game in town to have those conversations. It's quite another thing if you're actually in a foot race with China. So I think the policy landscape in D.C. has improved dramatically as a consequence of the awareness that this is actually a two-horse race, not a one-horse race.

24:35

Speaker B

For sure. Yeah. Actually, on that point, I'll jump ahead to policy and regulation, because the current stance, 50 different sets of AI laws, state by state, seems like a catastrophic way to put us effectively with one hand tied behind our back in the AI race. What's the state of play there? Are folks recognizing that that would be catastrophic for progress and development? Where do most people stand on that topic today?

32:29

Speaker A

Yeah, so it's a little bit complicated. I'll rewind and say that two years ago I was very worried about really ruinous federal legislation on AI, and we engaged very heavily at that point, which we've talked about in the past. The good news is that, sitting here today, I think the risk of that is very low. There's very little mood in D.C. on either side of the aisle, very little interest, in doing anything that would prevent us from beating China. So on the federal side, things are much better now. There will be issues, and there are tensions in the system, but things are looking pretty good. That has, Jen, to your point, translated a lot of the attention to the states. Basically what's happened is that under our system of federalism, the states get to pass their own laws on a lot of things. With these things it's always a combination: a lot of well-meaning people are trying to figure out what to do at the state level, and then there's a lot of opportunism, because AI is just the hot topic. If you're an aggressive, up-and-coming state legislator in some state and you want to run for governor and then president, you want to attach yourself to the heat, so there's a political motivation to do state-level stuff. And sitting here today, we're tracking on the order of 1,200 bills across the 50 states. And by the way, not just the blue states; also the red states. For the last five years or so I've spent a lot of time complaining about what Democratic politicians are threatening to do.
There are also a lot of Republicans; Republicans are not a bloc on this, and there are quite a few local Republican officials in different states who also have, let's say, misinformed or ill-advised views and are trying to put out bad bills. It's a little bit weird that this is happening, because the federal government does have authority over interstate commerce and technology, and AI kind of by definition is interstate. There's no AI company that operates only in California, or only in Colorado or Texas. Of all technologies, AI is obviously national in scope, so it's sort of obvious that the federal government should be the regulator, not the states. But the federal government needs to assert itself, needs to step in. There was actually an attempt to do that: an attempt to add a moratorium on state-level AI regulation that would basically reserve the right of the federal government to regulate AI and prevent the states from moving forward with these bills. That was, I think, part of the negotiation for the, quote, one big beautiful bill. There was a deal behind that, and the deal blew up at the last minute, so the moratorium didn't happen. In fairness to the critics of that moratorium, it was probably too much of a stretch. It was definitely too much of a stretch to get enough support to pass, but it was also probably too much of a stretch in terms of restricting the states from certain kinds of regulation that they really should be able to do. So it just didn't quite come together. We're having very active discussions in D.C.
right now about the next turn on that. The administration is, I would say, very supportive of the idea of the federal government being in charge of this, it being an actual 50-state issue and an issue of national importance. And I'd say most Congresspeople on both sides of the aisle get this. So we just have to figure out how to land it, but I think that'll happen. Some of the state-level bills are wild. Colorado passed a very draconian regulation bill last year, against serious objections from the local startup ecosystem in and around Denver and Boulder, and they're now actually trying to reverse their way out of that bill a year later.

33:00

Speaker B

Some of the nuance of it, like the algorithmic discrimination provisions and how to mitigate them. What were some of the extreme versions of what they had proposed?

36:35

Speaker A

Yeah, so the really draconian one, the one that we really fought hard, was the one in California, which was called SB 1047. It was basically modeled after the EU AI Act, the European Union's AI Act. And this is the backdrop to all the US stuff: the EU passed this bill called the AI Act a couple of years ago, and it has killed AI development in Europe to a large extent. It's so draconian that even big American companies like Apple and Meta are not launching leading-edge AI capabilities in their products in Europe. That's how draconian that bill was. And it's a classic European thing: they have this view, and they literally say this, that if we can't be the leaders in innovation, at least we can be the leaders in regulation. So they passed this incredibly ruinous, self-harming thing, and then a few years pass and they're like: oh my God, what have we done? So they're going through their own version of that. By the way, when I talk about Europe I tend to be very dark about the whole thing, but I will tell you, the darkest people I know about Europe are the European entrepreneurs who moved to the US; they're absolutely furious about what's happening in Europe on this stuff. It's so bad in Europe, they shot themselves in the foot so badly, that there's actually a process now at the EU to try to unwind that; they're trying to unwind the GDPR.
So anyway, for people tracking Europe: Mario Draghi, the former prime minister of Italy, did this thing about a year ago called the Draghi Report, a report on European competitiveness, where he outlined in great detail all the ways Europe was holding itself back. Part of it was overregulation in areas like AI. So they're trying to reverse out of that, or at least making gestures; we'll see what happens. In the middle of all that, California sort of inexplicably decided to basically copy the EU AI Act and try to apply it to California, which might strike you as completely insane. To which I would say: yes, welcome to California. It was basically this Sacramento political dynamic that got crazy. It would have completely killed AI development in California. Fortunately, our governor vetoed it at the last minute; it did pass both houses of the legislature before he vetoed it. Jen, to your point, it would have done a whole bunch of things that were ruinously bad, but one of them is that it would have assigned downstream liability to open source developers. We talked about the Chinese open source thing; so you've got the Chinese out there with open source, and now you're going to have American companies with open source AI, and, by the way, American academics and independent people working nights and weekends developing open source, which is a key way all this technology proliferates. This law would have assigned the downstream liability for any misuse of an open source model back to the original developer. So say you're an independent developer, or an academic, or a startup, and you develop and release an AI model. The model works fine. The day you release it, it's great.
But five years later it gets built into a nuclear power plant, and then there's a meltdown of the nuclear power plant, and somebody says: oh, it's the fault of the AI. The legal liability for the nuclear meltdown, or for any other practical real-world harm that followed in the out years, would then be assigned back to that open source developer. Of course this is completely insane. It would completely kill open source, it would completely kill startups doing open source, it would completely kill academic research in the field, in its entirety. That's the level of playing with fire that these state-level politicians have become enamored with. Like I said, I think the good news is the feds understand this. I suspect this is going to get resolved, but it does need to get resolved, because as a country it just doesn't make any sense to let the states operate suicidally like this. So that's what we're doing. We call this our little tech agenda. We're extremely focused on the freedom of startups to innovate. We are not trying to argue many other issues; we operate in a completely bipartisan fashion, with extensive support on both sides of the aisle and for both sides of the aisle. So it's a truly bipartisan effort, very policy-based, and I think very much aligned with the interests of the country broadly. And then the other question we get, in some cases from LPs but in a lot of cases actually from employees, is: okay, why us?
With any policy question like this, there's always a collective action problem, a tragedy of the commons: in theory, every venture firm and every tech company should be weighing in on these things; in practice, most of them simply don't. So at some point it falls on somebody's shoulders to fight them, and Ben and I basically concluded that the stakes here were just way too high. If we're going to be the industry leader, we just have to take responsibility for our own destiny, for better or for worse. I think that's the cost of doing business for being the leader in the field.

36:43

Speaker B

Right. Now, before we get off the topic of AI, I want to go back to one question that was submitted: do you think usage-based or utility pricing is the right way to price AI, compared to seats?

41:35

Speaker A

Ah, that is a fantastic question. This is one of those giant questions; it's on my list of what I call the trillion-dollar questions, where depending on how it's answered, it will drive trillions of dollars in market value. So, usage-based pricing. It's actually fairly amazing what's happened, if you think about it from a startup standpoint, from a venture standpoint. And I'm not really talking about this in public, because I guess I don't want it to stop. You have these big tech companies with these incredible R&D capabilities building these big AI models with this incredible new kind of intelligence. And it turns out they were already in a war: the cloud war, the war for cloud services, AWS versus Azure versus Google Cloud and all the other cloud efforts. There's an alternate universe in which they just kept all of their magic AI secret and captive, and used it in their own businesses or used it to compete with more companies in more categories. But instead, what they've done is, well, commoditize is too strong a word, but they have proliferated their magic new technology through their cloud businesses, which have these incredible scale components, this hyper-competition between the providers, and prices that come down very fast.
So you've got the most magical new technology in the world, and it's being served up by those companies as a cloud-based business, made available to everybody on the planet to just click and use for relatively small amounts of money, on a usage basis. And usage is great for startups, because it means you can start easily. There's basically no fixed cost for a startup building an AI app; they don't have giant fixed costs, because they can just tap into the OpenAI or Anthropic or Google or Microsoft offering, intelligence tokens by the drink, and just get going. From the startup standpoint it's this marvelous thing: the most magical thing in the world, by the drink. It's absolutely amazing. And by the way, that model's working, and those companies are happy; they're growing really fast, happily reporting massive cloud revenue growth, and they're happy with the margins and so forth. So I think generally it's working, and those businesses are likely to get much larger. But, to the question, that doesn't mean that the optimal pricing model for, for example, all of the applications should be tokens by the drink. In fact, I think very much not. We actually have dedicated experts on pricing in our firm, and we spend a lot of time with our companies working on pricing, because it's really this magical art and science that a lot of companies don't take seriously enough.
And of course, a core principle of pricing is that you don't want to price by cost if you can avoid it; you want to price by value. Especially when you're selling to businesses, you want to price as a percentage of the business value that you're delivering. So some AI startups are pricing by the drink for certain things, but many others are exploring other pricing models. Some are just replications of SaaS pricing models, but you also have companies exploring models where, if the AI can actually do the job of a coder, or a doctor, or a nurse, or a radiologist, or a lawyer, or a paralegal, or a teacher, can you price by value and get a percentage of the value of what otherwise would literally have been a person? Or, equivalently, can you price by marginal productivity? If you can take a human doctor and make them much more productive by giving them AI, can you price as a percentage of the productivity uplift from the symbiotic relationship between the human being and the AI? What we see in startup land is a lot of experimentation on these pricing models, and I think that's super healthy. I always give this little speech: high prices are really underappreciated. High prices are often a favor to the customer. It's actually really funny; the naive view on pricing is that the lower the price, the better it is for the customer.
The more sophisticated way of looking at it is that higher prices are often good for the customer, because a higher price means the vendor can make the product better, faster. Companies with higher prices and higher margins can invest more in R&D and actually make the product better. And most people who buy things aren't just looking for the cheapest price; they want something that's going to work really well. So the high price, the customer will never say this, and it'll never show up in a survey, but the high price can actually be a gift to the customer, because it can make the vendor better, make the product better, and ultimately make the customer better off. So I'm very encouraged by the degree to which AI entrepreneurs are willing to run these experiments. We'll have to see where it pans out, but at least so far I feel good about the industry's attitude toward it.
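The contrast between the two pricing models discussed above can be sketched numerically. This is a hypothetical illustration: the token price, wage, and value-share figures are invented for the example, not quoted from the conversation.

```python
# Usage-based "tokens by the drink" vs. value-based pricing.
# All numbers below are illustrative assumptions, not real vendor prices.

def usage_price(tokens_used, price_per_million=2.00):
    """Cost-anchored pricing: charge per token consumed."""
    return tokens_used / 1_000_000 * price_per_million

def value_price(hours_saved, hourly_wage, value_share=0.25):
    """Value-based pricing: capture a share of what the same work
    would have cost if a person had done it."""
    return hours_saved * hourly_wage * value_share

# Suppose a task burns 5M tokens but saves 10 hours of a $100/hour
# professional's time. Same task, very different revenue:
print(usage_price(5_000_000))  # 10.0  -- priced near cost
print(value_price(10, 100))    # 250.0 -- priced as a share of value
```

The gap between the two numbers is the point of the "price by value, not by cost" principle: when the AI substitutes for expensive labor, cost-based pricing leaves almost all of the created value with the buyer.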

41:45

Speaker B

Awesome. As you were going through that, I had probably 10 more follow-up questions, but I'm actually going to go back to a topic you touched on briefly, the trillion-dollar questions. Will open source or closed source win? Feels like we've come down on this debate, or where do you put that?

46:44

Speaker A

No, I think this is still open, still very open. The closed source models keep getting better. If you take the temperature of the people working at the big labs on the big proprietary models, generally what they'll tell you is that progress is continuing at a very rapid pace. There's this periodic concern that shows up online, or in the market, that maybe the capabilities of these models are topping out. But the people working at the big labs are like: oh no, we have 800 new ideas. We have tons of new ideas, tons of new ways of doing things. We might need to find new ways to scale, but we have a lot of ideas on how to do that; we know a lot of ways to make these things better, and we're making new discoveries all the time. So generally, the people across all the big labs are pretty optimistic, and I think the big models are going to continue to get better very quickly. And then the open source models continue to get better too. Like I said, every month or so there's another big release, like this Kimi thing, where it's just: wow, that's amazing, they really shrunk that down and got that capability into a very small form factor. And then maybe the third thing to bring up is the other really nice benefit of open source, which is that open source is the thing that's easy to learn from.
And so if you're a computer science professor who wants to teach a class on AI, or a computer science student trying to learn about it, or just a normal engineer in a normal company trying to learn this new thing, or, by the way, somebody in your basement at night with a startup idea, the existence of these state-of-the-art open-source models is amazing, because that's the education that you need. These open-source models actually show you how to do everything, right? And what that's leading to is that the knowledge of how to build AI is expanding very fast, as compared to a counterfactual world in which it was all basically bottled up in two or three big companies. So the open-source thing is also proliferating knowledge, and that knowledge is generating a lot of new people. As you guys have all seen sitting here today, AI researchers are at an enormous premium. AI researchers today are getting paid more than professional athletes, right? That's the supply-demand imbalance; there aren't enough of them to go around. But again, shortages create gluts. The number of smart people in the world who are coming up to speed very quickly on how to build these things... I mean, some of the best AI people in the world are like 22, 23, 24. Kind of by definition they haven't been in the field that long; they can't have been experts their whole lives, right? So they have to have come up to speed over the course of the last four or five years.
And if they've been able to do that, then there are going to be a lot more people in the future who do the same. So the spread of expertise in this technology is now happening very quickly. So yeah, like I said, I think it's still a race. And by the way, look, the long-term answer may well just be both.

47:01

Speaker B

Both.

50:05

Speaker A

You know, like I said, if you believe my pyramid industry structure, then there will certainly be a large business around whatever is the smartest thing, almost regardless of how much it costs. But there will also be this just giant volume market of smaller models everywhere, which is what we're also seeing.

50:06

Speaker B

Yep, yep. Another question you had posed at that point in time was: will incumbents or startups win? And at that point in time I think there was a mixed bag in how the incumbents were approaching AI. I think that's radically changed in the last two years. And then on the counter side, the blossoming of startups, some now maybe migrating into the incumbent category just given how big they've become since that time. Do you want to take that question and give your assessment of the state of the world?

50:22

Speaker A

Yeah, so look, the big companies are definitely playing hard. Google's playing hard, Meta's playing hard, Amazon, Microsoft; there's a bunch of these companies that are in there very aggressively. And then you've got what we call the new incumbents, like Anthropic and OpenAI. But even in the last two years you've had this birth of brand-new companies that are almost instant incumbents, and you could say xAI is one of those, and Mistral. By the way, Mistral is the great outlier to my Europe point from earlier. Mistral is actually doing very well as sort of the French national, European continental AI champion, the exception that proves the rule. But there's a bunch of these now that are doing quite well and are becoming new incumbents. And then of course there are startups, actual foundation model startups, right? So we funded Ilya Sutskever out of OpenAI to do a new foundation model company. We funded Mira Murati, also out of OpenAI. We funded Fei-Fei Li out of Stanford to do a world-model foundation model company. So there are new swings, all early but very promising, to build new incumbents quickly. So that's all happening, and then on top of that there's just this giant explosion of AI application companies, right? These are usually startups that take the technology and then field it in a specific domain, whether that's law or medicine or education or creativity or whatever.
But again, here it's amazing how sophisticated things are getting very quickly. So talk about the application companies for a moment. A classic example of an application company is Cursor. They take the core AI capability, which they purchase by the drink, tokens by the drink, from Anthropic or OpenAI or Google, and then they build a code editor, what we used to call an IDE, an integrated development environment, basically a software creation system. So they build an AI coding system on top of the Anthropic or OpenAI or whatever big models. And the critique of those companies in the industry has been: oh, those are what are called GPT wrappers; that's kind of the pejorative. The idea basically being that they're not actually doing anything that's going to preserve value, because the whole point of what they're doing is surfacing AI, but it's not their AI; the AI that's being surfaced is from somebody else. So these are pass-through shell things that ultimately won't have value. It actually turns out what's happening is kind of the opposite of that, which is that the leading AI application companies like Cursor, first of all, are discovering they're not just using a single AI model. As these products get more sophisticated, they actually end up using many different kinds of models that are custom-tailored to the specific aspects of how these products work. So they may start out using one model, but they end up using a dozen models, and in the fullness of time it might be 50 or 100 different models for different aspects of the product. That's A. And then B, they end up building a lot of their own models.
And so a lot of these leading-edge application companies are actually backward integrating and building their own AI models, because they have the deepest understanding of their domain and are able to build the model that's best suited to them. And then, by the way, there's also open source: they're able to pick up and run on open-source models. So if they don't like the economics of buying intelligence by the drink from a cloud service provider, they can pick up one of these open-source models and implement it instead, which these companies are also doing. And so the best of the AI application companies are actually full-fledged deep technology companies actually building their own AI. And so that's, you know, that's...
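The multi-model pattern described here, a hosted frontier model per task plus an open-source fallback when the per-token economics don't work, can be sketched roughly like this. All model names, prices, and routing rules below are hypothetical illustrations, not Cursor's (or anyone's) actual architecture:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str                      # hypothetical model identifier
    usd_per_million_tokens: float  # hypothetical price
    hosted: bool                   # True = bought "by the drink"; False = self-hosted open source

# Hypothetical catalog: per task, models listed in order of preference.
CATALOG = {
    "autocomplete": [Model("small-local-code", 0.10, hosted=False)],
    "refactor":     [Model("frontier-coder", 15.0, hosted=True),
                     Model("open-coder", 0.60, hosted=False)],
    "chat":         [Model("frontier-chat", 10.0, hosted=True),
                     Model("open-chat", 0.40, hosted=False)],
}

def route(task: str, est_tokens: int, budget_usd: float) -> Model:
    """Pick the first model for the task whose estimated cost fits the budget,
    falling through to cheaper (typically self-hosted open-source) options."""
    for model in CATALOG[task]:
        cost = est_tokens / 1_000_000 * model.usd_per_million_tokens
        if cost <= budget_usd:
            return model
    return CATALOG[task][-1]  # last resort: the cheapest option in the list

# A large refactor under a tight budget falls through to the open-source model;
# a small one stays on the frontier model.
print(route("refactor", est_tokens=2_000_000, budget_usd=1.0).name)  # open-coder
print(route("refactor", est_tokens=10_000, budget_usd=1.0).name)     # frontier-coder
```

In practice the routing signal would be latency, quality, and privacy as well as cost, but the shape is the same: the application layer owns the dispatch logic, so swapping a hosted model for an open-source one is a catalog change, not a rewrite.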

50:51

Speaker B

Small models, though, right, Marc? When you think about God models versus small models, as you were describing it, those would be small. Would you categorize those as small models?

54:27

Speaker A

Well, some of them. I will let them announce whatever they're doing whenever it's appropriate, but some of them are now also doing big-model development. And this is also part of the learning of just the last two years, which is very interesting. Two years ago, three years ago for sure, you would have said, wow, OpenAI is way out ahead, and it's probably going to be impossible for anybody to catch up. And then it's like, okay, well, Anthropic caught up. But they came out of OpenAI, so they had all the secrets and knew how to do it. So okay, they caught up, but surely nobody can catch up after them. And then very quickly after that, there was a raft of other companies that caught up very fast. xAI is maybe the best example of that: xAI is Elon's company, and Grok is the consumer product version of it. xAI basically caught up to state-of-the-art OpenAI and Anthropic level in less than 12 months from a standing start, right? And that kind of argues against any kind of permanent lead by any one incumbent that's just going to be able to lock the entire market down, if you can catch up like that. And then, as we've discussed, the China part is all new in the last year, right? The DeepSeek moment, I think, was in January or February of this year, so less than 12 months ago. And now you've got like four Chinese companies that have effectively caught up. So again, these are trillion-dollar questions, not answers. But it's just like, wow, okay.
It's one of these things where once somebody proves that it's possible, it seems to not be that hard for other people to catch up, even people with far fewer resources. And I don't know exactly what that does. Maybe it makes you slightly more skeptical of the long-run economics of the big players. On the other hand, maybe it makes you more bullish on the startup ecosystem. It certainly should make you more bullish on startup application companies being able to do interesting things, which is why we're so excited about that. It should probably make you a bit more excited about China. On the other hand, Chinese competition putting pressure on the American system to not screw itself up is very positive, so it should probably make you a little more bullish on the US as well. So yeah, these are live dynamics, and I think we still need more time to pass before we know the exact answer. I should say this, because sometimes I freak people out when I say these are open questions. When a company is confronted with fundamentally open strategic or economic questions, it's often a big problem, because a company needs to have a strategy, and the strategy needs to be very specific. A company has to make very specific, concrete choices about where it deploys investment dollars and personnel, and the strategy has to be logical and coherent, or the company kind of collapses into chaos. So companies need to answer these questions, and if they get the answers wrong, they're really in trouble. In venture, we have our issues, but a huge advantage we have is that we can bet on multiple strategies at the same time, right? And we are doing this. We are betting on big models and small models, and proprietary models and open-source models, right?
And foundation models and applications, and consumer and enterprise. And the nature of the portfolio approach is that we are aggressively investing behind every strategy we've identified that we think has a plausible chance of working, even when it's contradictory to another strategy we're investing in. One reason is that the world's messy and probably a bunch of things are going to work, so there aren't going to be clean yes-or-no answers to a bunch of this; a lot of the answers, I think, are just going to be "and" answers. But the other is that if one of these strategies doesn't work, we're not trying to hedge per se, but we're going to have representation in the portfolio of the alternate strategy, and so we're going to have multiple ways to win. So anyway, that's the goal. That's the theory of why we're taking the approach in the space that we're taking. And that's why I have a big smile on my face when I say there are these big open questions, because I think that actually works to our advantage.

54:33

Speaker B

It's a good segue to a16z questions, because we've gotten a few in so far and we have a few that were sent in ahead as well. So I'll start with a broad topic: what is something you and Ben disagree and commit on?

58:23

Speaker A

Disagree and commit. You know, we agree... I mean, Ben and I, I was going to say, we're an old married couple. So we argue constantly, but we've been...

58:39

Speaker B

Where the romance is dead.

58:47

Speaker A

The romance is long dead. Yes, yes. The fire has long since gone out. But yes, we're the old couple in the park, squabbling all the time. So look, we debate everything; we argue about everything. That said, one of the things that's made our partnership work is that we do tend to come to the same conclusion. Each of us is open to being persuaded by the other one, and so we end up coming to the same conclusion most of the time. So I would say, specifically sitting here today, there are zero issues where I'm sitting here thinking, I can't believe I'm putting up with this crazy thing on his part that I really disagree with but feel like I have to commit to, and I don't think vice versa either. So we don't have any of those. Quite honestly, the biggest thing he and I discuss, and by the way this is not the most important thing we're doing, but it is a topic since somebody asked the question, the thing where I'm always second-guessing myself, where I never quite know where I should come out, is basically the public footprint of the firm. Our presence in the world in terms of public statements, controversy, how we vocalize and express our views on things. And I would just say there's a real tension there, maybe obvious, but very important.
Generally speaking, the more out there we are, the more outspoken we are, and the more controversial we are, the better for the business, in the sense that the entrepreneurs love it. The founders want to work with people who are brave, who take controversial stands and articulate things clearly; this is very clear at this point. And they want that for a bunch of reasons. One is because it's a demonstration of courage, which they appreciate. But the other is because it teaches them who we are before they even meet us. And that has proven to be an incredible competitive advantage. Long-term LPs will know this is why we started with a very active marketing strategy from the very beginning, and it completely worked. The whole thing was: if we're able to broadcast our message and be very clear about what we believe, even to the point where it's controversial, the best founders in the world are going to understand us before they even walk in the door, right? They're going to know us before they've met us, as opposed to everybody else in venture, at least at the time, who basically just kept everything quiet, where the founder has no idea who these people are and what they believe. And that worked incredibly well, and it continues to work incredibly well. By the way, it's generally true across the industry. On the other hand, there are externalities to being publicly visible and to being controversial, on many fronts. So I would say we are trying very hard to thread this needle. We're not backing off of generally being a firm that does a lot of outbound.
Eric Torenberg and the team he's built, which we've talked to you guys about in the past, is already off to the races. We're tripling down on the idea of being the leaders in articulating the tech and business issues that matter, the issues people need to be able to understand. And that's proven to be very effective. By the way, a fair amount of our comms are actually aimed at Washington, because if you're a policymaker in Washington sitting 3,000 miles away and your entire information source is East Coast newspapers that hate Silicon Valley, that's bad. So our ability to broadcast informed points of view on technology matters; we meet people in D.C. all the time who say, most of what I know about this topic I learned from you guys, because I listen to the podcast, I read the articles, I watch the YouTube channel. So we're going to continue to do that, and overall we're on our front foot on that stuff. But yeah, he and I do go back and forth a bit on exactly how many third-rail topics we should touch, and how frequently. And I would say we are trying to moderate that.

58:49

Speaker B

As Elizabeth Taylor said, as long as they spell our name right, it can oftentimes be good in most scenarios, particularly when it comes to little tech. And I think embedded in that question is also some degree of the relationship that you and Ben have, which is now going on 30-plus years at this point. So much so that Marc has become one person representing both; some people refer to Marc as "Andreessen Horowitz," the two of you combined into one person. That's the result of 30-plus years working together. Okay, so it's been two years since you reorganized around AI and launched AD. What do you think you got most right? And in hindsight, is there anything you underestimated or missed in that decision process?

1:02:47

Speaker A

No, I mean, look, we made plenty of mistakes, but I think those were the right calls. To back up: the whole theory of venture that we've had from the beginning, which many people before us have had as well, and which I think is very correct, is that the money in venture is made when there's a fundamental architecture shift, a fundamental change in the technology landscape. That's been true for venture basically forever. And the reason is that if you have a fundamental change in technology, you have this period of creativity in which very aggressive people can start new companies and have a shot at coming in and winning categories before the big companies can respond. If there's no fundamental change in technology, it's very hard to make startups work, because the big companies just end up doing everything. So venture lives or dies on these waves, these transitions. And so there's always this question. I would just say the best venture capital firms in history, I think, are the ones that were the most aggressive at navigating from wave to wave, right? And look, I was a beneficiary of this. When I came to Silicon Valley in 1994, there was no venture firm that was the Internet venture capital firm; it just didn't exist. But there was a set of venture capital firms at the time, including our backers at Kleiner Perkins, that said: oh, this is a new architecture, this is a new technology change. It seems totally crazy, everybody says you can't make money on it, these kids are nuts, but we're going to make those bets.
And so they were willing to invest. And by the way, KP in the 90s invested not only in us but also in Amazon and in Google, company after company after company. They invested in @Home, which basically made home broadband work. They invested in a fleet of companies. And they were a venture capital firm that had started in the 1970s around what were at the time called minicomputers, three generations of technology back, and they had navigated from wave to wave. The same thing is true for Sequoia; the same thing is true for basically any successful venture firm that has been in business for 30 or 40 or 50 years. So I think in this business, of all businesses, you need to get onto the new thing. Quite honestly, I thought it was pretty amazing that most of the venture ecosystem just decided to sit crypto out. The number of VCs we talked to, between, call it, the release of the Bitcoin white paper in 2009 and the beginning of the crypto wars in 2021, who just basically said, oh, we're not going to do crypto... I never quite know what to do with a VC who says, there's a new wave of technology and I'm very deliberately not going to participate in it. I'm always like, is that not the job? Right? So I was fairly amazed by the VCs that didn't make the jump to crypto. They looked briefly smart during the crypto wars of the last three or four years, and I think they probably look a little bit less smart now. AI is another one of these, where there are certain firms that are jumping all over it, and there are certain firms that are just sitting back and letting it happen. And by the way, there were certain firms that never made it to the Internet.
I mean, there were firms that were very well known in the 80s and very successful that just did not make the jump to the Internet and basically petered out. Anyway, long-winded way of saying: I think in this business, of all businesses, you have to jump on the new wave. And I think we got the magnitude of it right, that this is a fundamental transformation. Inside the firm, AD is doing great. AD itself, I believe, is also a beneficiary of AI, in two ways. One is that a lot of the kinds of products that AD companies build themselves benefit from AI. And then AI is also a driver of demand in other AD sectors, like energy and materials. So I think that is all very consistent and working well. By the way, crypto is back to being, I would say, an exciting industry as a consequence of all the policy changes, and I think there are actually going to be quite a few intersections between AI and crypto. And then biotech, bio and healthcare, are obviously going to be transformed by AI, both on the healthcare side and on the actual drug discovery side, and that's underway. So anyway, the individual efforts in the firm feel good and suitable for the time, and the interactions between the teams, the hybrid ideas, the companies coming at these things from multiple angles, feel really good. Maybe the corollary question is: what do we feel like we're missing right now? And I think the answer is really nothing. I don't think we're missing a vertical right now. There's not a specific vertical where we're like, oh, we need the equivalent of a new unit or a new fund. I don't see that at the moment.
I think it's more about executing extremely well on the verticals we have in front of us, and then being the best possible partner to the portfolio companies.

1:03:35

Speaker B

Yeah, actually on the point of AD: there's a lot of talk around AI taking jobs, et cetera. Ironically enough, jobs in AD sectors have never been more in demand in the physical world, related to energy, related obviously to the data center build-out, et cetera. So it seems the pendulum is also swinging, from an accelerant standpoint, from a society point of view. You've talked about the importance of society also needing to be ready for tech adoption. Have you seen that accelerating recently? And what's your sentiment on what it takes to actually increase that, to make sure the convergence of adoption falls in line with how quickly the tech is actually being implemented?

1:08:27

Speaker A

Yeah. So look, we've talked about this before. If you go back over, whatever, 300 years, there are just recurring waves of total panic and freakout caused by new technology. Or even go back 500 years, to the printing press, which basically went hand in hand with the creation of Protestantism, which really changed things. There have just always been continuous panics: there have been multiple waves of automation panics over the last 200 years. A lot of the foundational panic under Marxism was basically a fear of the elimination of jobs through the application of automation. A lot of the same arguments you hear today, that AI is going to centralize all the wealth in the hands of a few people and everybody else is going to be poor and immiserated, is basically what Marx used to say, which I think was wrong then and is wrong now; we can talk about that. And then even in the 1960s there was this whole panic around automation replacing all the jobs. It's long forgotten, but it was a big deal at the time, during the Johnson administration. You read these AI pause letters today, like the one that just came out a few weeks ago that Prince Harry headlined, of all people, talking about how AI is going to ruin everything. Well, in 1964 there was a group of the leading lights in academia, science, and public affairs called the Committee for the Triple Revolution. If you do a Google search on "Committee for the Triple Revolution, Johnson White House," this thing will pop up.
And it was a very similar kind of manifesto: we need to stop the march of technology today or we're going to ruin everything. And then even in the course of the last 20 years, there was a big panic in the 2000s that outsourcing was going to take all the jobs. And then it was actually robots, weirdly enough, in the 2010s, which is amazing because robots didn't even work in the 2010s, and kind of still don't. But there was a panic around that, and now there's whatever level of AI panic. And so, look, the way I would describe it is: we in Silicon Valley have always wanted the work that we do to matter. We spend most of our time, quite honestly, with people telling us that everything we're doing is stupid and won't work; that's the default position. And then at some point that flips into panic about how it's going to ruin everything. It's easy, sitting out here, to be cynical about that, especially when you see the patterns over time. My view is we need to be very respectful of that, and very aware of it. I use the metaphor of the dog that caught the bus: we always wanted to work on things that matter, we are now working on things that matter, and people in the rest of society actually really do care about these things. And it's our responsibility to think that all through very carefully and to do a good job, not just of building the technology, but also of explaining it. I think we have a real obligation to really explain ourselves and engage on these issues. In terms of how to measure how it's going:
It's the classic social science question. If you want to understand people, there are basically two ways to understand what they're doing and thinking. One is to ask them, and the other is to watch them. Every social scientist, every sociologist, will tell you this. You can ask people, through surveys, focus groups, polls, what they think. But then you can watch them, and do what's called revealed preferences, which is just observed behavior. And what you often see in many areas of human activity, including politics and many different aspects of society and culture over time, is that the answers you get when you ask people are very different from the answers you get when you watch them. You can have a bunch of theories as to why. The Marxists claim that people have false consciousness. The explanation I believe is just that people have opinions on all kinds of things, particularly when they're in a context where they get to express themselves, and they have a tendency to express themselves in very heated ways. But if you just watch their behavior, they're often a lot calmer, a lot more measured, and a lot more rational in what they do. And that's playing out in AI right now. If you run a survey or a poll of what, for example, American voters think about AI, they're all in a total panic: oh my God, this is terrible, this is awful, it's going to kill all the jobs, it's going to ruin everything, the whole thing. If you watch the revealed preferences, they're all using AI. They're downloading the apps, they're using ChatGPT in their jobs.
You see this online all the time now: I'm having an argument with my boyfriend or girlfriend, I don't understand what's happening, so I take the text exchange, I cut and paste it into ChatGPT, and I have ChatGPT explain to me what my partner is thinking and tell me how I should answer so that he or she is not mad at me anymore. Right? Or, I have a skin condition and the doctors, you know, da, da, da, and I take a photo and feed it in and I'm finally learning about my own health. Or I use it in my job: I had to get this report ready for Monday morning and I ran out of time, and ChatGPT really saved my bacon. So people in their daily lives, and just look at the data, are not only using this technology, they love this technology, and they're adopting it as fast as they possibly can. And so I tend to think the public discussion is going to ping-pong back and forth for a while, because there is this divergence between what people are saying and what people are doing. But I do think that the what-people-are-doing part is obviously the part that ultimately wins. And I think this technology is going to be exactly the same as every other one, which is that this is just going to proliferate really broadly, it's going to freak everybody out, and then 20 years from now everybody's going to be like, oh, thank God we've got it, wouldn't life be miserable if we didn't have this? Or, you know, five years from now, or one year from now, people are going to reach that conclusion. So I'm very optimistic about where this lands.
It's just that, you know, there will be turbulence along the way.

1:09:11

Speaker B

I'm smiling because I also witnessed that in the wild. Literally late last week, I was on a plane, and the guy next to me was talking to ChatGPT. I could see him, and he was like, help me draft an escalation letter to United for the delay on this flight. I was like, sir, you are on the flight right now. At least wait until it's over. It was very good, though. I'm sure he had a great email crafted as a part of that. So, okay, I'm going to switch gears to a few fun questions that were sent in, intended to be a lightning round. So: what is something you've changed your mind on recently? Bonus points if it was from someone younger than you.

1:15:06

Speaker A

I mean, it's like every day. It's just a constant. It's almost all in the realm of the possible. I'm terrible at specific examples, so I don't have one ready at hand. But like I said, it's often somebody showing up, either something somebody writes or something somebody says. And very frequently it's somebody who's very young. I would say it's a really routine experience.

1:15:43

Speaker B

Good way to stay young. Speaking of young, do you plan to be cryogenically frozen?

1:16:08

Speaker A

Not with current cryogenic technology. The track record of that is not great, and the stories are somewhat horrifying. But we've still got some time.

1:16:16

Speaker B

How do you stay grounded when your influence itself may distort reality around you?

1:16:32

Speaker A

Yeah, so I would say the good news on several fronts. One is, look, the concern is real, and it's hard for me to talk about with my Midwestern, you know, we Midwesterners either are very humble or we're really good at faking it. But it's hard to talk about, and it requires some introspection. The reality-warping effect is definitely real. By the way, there is a very big advantage to the reality-warping effect, which is being able to get people to do what you want them to do, so there is another side to it. But it is a concern in terms of having an accurate understanding of what's happening. I guess I'd say two things. One is that my partners, including Ben, are quite forthright in telling me when I'm wrong. But more generally, we are very exposed to reality. And again, you mentioned a way to stay young; the way to make sure our hair never grows back, or whatever, is that we run these experiments. We make these decisions about whether to invest or not invest, and we work with these companies and all their things, and reality kicks in quickly. The delusions don't last very long in this business, because these things either work or they don't. You have these long, elaborate discussions about theories on this and that and the other thing, and then reality just completely smacks you square in the face, like, you idiot, right? What were you thinking?
This is the ultimate frustration of business, which is also very motivating: the number of times that you think you've applied superior analysis, and you've either invested or not invested based on that analysis, and it turns out your analysis was just completely wrong. Right? You just completely overrated your ability to epistemically analyze these things, and you basically inflicted harm. The question is always, for any activity that we do, is it value-add or is it actually value-subtract? Right? And I think in this business, of all businesses, it's kind of like that, and that applies to all of my own contributions as well. So there is that. And then maybe the final thing is just that I do have the entire Internet ready to tell me that I'm an idiot. So that also doesn't hurt, and it does on a regular basis.

1:16:37

Speaker B

On the point you were alluding to earlier about decisions investing in companies, my favorite line, I think it was from the Cheeky Pint interview that you did, was: when you pass on a company and it doesn't go well, at least it goes bankrupt, right? But if it does fantastically well, you hear about it every single fucking day for.

1:18:48

Speaker A

The rest of your life. Yeah, for the next 30 years, reality smacking you in the face, saying, you fool, you had it. It's literally: you had it in your office. All you had to do was say yes. And by the way, this is the story the VCs tell each other. Every great VC basically has this history of, my God, it was in my office, the thing was in my office, and I said no. And if I had just said yes. So, yes, the constant reminders in the Wall Street Journal and on CNBC every day that you made a giant mistake are very good for the old humility factor.

1:19:08

Speaker B

Yeah, very, very humbling. Helps you stay grounded all the time. Last question: do you plan to go to Mars if and when that opportunity presents itself?

1:19:46

Speaker A

Probably not.

1:19:55

Speaker B

My subliminal Zoom background wasn't sending you the positive vibes, then. That's what it was for.

1:20:00

Speaker A

Well, I'm not even willing to leave California. I'm barely willing to leave my house. So, yeah, maybe by VR. And then we'll see what happens. I mean, look, having said that, I think they're going to pull it off. And so, I don't know, I don't want to predict, this is not a prediction, but I would not be surprised if within a decade there are routine trips back and forth. So this may actually become a practical question. And by the way, I do know a lot of people who are.

1:20:05

Speaker B

Probably going to go, myself included, put me on that.

1:20:35

Speaker A

Oh, fantastic.

1:20:38

Speaker B

The flights around the world have prepared me for the six-month journey to Mars, so I will be just fine.

1:20:41

Speaker A

Thanks for listening to this episode of the a16z Podcast. If you liked this episode, be sure to like, comment, subscribe, leave us a rating or review, and share it with your friends and family. For more episodes, go to YouTube, Apple Podcasts, and Spotify. Follow us on X @a16z and subscribe to our Substack at a16z.substack.com. Thanks again for listening, and I'll see you in the next episode. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.

1:20:49