AI + a16z

Feed Drop from The Generalist: Why a16z's Martin Casado believes the AI boom still has years to run

82 min
Dec 30, 2025
Summary

Martin Casado, a16z general partner, discusses why he believes the AI boom is still in its early stages (circa 1996), his market-first investing approach, and key AI investments including Cursor and World Labs. He covers the evolution of a16z from a small generalist firm to a specialized organization, concerns about Chinese dominance in open source AI models, and why AI coding could be a multitrillion-dollar opportunity.

Trends
- AI market still in early stages with years of growth ahead
- Shift from company-out to market-in investing approach
- AI coding tools creating multitrillion-dollar market opportunity
- Fragmentation of AI markets during growth phase
- Chinese leadership in open source AI models
- 3D content creation becoming accessible through AI
- Individual prosumer AI adoption preceding enterprise deployment
- Traditional moats still necessary for AI company defensibility
Full Transcript
Today we're replaying a conversation from The Generalist with a16z general partner Martin Casado. Martin shares his perspective on the AI boom, why he believes we're still in the 1996 moment of the cycle, how a market-first lens shapes his investing, and why he's skeptical of AGI-centric framing. He also reflects on his path from game engines and simulations to pioneering software-defined networking and investing at the frontier of AI and infrastructure. They close with why AI coding could be a multitrillion-dollar opportunity, how a16z evolved from a small generalist firm into a specialized organization, concerns about Chinese dominance in open source AI models, and how World Labs is tackling the 3D representation problem, with implications for robotics and VR. If you ask me, what is the one area where AI has surprised you? It's in coding. I've been developing my whole life, and I would never have guessed it'd be this good. You have mentioned that some of the energy that you're seeing in AI really reminds you of the '90s dot-com boom. This feels a lot like early '96, but I don't think we're anywhere close to a late-'90s-level bubble. No, I think that could come. The current technology wave is one where you can actually deploy capital and you can get revenue on the other side of it. And I think that is what the market is trying to normalize. But there's true value being created in this AI wave, and I think that if money's not following it, it's going to miss the greatest supercycle in the last 20 years. How would you describe your investing style today? What is your filter? I used to think from the company out. I've stopped that now; I think only from the market in. The reality is the market creates the company in most cases, not the other way around. And so I always start with, what is the market? And then I ask the question, is this the right founder for this market? It's clearly not perfect, and in fact you'll be wrong a lot of the time.
But I would submit that if you invest in this way, you will be right in a way that's better than market norm. Hey, I'm Mario, and this is the Generalist podcast. As the saying goes, the future is already here, it's just not evenly distributed. Each week I sit down with the founders, investors and visionaries living in the future to help you see what's coming, understand it more clearly, and capitalize on it. Today I'm speaking with Martin Casado, a general partner at Andreessen Horowitz and leader of the firm's infrastructure practice. Martin has had one of the most fascinating journeys in Silicon Valley, from writing game engines for budget video games in the '90s to selling his startup for approximately $1.3 billion in 2012, and now investing in the next generation of AI companies like Cursor and World Labs. In our conversation, we explore why Martin believes the AI boom has room to run, how he identifies market leaders before consensus forms, and what China's dominance in open source models means for American technological sovereignty. If you like today's discussion, I hope you'll consider subscribing and joining us for some of the incredible episodes we have coming up. Now, here's my conversation with Martin. Awesome. Well, I've really been looking forward to this a ton. You have such an interesting background and have sort of charted a lot of these different cycles in technology as both a founder and an investor. So I'm excited to get into AI today in particular. But to start, I wanted to maybe begin with a part of your history that intrigued me, which is that in the early 2000s, as far as I could tell, you were spending a little bit of time at the Department of Defense, working on simulations. Tell me about that. Actually, it was the Department of Energy, so I worked at Lawrence. Yeah, Lawrence Livermore National Lab. So actually I'm going to rewind it like just a couple of years.
So I actually paid for a lot of undergrad writing game engines for video games. So back in the '90s, you only really got into computers if you wanted to hack or make video games. Like, that was it. I mean, it wasn't what it is now. And I kind of took the video game route. And so I did a lot of, you know, game development. And in college I did a lot of engine development. And so what I was interested in was things like 3D engines and game physics and game mechanics. And that pushed me towards computational physics, like simulation. I mean, the game industry is a very tough industry to be in. And I was actually quite interested in science and I was quite interested in physics. And so that pushed me towards the national labs. And so, yeah, my first job was doing basically computational physics, working on these large simulations at Lawrence Livermore National Lab. And I started interning in like the '97, '98 timeframe. And then I took a full-time role in 2000. Do you remember what games might have used some of the engines you were building? This is so funny. So I worked. The company probably doesn't exist anymore. But I worked with a contract outfit called Creative Carnage, and they worked with the budget division of, I think it was either Acclaim or Accolade, and it was called Head Games. And I think we have the great distinction of having had the games with the lowest-ever score on PC Gamer. So they would do games like, I remember there was Extreme Paint Brawl, a mountain biking game, a skydiving game. And so this was in the very early days of 3D engines, and we didn't quite understand the game mechanics. And so it was like a super budget, you know, game shop. But these were games that you'd go to Walmart and buy. I mean, they were very legitimate games. And so that was kind of my shady entree into this. I love that. The Razzies of video games. Exactly.
Yeah, yeah, budget games. Yeah. This is, you know, off-piste at this point, but are you still a gamer? Like, do you find yourself interested in that as a media form? So I've never been a big gamer as far as playing games, but I've always loved creating games, and I still do. That's what I do in evenings now. So I love music, I love narratives, I love programming and I love games. And so actually, if you track some of the, I mean, this is not great work, this is all hobby work, but, you know, I worked with Yoko on AI Town. I've recreated a bunch of old 8-bit games using AI. And so it's actually still a big passion of mine. But again, I'm not a big gamer. I don't, like, sit down and play games. That's really cool. I knew you still remained, you know, kept your technical chops up, but didn't realize you were applying it in that way. That's super interesting. By the way, AI makes that a lot easier. I would almost certainly not be programming like I do now if it wasn't for AI, for sure. Okay, well, we're definitely going to dig into that from a few different angles. You know, after Lawrence Livermore and the Department of Energy, you started your PhD at Stanford and then sort of dropped out to start Nicira. And, you know, I wondered about that part of the journey specifically, because you've made a few big leaps in your professional life and that was maybe, you know, it sounded like a rather significant one. Had you at that point imagined yourself being an academic indefinitely, or had that always been something that you were interested in, you know, the idea of starting something? Yeah, so I actually didn't drop out. I finished my PhD. It's kind of funny. The adage at the time was that the only way to be a successful founder is you have to drop out of your PhD, right? Because, you know, Sergey and Larry Page were on the floor above where I was in Gates.
And mostly, almost all of the successful founders at the time were PhD dropouts, whereas I'd actually completed. So, no, I actually didn't plan to be a founder at all. I actually had a faculty offer at Cornell at the time, and we're talking 2007 now. So my plan was, you know, I did this PhD work. I'd done a startup previously as a very small thing. It was called Illuminex Systems, which, you know, instead of raising money, we ended up selling. And so I liked being a founder, but, I was so naive. I was so naive. I thought this is something that, you know, you can just start a company and do it for a couple of years and then sell it and go do something else. But, you know, I started the company in 2007, and then 2008 hit, and that was a hell of a reality check, because, you know, this is this fork in the road. Like, do you do this company in the worst economic environment since the Great Depression, or do I go be an academic? And, you know, it forced me to really decide what I wanted to do, and I decided to do the company. Was that a difficult decision at the time? It was so hard. I mean, it sounds daunting given the environment, but, you know, in your spirit? It was so hard because, you know, I mean, especially because this is when Sequoia had released their slide deck, "R.I.P. Good Times." Everybody was, you know, RIF'ing their companies. I mean, the economy was tanking. It was very, very tough. And part of it was honestly just responsibility. I was just like, I convinced all my friends to join this company, and I would feel like such an asshole if I just, like, left. That was part of it. And another part of it is I just felt like there was work to be done that I hadn't finished. And I just am of the temperament that if I start something and I don't finish it, it'll bug me forever. And so I kind of didn't want that.
But I'll tell you, when I made the decision, I called my mom and she said, Martin, you're an idiot. So, for what it's worth, I was pretty alone in the decision. Wow. No kidding. Well, it ended up, you know, being both technically, or technologically, an important company and, you know, having an incredible outcome. Yeah, it worked out. Yeah. And, you know, in sort of reading about part of that period, I was interested to see just how important you really became at the acquirer, VMware, from sort of contemporary press. At the time, you'd really taken on a growing role and scaled the sort of team that you were leading to really a rather large size. So it seemed like that was also clearly an option for you. How did you make the choice to flip over from operating at a very, very high level to the investing side? Yeah, so, you know, I learned easily as much at VMware as I did in the startup, and it was a phenomenal experience. And, you know, it's one thing to do a startup and, you know, to do early founder sales and to build a team; it's an entirely different thing to get, you know, a business to a billion globally with all the partners, and especially within a large organization where, you know, you're overlaying with kind of an existing core team and other product teams, etc. I mean, it was a great experience. But one thing that's important to remember is I started the research for this in probably 2005 and 2006, right? And so by the time, you know, I was at VMware for three years, it had already been 10 years. So we got acquired. By 2012, it had already been 10 to 11 years that I'd been working on exactly the same thing. And so I've just found that my career goes in kind of decade epochs, right? So in my 20s, I was a write-papers, write-code engineer, a poorly dressed PhD student that knew nothing about business and nothing about anything. And it really was. That's what I did.
I mean, I wrote a lot of papers, I built a lot of systems, and I loved that. And then in my 30s, basically almost to the day, I mean, it was this journey which is like building products, building a business, building a team, and doing that globally. And I did think to myself, like, you know, I'm so enamored with technology and I'm so enamored with startups, and I love innovation. You know, you ask yourself, okay, so what do you do next? Right? And I like being close to, like, where things are being created. And so that means that you get involved in the startup ecosystem. But do I want to spend another 10 years doing a journey that I've already done, or do I want to zoom up one more level? And so I almost feel like in my 20s, the abstraction was a product or lines of code, and then I zoomed out a little bit and the abstraction was one company. And then when you join a firm, you zoom out a little bit more, and then the abstraction is a company and you actually see the experiments in parallel. And I will tell you, from this vantage point, even though I had done two companies, I learned so much more than I ever would have if I had done another company. So for me, it was the right decision. Does that mean that the sort of glide path you're on is toward, I don't know, governor of California, the next abstraction layer, mayor of San Francisco? I will never. Listen, I had a small taste of politics last year when I thought that there was nobody defending AI from a policy standpoint, and I realized I will never, ever, ever, ever go into politics, man. As far as I can tell, everybody just lies to each other all the time. It is not for me. Yeah, it sounds like it would be infuriating. You know, Andreessen had invested in Nicira, and so you'd obviously built this relationship with Mark and Ben. But how did the sort of decision to come aboard actually come about? Were they, you know, pitching you? Were you pitching them?
How did you guys make the call? Yeah, it's kind of a funny story. It's actually not a super public story. So Mark and Ben invested in Nicira as angel investors. You know, this is before the fund existed to begin with. And I mean, actually, the way that I met Ben was that Andy Rachleff was on my board. Andy Rachleff is the famous Benchmark partner; you know, he's a professor at Stanford. And I was looking for a CEO, because, you know, I was a very technical CTO, kind of co-founder. I didn't know anything about enterprise sales. And he's like, you know, I know this guy. He's just coming out of HP, sold the company. His name is Ben Horowitz. And so I actually met Ben Horowitz to interview him for CEO. And you know what he told me? He said, I'm too rich. And you're like, all right, this guy's not the guy? No, he was. I mean, he was so great. I actually learned more from him in that 45-minute meeting than from any other advisor I talked to up to that point, which had been years. I mean, it was the most eye-opening thing ever. And so he said, listen, we'd love to angel invest. Mark and I are trying to figure out what we're going to do. They did some angel investing, and when they started the firm, then we went and pitched and we raised. I mean, at the time we called it a Series B, but it was really a Series A from them. And so, you know, we kind of have a history from before Ben joined the board. And so, you know, listen, I built the company under his guidance. He was very critical to basically every aspect of it. And so when I was thinking about what to do next, I reached out to Mark. And I actually felt it would be better to reach out to Mark, because, like, Ben was on my board. And so that relationship is, you know, it's kind of like your PhD advisor. You're never not their student.
And, you know, I think, like, with a board member, you're never not the founder that they work for. And I said, hey, listen, Mark, you know, I'm interested in the next steps. And one thing people I think don't appreciate about Mark and Ben is how good of operators they are. And so they took it very seriously. They themselves managed the conversation. I mean, I was still really trying to figure out the next thing to do, and Mark was really texting me every single day. You know, they brought me in. I mean, the close process that these guys run is just absolutely world class. And of course I knew them very well, so it's not like that would have really been necessary. But, you know, they knew what they wanted. They had an opening in infrastructure. We had a long relationship, you know. And so, in fairness, I didn't even really talk to anybody else. You know, I mean, there were some kind of very early conversations, but I knew that, you know, that's where I wanted to land. And so it was kind of a mutual process that was pretty streamlined. Amazing. Thanks for sharing that. I have to jump back to when you say the 45 minutes with Ben taught you more than every other advisor. Do you remember anything from that meeting in particular that stood out? Yeah, a bunch of things. I mean, one of them is, you know, I was talking about pricing. By the way, anybody who works with me is going to realize that, like, half of what I say is just stolen from Ben, because what I'm about to tell you, I tell people all the time, but it's so true. And so I was asking a question about pricing, and he says, I just want you to know this is the single most important decision you'll make in the history of the company. One decision. And really, for your net worth as a human being, this is the most important decision. And let me describe why. Well, you own a bunch of the company.
The valuation of the company is going to come down to growth and margins, growth and margins. The single most important decision impacting that is going to be pricing. And so everybody views pricing totally glibly, or they kind of make it up, or they're ad hoc, but they don't understand how important that single decision is towards, you know, the health and ultimate valuation of the business. And then he actually broke all of that down. And at the time, software was going through a pricing change, like it is today. So it was going from kind of on-prem perpetual to recurring. And this had massive impacts on how you comp your sales team, massive impacts on how you do go-to-market, and massive impacts on what numbers meant for a healthy business. And so he just walked through all of that in this single discussion. And just so you know, we're seeing the same shift now as we go from basically recurring license to usage-based billing. And so even, you know, this conversation I had in 2009 is still relevant today, and I draw from it. So I think this is a good example of, you know, the deep insight that he was able to impart from his operational knowledge. Yeah, incredible. If you were to pick a VC firm that has changed the most since you joined a16z in 2016, arguably you would pick your firm, the firm you work at, in terms of transformation. So much seems to have changed in that time period. And so I wonder, when you look back on it, what was the Andreessen of 2016 like? And where do you see the biggest differences? Oh yeah, it's totally different. I think I was the ninth general partner, you may want to check on that, and when I joined there were probably 70 people at the firm. On Mondays we could all sit around the same table. Everybody was kind of a generalist. You know, we didn't have a notion of a more senior investor below the GP ranks. Like, we didn't have any sort of progression ladder.
It was actually a specific tenet of the firm that you'd have, you know, relatively junior, we called them deal partners, DPs, and they would only stay for two to four years. And the idea was that, you know, you get more network that comes in, you know, they're quite relevant, and then also you kind of spread the a16z network as they go join other firms. Yes. So it was very, very different. So now all of that's different. Right. Like, GPs are specialized, we have multiple funds. You know, we have a clear progression ladder of investing partners. We're, you know, 600-some-odd people, maybe more. You know, we invest at all sorts of different levels. There's a lot of process and methodology. And so I would say the primary motivator for all of the change is the question: how do you scale venture capital? Yes. You know, in some ways, and I've said this before, it's kind of this historical quirk that venture capital firms have the same partner model as, like, a legal firm or a dentist's office or a doctor's office, which is this partnership model where everybody's kind of equal, etc. And it made sense when the market was a thousandth the size. Like, if you think about when we created venture capital firms, the market was so small, but it's grown now, and it's professionalized as it's matured a lot. And so now firms have to answer the question: how do you scale deploying money, how do you scale AUM, how do you scale decisions, how do you deal with conflicts, et cetera. And so that's been the prime motivator behind many of the shifts that we've made at a16z. You mentioned that one of the big shifts is this verticalization, and you head up the infrastructure practice. For someone that maybe wouldn't understand how to put the parameters around that, what falls in the bucket of infrastructure and what might fall beyond it, so to speak? So the roughest cut is: if the buyer or user is technical, it is infrastructure.
So it is the stuff to build the stuff. Like, apps are built on infrastructure. And in particular, it's computer science infrastructure. Right. So, like, you could say infrastructure is, you know, construction and rebar and concrete; this is computer science infrastructure used to build software. And so it's the traditional compute, network, storage, security, dev tools, frameworks, et cetera, et cetera. Now, if there's a piece of software and the user, the buyer, is in marketing or in sales or in a flooring shop or is a veterinarian, that's not us, that's apps. For us, all of the consumers, whether they're an admin type or a developer, that's infrastructure. And, you know, in looking at the team that you've built out, one of the sort of striking things is it's an extremely technical team. You know, seeing folks talking about sort of building custom AI GPU setups and so on and so forth. You know, when you think about many of the great venture investors over the past however many years, pick a few decades, a lot of them are not super technical, right? Like, you can look at Mike Moritz or John Doerr or, you know, Peter Thiel's maybe in between a little bit. But ultimately, I would say probably not a technical person in the way that we're talking about it here. Why does it matter? You know, why is it important to have that level of technical expertise to do this style of venture investing? So I think, actually, the bigger priority for hiring on our team is product experience, especially in infrastructure and enterprise, and less pure technical prowess. Like, nearly everybody on the team has either built a company or run a product team. There's very few that were, like, a low-level engineer or low-level researcher. And so I would say that is the primary focus. And the reason is because we invest somewhere between the seed and, let's call it, an early C.
And often you can't judge a company purely by financial metrics, but often there's enough to evaluate so that it isn't just a bet on the founder. And so what are you left with, if that's the case? What you're left with is market understanding. And I just think it's very tough to do market understanding in infrastructure if you don't have a product background, which, by the way, is way more important than the technical background. If you don't have a product background, you can't evaluate the market. And then if, you know, in infrastructure you don't have some technical basis, I don't even think you can have the conversations that are important. And then, of course, to map any given company to that market, you have to have also that same understanding. I think it's a great point, though; listen, I think some of the best infrastructure investors ever were not classically technical. Like, Mike Volpi is phenomenal. Daglioni is phenomenal. Fented is phenomenal. These are the greats. And I think that a lot of this is because we've had almost a generational shift in the industry, where before, it was such a kind of obscure knowledge that understanding the people and the networks and where they came from was critically important. I think now it's matured to the point that you actually can take a bit more of a systemic knowledge based on the fundamentals of the industry rather than those networks. And so I think this is more a testament to the maturity and the size of the market than to us as investors. And I will also say many of the top investors right now in infrastructure are non-technical, and they're phenomenal. Right. There's many great folks out there. So this is just our approach; it's definitely not the only approach to being successful. That makes sense. You talked about how your life has sort of fallen into these decades, and it is almost a decade, I think, from when you joined a16z. With the benefit of that decade of learning, how would you sort of describe your investing style today?
What does your filter on this market look like? So I've kind of decided I just need to remove, we as investors need to remove ourselves from predicting the future. Which is a funny thing, because we're supposed to be predicting the future. I think that's a mistake. And so our approach is very straightforward. We believe that the founder network, the founders themselves, are smarter than customers. They see the future, not us. They're definitely smarter than investors. And so if there are three or four very good founders that are working on a space, we just assume that space is good, because, A, they're founders, and B, they're taking the opportunity cost of doing it. You know, they're risking their time, you know, their family's wealth, in order to do this. And so to first order, we just say, okay, what are the interesting spaces? And there's, you know, there's a whole methodology we use to do that. And if there's an interesting space, the next question we ask is: who is the leader in that space, and is it too early to determine? And if, you know, it's too early, we wait; and if we determine the one we think is the leader, then we try and make the investment. The thing about this approach is, A, it kind of removes us from, you know, like, there's so many aphorisms on investing, like, this is a great founder, and the founder has grit, and, you know, like, all of these things. But at the end of the day, all of that you have to kind of filter through yourself and your team, and we're all very biased, and none of it can you systematize. Whereas if you're simply asking the questions, A, is this a legit space, and B, is this the best company in the space, this is something you could actually throw work at. And it's clearly not perfect, and in fact you'll be wrong a lot of the time. But I would submit that if you invest in this way, you will be right in a way that's better than market norm. Do you try?
I mean, you must actually, to some extent, still evaluate the founder. And I imagine you've had plenty of meetings where you've, you know, met a founder and felt sort of palpably, this is an extremely impressive person. Do you, it almost sounds like you distrust that emotional response in yourself, or how do you sort of think about that? This is a great question. So if there's one thing that has shifted in how I think about investing and how I think about companies, it's that I used to think from the company out, right? So I'd look at the company and I'm like, the founder is great, the product is great, the technology is great, the go-to-market is great. I've stopped that now. I think only from the market in. The reality is the market creates the company in most cases, not the other way around. And so I always start with, like, what is the market? And then I ask the question, is this the right founder for this market? The answer to your question of, like, is this a great founder or not a great founder, I don't think there's a single answer. It strongly, strongly depends on what they're setting out to do. Now, I do weight a lot of things. I do weight things like earned knowledge. Like, have you earned the knowledge to be in this market based on your experiences in the past? Like, were you in the bowels of Uber building out their storage system, and now you're bringing it to the rest of the world? You know, I'm a very product-focused investor, and so I just tend to resonate with product-focused founders that see the world in terms of, what is the product we're going to create and how am I going to insert that into the market? As opposed to pure technologists, who don't care about that, and pure salespeople, who also don't care about that. Right. So I'm a very product-focused investor, but I will say that my umbrella answer, my macro answer to you, is that almost all questions I ask about companies actually stem from the market they're in. Really interesting.
You mentioned that you're sort of happy to wait until a leader has emerged in a certain market. How do you determine when that's the case and, you know, if it's sufficiently durable? Is it, like, true market share, sort of, you know, looking at it from that vantage, or are you sort of making a few guesses? Yeah, that's the part of the job where it's an underdetermined system. Right. There are way more variables than equations. We just do our best, and our analysis is multifarious. I know as investors, and probably fueled by things like X, we like to reduce VC to: here are these five things, here's our basic thesis. The reality is most investment decisions take a lot of work. You consider an awful lot of things, and then at the very end, you kind of look at it and you make a judgment on that. So what are the things we look at? Like I mentioned, founder-market fit is very important. Tactical approach is very important. The market itself, to me, is incredibly important. Like, I've just learned that if you're selling into a market that's shrinking, life sucks, even if it's a huge, huge market. Let's say, like, switching and routing is this huge market. But if it's only growing 3%, or it's flat, or it's shrinking, you know, you're dealing with budgets that are contracting, people that are losing their jobs; like, all of the incumbents are going to be fighting for their lives. So I'm very sensitive to markets that are growing versus shrinking. You know, ability to hire, ability to fundraise, I mean, all of these things go in. I mean, the final memos for investments tend to be fairly comprehensive. And so all of this also necessarily requires us to do a lot of work before companies are fundraising.
And so there's a necessary part of this motion, which is that you're constantly trying to enumerate the companies that are out there and then doing the analysis to determine who is in the lead and who is not. And then, you're right, at the end of the day you just kind of go, okay, we did all of this work and we think you can make this argument here. And we get it wrong a lot, right? Nobody can predict the future. Yeah, that's the beauty of this asset class, right? Yeah, 100%. You just have to be comfortable knowing that even if a company looks like the leader now, anything can happen. They could get acquired the next day in an acqui-hire they decide to do. A new company can show up that didn't exist before. There could be a platform shift, et cetera. And so the entire goal is: can you, over a set of investments, beat the upper quartile of the other venture capital firms? That is the goal, and you take the losses along the way. We're talking about the importance of the entrepreneur or the executive. On X, I saw you mention that you thought Hock Tan, the Broadcom CEO, was one of the great CEOs of the past decade plus. And that's not a name I usually hear discussed in that debate. Can you tell me where that comes from and why you think that? I'll make a stronger claim: I think Hock Tan may be the best, outside of maybe Jensen and a handful of others. He may be the best CEO the industry has ever seen in infrastructure. He's just unbelievable. Somebody should do the Hock Tan book, or overview, or portfolio, or focus piece, or whatever. The employee retention is unbelievable. He's managed to do these incredibly complex acquisitions, and I will say this.
You know, normally when you buy a company, any company at all, for the team that integrates the acquired company you've got all these lawyers and corp dev and biz dev and HR people running around. You've got this entire committee for integration. When Hock acquires a company, even something the size of a VMware, the M&A committee is Hock Tan; the integration committee is Hock Tan. The guy is just legendary for how hard he works and how he runs his meetings. He knows everything about his business. He knows all of the numbers. And what's interesting is he's a business guy. He's not a technologist, nor a product guy. And he's stayed away from the limelight; to his credit, he just focuses on the business. But there's a lot we can all learn from what he has done and what he's going to do. I really do think he's probably the most iconic CEO right now. Well, you've put a good marker on my editorial calendar there. I'm going to make sure to do some more research and get him on. I don't know if he's... You should. I mean, yeah, why not? Yeah, that's a great thought. You had another tweet that I thought was really interesting and that caused a little bit of a stir in the VC world. It's so funny what things happen to cause a stir or not in these discussions. The tweet, for folks who didn't see it, is the idea that "non-consensus investing is where the alpha is" is actually quite dangerous at the early stage. There's a little bit after that, but that's the meat of it. Why do you think it struck such a chord and caused such, not outrage, but discussion? Well, I think it just managed to piss everybody off. Every constituency found a reason to hate it, right? The ideal tweet. Yeah, that's right. It's like the mother of all Rorschach tests, right?
And yeah, there's this sense outside of VC that VCs are just pattern matching and add no value. So for those people it was a confirmation. They're like, oh, I knew it, VC is just consensus, and now Martin is just acknowledging it. Which I totally wasn't, but we can get into that. And then for the investors it was an attack on their originality: I don't do that, I'm not consensus. You had many junior investors who don't know what they're talking about saying a bunch of random stuff, but you also had some very senior investors who were like, oh, I do all these non-consensus bets, and whatever. So everybody found some reason to take umbrage. By the way, I hadn't even thought deeply about the tweet, because it's a fairly innocuous thing I thought was just so obvious. I figured I'd say some obvious thing on a Sunday morning, and it just turned out to be a lightning rod. What prompted you to say it, and what were you trying to communicate that a lot of people probably talked past? Well, I work with a large team of investors, and I'm often in the position of providing guidance, and if you're not considering follow-on capital, then you're not fully evaluating the opportunity set. And I've found that the cliche VC aphorism rulebook is: everything must be alpha, and this and that. So I just thought, there are plenty of people talking about finding the diamond in the rough. There are plenty of people talking about finding the white space. But there's another side to it that isn't as represented, which is that as you go to later and later stages, VCs become more and more consensus-driven. And that's exactly because they're putting more money in and they need more predictability. It follows naturally out of the system.
So in a way, this is the most banal tweet you could ever imagine. It's actually totally obvious. I'm not saying I am consensus; I've done tons of non-consensus stuff. I'm just saying that if you don't consider this, it's dangerous. And so often we don't talk about that. So that was the genesis: a totally banal tweet from a very obvious place. Well, it's always good to cause a little bit of a stir every once in a while, especially over something that is ultimately benign. I just feel like X is totally chaotic, right? There are some tweets where I'm like, this is so deep and pithy, and they'll be ignored, and others where it's this kind of pointless thing. And so in a way, again, just like looking at the market as opposed to the company, I think tweets are much more indicative of the people receiving them than of the person actually tweeting. Speaking of quite consensus sectors at the moment, let's get into AI and this wild world we're living in, which you're spending a lot of time on. I know you have mentioned that some of the energy you're seeing in AI really reminds you of the '90s dot-com boom. What are those symbols of that effervescence you spotted that brought that to mind? Yeah. So let's see, I turned 20 in '96, and I was interning at Livermore starting, I don't remember, '97 or '98. I was going back and forth for a few years, and then I worked full time at Livermore in 2000. And I just remember this kind of slow boil that erupted during that time. When I started computer science as an undergrad, let's say '95, it was kind of this wonky discipline. It was actually in a little bit of a slump, but the web was just starting, and you could feel this excitement.
And then by the time I graduated... I went to Northern Arizona University, a school in Flagstaff, Arizona, where my father was a professor. Even in this small mountain-town school, we had students who were graduating and getting these crazy jobs as programmers all over the nation, and they were being actively recruited. There was just all of this excitement. And when I would go to the Bay Area, I would get caught up in all the founder-itis that was going on. And you had everything: you had all the parties, you had all of it. I remember the first time I landed in Silicon Valley, I drove down the 101 and thought, all these billboards are talking to me, right? There was just energy, and it was in the streets. You'd have the Linux conference, and the Python conference was going on, and everybody would show up, and all these companies were getting created. It was just optimism and chaos in every sector you looked at. And then it feels to me that things got a bit institutionalized; it's been kind of just another day of doing business for the last 20 years. Now I feel, again, you have a lot of the same type of energy. I mean, the billboards we've had for a very long time, but again, you've got these cultural movements that follow it, and all the founders and all the investing going on. So I just feel like it has the same level of energy that we had in the late '90s. Do you think we're circa '96, or closer to circa '99, early 2000? '96. '96? Really? You think we've got some room to run? I think people forget what a bubble looks like. Every time valuations go up, people say bubble.
But listen, a bubble is when you get into a taxi and the driver is giving you stock tips. That's a bubble. Remember all of the crazy excesses and all the crazy blow-ups. It's totally, totally different. So this feels a lot like early '96. And the big difference is that back then companies weren't even making money. And it lasted so long, by the way. People were decrying a bubble in '97 and '98. Yeah, I believe that. And '99 and 2000. You'll be right eventually. Yeah, people were saying it, right? And they actually had really legitimate concerns. You had WorldCom, which had $40 billion in debt, super levered, a single supplier that was underlying all of this stuff. You could IPO a company with basically no revenue, very little revenue. Many of these companies with crazy valuations had no money; they were making nothing, right? So there were these very legitimate concerns, and none of those really exist today. The companies that are bankrolling a lot of the infrastructure have hundreds of billions of dollars on the balance sheet: Google, Meta, Microsoft. OpenAI has real revenue, Cursor has real revenue. And the valuations aren't totally out of whack with the revenue. So yes, markets will oscillate for sure, they'll go up and down and you'll have pullbacks or whatever. But I don't think we're anywhere close to a late-'90s-level bubble. No, I think that could come. And probably will. Right, and it probably will. But I don't think we're anywhere close. I just think people forgot what a good bubble looks like. They are a lot of fun, man. So yeah. I mean, I don't know where we are in the cycle, and I didn't live through that period as an adult, so I can't compare. But I think we're at the stage of taxi drivers knowing these things very well.
I do think the valuations are certainly getting spicy at some levels. Maybe they're not quite at the peak. Yeah. But honest question for you: do you think right now is out of whack with 2021? I don't think it's 2021. Nope, I agree. I think we're not there yet. But I don't know, does it feel like 2019, mid-2018? To me, yeah, that seems about right. And so maybe we've got another 18 months or two years. But I don't know; if I was writing big checks in, let's say, 2019, I don't know how many of those I would have been thrilled about in 2022. Right, for sure. You have valuations waxing and waning. I think it's great to actually compare it to 2021. In 2021 there was a lot of excitement, but it wasn't actually driven by real business usage, right? It was Covid, the flight to online, and then a bunch of private capital flooding the market. Remember, Tiger, Coatue, Insight: all of these were deploying very heavily. And so in a way there was this excitement and exuberance, but not for any sustainable business reason. It was really an influx of capital and this quirk of the macro that wasn't sustainable. But with AI, we're three, four years in, and it looks sustainable. We understand retention, we understand growth, we understand margins. Yeah. And much less of a tech revelation. Yeah, that's right. So we actually have a foundation underlying it. So I would say it kind of feels a little bit 2019-ish, but it's real. And so, unlike the 2021-22 collapse, you could argue that we're still early in the cycle, and yes, it's going to continue to oscillate, but I don't think we're anywhere near the top. Interesting.
Yeah, I want to think about it more, but I think you make a lot of very good cases there. We don't have the Tigers coming in, but we do have a lot of sovereign wealth fund money perhaps coming in, and a lot of big corporate cash, right? Totally, totally. That's a different level. This is actually very interesting; maybe on this podcast we won't have the time to dig into it. The very interesting construction of the current technology wave is that you can actually deploy capital and get revenue on the other side of it. And these are very capital-intensive businesses, right? I think that is what the market is trying to normalize. You can't even really enter the casino without a billion dollars for these foundation models, for example. So I agree we're in a bit of terra incognita as far as understanding what the capital structure looks like a long time after you've raised this much money. But what we do know is you can actually convert it into revenue and into users. And so I think this is where we're going to see a lot of rationalization and normalization in the market. But again, I don't think it's speculative, right? It's just a matter of trying to understand what the market is doing. I ultimately think markets are very efficient, and so we'll rationalize, but there's true value being created in this AI. And I think that if money's not following it, it's going to miss the greatest super cycle in the last 20 years. Yeah, that's the other side of it: you could really miss out. You mentioned that there's obviously something really valuable being created, and I fully agree. But I was interested in the fact that you see these studies.
You know, MIT had their study not long ago that said, what was it, 95% of these enterprise deployments are not delivering value. Why is there that gap in what we're seeing? Is it a measurement problem? Is it a deployment problem? I think one of the problems with AI is that it's been around forever, and so we have all these presuppositions about what it is, right? So here's my view on AI right now. AI as it is today is very much an individual, prosumer-type technology attached to individual behavior: it's me using ChatGPT, me using Cursor, me using Ideogram, me using Midjourney. And the value that organizations get is that their users are using ChatGPT, their users are using whatever. That's what it is. However, there are platform teams within the enterprise, and their boards are saying, we need more AI, go implement stuff. And so they're scrambling to do these AI projects, and of course those are failing, right? This is such a different technology and a different shift. So if you measure some internal effort to go do stuff by yourself, then of course the failure rate is going to be very high. That has nothing to do with the fact that many tens of millions of users are now using these technologies, getting value from them, and driving that value into whatever their workplace is. And so I just think that when it comes to this wave of AI, we have to realize it's a very new thing. It's going to have a totally different adoption cycle. We've not yet cracked the direct-sales enterprise. I would say, for those enterprises that are listening: rather than doing your own kind of project, for now it's probably better to work with a vendor or a product company that's actually doing these things. Then over time, and by the way, the Internet was the same way.
Just like the Internet, it will make its way into the enterprise in a way that we all understand. But it's just not there yet. What are the ways you've ended up incorporating it into your life most, would you say? And on the other side, are there areas where you're especially protective of not using it, to preserve your thinking? Like I mentioned, I code with AI. The reason I stopped coding is that I just didn't want to learn the next framework, right? The thing about developing in the late '90s is you'd sit down at your computer and you'd write code, and it was all kind of there; you didn't have to learn a lot of stuff. You'd mostly just write code. Then through the 2000s I did my PhD, so I invested enough time to understand all the frameworks and whatever. But then I'd step away because I was building a business or becoming an investor, and when I'd go back to it, I'd have to learn all of these new things, especially with all this web stuff. And you're not learning anything fundamental to computer science, or anything foundational, or anything that's useful outside of that context. You're learning whatever stupid design decision some random person who created the framework made. That's really what slowed me down from coding. And with AI, I don't have to deal with any of that. I'm like, whatever, give me boilerplate for an app so I can write a video game, and all of those decisions are made by the AI. So I use AI coding very heavily. I do it almost every night, and it's really just been lovely. It's kind of my relaxing time, but it's really just lovely to be able to focus on code again. You know, another kind of personal thing I like.
So, you know, I love reading historical books, books on historical figures who are closely tied to innovation or economics. And often I have a lot of questions. So these days what I'll do is read a chapter, and then when I walk my dog (this is silly), when I walk my dog, I use Grok's audio mode and actually have conversations about the chapter. And in a way, I don't even care if the questions I have are analysis and synthesis questions rather than fact-based questions, like: make an argument for why the School of Salamanca in Spain in the 1300s was a progenitor of the Austrian School of Economics, right? So I actually have these conversations about what I read, and I find that I think more deeply about it when I do, and I actually find it interesting and more well-rounded. So that's personally been great. I think I'm a bit OCD when it comes to writing, so I will not use AI for writing. I think writing is thinking, and I use writing to think. And so if something did that for me, I wouldn't be thinking. This has just been a lifelong tool for me, and so I don't think I've ever used AI to write a single thing. Maybe that's not true, I don't mean to be too categorical, but I never, never use AI for writing. So that's the one area I'm really trying to protect. Really interesting. I can't help but ask what some of those historical books you've enjoyed might be. Yeah, I've just been into Eisenhower lately. I've been going through a bunch of Eisenhower books.
What's interesting about Eisenhower, right, is he was a conservative president, a moderate, but it was also under his watch that the Warren Court was created. And the Warren Court, very famously, was the vanguard of the civil rights movement in terms of overturning policy and getting rid of a lot of the Jim Crow laws, et cetera. And so a lot of my questions have been about how, today, people criticize the court system, and there's a lot of rhetoric about the court being stacked, et cetera. So I've actually been having conversations with Grok comparing and contrasting the rhetoric around the Warren Court with the rhetoric around the current Supreme Court. And it's so interesting how similar the criticisms actually are. Of course it's a different environment in a different era, but I just feel much, much closer to what's going on now as being part of a historical trend rather than some total aberration. I like to see it as part of the broader narrative. Perhaps starting in Covid, or perhaps you would start even earlier, it feels very, very clear that we're living in history in a way that maybe wasn't as obvious a couple of decades ago. What's cool is to actually rewind the clock and listen to the rhetoric during the Vietnam War, and listen to the rhetoric during the dot-com boom, and listen to the rhetoric around the Warren Court, and realize, I don't think it's actually that much different. We always tell our stories like, oh, these are unprecedented times, we've never done this before, blah, blah, blah. But they said those words back then too. They did. They were like, oh, this is unprecedented. We've never done this. It's the end of the nation. Really. Wow.
For me, anyway, it's nice to realize that this is a continuum. It's been going on for a long time. The US is antifragile. It is the best country on the planet. We always have challenges to deal with, and we do a good job of dealing with them. Are there parts of the AI world right now that you consider almost a mirage? Something that looks like it could be something, but for some fundamental reason is unlikely to last? Folks have talked about prompt engineering, for example, as something that's maybe a transitory state of affairs, and I wonder what you might point to that has a similar quality. So I think what we're seeing is two pretty distinct paths that these AI model companies take. On one of those paths, the model just does more and more and more, right? You basically have one model that does more and more. Take these coding CLI tools like Codex, for example. You could say, I'm going to make it super complicated and do all this prompt engineering and have all this software around it, or I could just expose the model to the user. And it seems in these situations that if you just expose the model to the user, it does better, because the model is smarter than whatever code you're going to write, and it's so hard to interpret what the model is going to say anyway. So that's one path. We're also seeing this in the pixel space: instead of having a model for images and a model for 3D and a model for music and a model for characters, I'm going to have one video model that does everything, and by the way, I'm going to make it interactive. This is Genie 3: it just does everything, right? So there's one path, which is the god-model path. And the argument for that is the bitter-lesson argument.
You have all the data, the model is smart, et cetera, and that's clearly a viable path. The other path is the composition-of-models path. Take the pixel case again. Actually, I just saw this amazing video this morning on X where somebody made a video and said: I used Midjourney to make the images, I used World Labs for the 3D scenes, I used Suno for the music. It's this composition of different models, and they ended up with this just beautiful video. And the argument for composition is that if you have an opinion on what comes out, you'll have a lot more control, right? If I want fine-grained camera movements and field of view, I'll need 3D. Maybe I want very specific images and consistency across them, so I'll need an image model. I want the music a certain way, and I may want to change it over time, so I want a separate thing for music. And I honestly believe we're going to see both of these paths. I think the biggest mistake is people assuming it's going to be one or the other, right? Everything's going to be one model? The problem with that is composition is just real. We've got an existing set of tools, we've got existing toolchains that use components of outputs that you're going to want to use. So I think that's a mistake. And the other mistake is: oh, these big models aren't going to be useful, you need a collection of small models. Clearly that's not true, because the bitter lesson will continue to make these single models much more powerful. That's really interesting. You mentioned Codex there, and you've talked about using a lot of these tools in your evenings. Which brings me on to Cursor, which I know you're very involved with. When you were doing the analysis on AI code generation and thinking about who the leader was here, was it just blazingly obvious that it was Cursor? How did that come about?
Yeah, listen, it depends on what you're doing. I'm a developer; we were looking at developer tools, and the developer tool is the IDE. Now listen, the coding space is enormous, right? There are repos, there's testing, there's PR management, et cetera. But in the case of coding, Copilot had given us a glimmer of how powerful AI could be when integrated into the development process. The Cursor team executed so exceptionally well. Half of our companies were using it, and at the time they were just very, very focused on building out the IDE and being the leader. So for that bet, it was very clear. That didn't mean we didn't think CLIs were a good bet; it was just different, right? And at the time there really weren't many approaches using the PR as the interface to the developer, like using GitHub as the interface for the developer. But it was also pretty clear that Cursor's ambition was to change all of code, and it was very clear that code was evolving. So from our perspective, a very, very product-focused team working on tools for developers, with this kind of broad vision, was the right bet for developer tooling for us. And of course, that's worked out quite well. It's just so important, I said it before and I want to say it again, that this doesn't mean there isn't tons of value in all of these other areas. Coding models: tons of value. CLI tools: tons of value. I mean, this space is enormous if you just do rough math. Let's say there are 30 million developers. There are more, but let's say it's 30 million. Let's say they make on average 100k a year. What is that, a $3 trillion market or something? Yeah, and let's say you get 10% of it; that's $300 billion. I mean, we're talking about an infinitely sized market. And if you ask me what is the one area where AI has surprised me, it's in coding. Listen, I've been developing my whole life, and I would never have guessed it'd be this good. So you've got an infinitely sized market that AI is very effective at going after, and so I think we're going to see a bunch of super successful companies. What do you think will dictate the winners and produce real defensibility here? Because, given the size of that market, you see lots of interest from large companies and insurgents trying to take a piece of that space. So my general rule of thumb is that while markets are accelerating in their growth, they will fragment. That's a natural law of physics. And so everybody worries about defensibility on day zero, which is just dumb, in my opinion. It doesn't matter until markets slow down or consolidate, right? And it literally just falls out of the logic: listen, if I'm in a company and I've got to spend a dollar, am I going to spend it in an area where I don't have competition or where I do have competition? Well, of course you're going to do it where you don't, and where you're the leader. That's why we've seen basically fragmentation in most of these domains; we've got companies growing in most of these domains. So when it comes to code, what keeps them defensible long term? Here's my current view: I don't think there's any inherent defensibility in AI. I don't think that exists. I think AI overcomes the bootstrap problem. It tends to solve your customer acquisition problem because it's so magic, and that won't be the case forever. But right now it's like somebody invented cold fusion and people show up for the electricity, right? It solves your customer acquisition problem.
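Writing out the back-of-envelope sizing above makes the numbers concrete. These are the speaker's rough spoken inputs (30 million developers at roughly $100k a year), not real market data, and the 10% capture rate is his illustrative guess:

```python
# Back-of-envelope sizing for the AI coding market, using the rough
# inputs from the conversation (illustrative assumptions, not real data).
developers = 30_000_000       # "let's say there's 30 million developers"
avg_value_per_dev = 100_000   # "let's say they make on average 100k a year"

total_spend = developers * avg_value_per_dev   # annual developer spend
capture_rate = 0.10                            # "let's say you get 10%"
captured = int(total_spend * capture_rate)

print(f"Total developer spend: ${total_spend / 1e12:.1f} trillion/year")
print(f"10% capture:           ${captured / 1e9:.0f} billion/year")
```

On these inputs, the total comes to $3 trillion a year, with a 10% capture worth $300 billion, which is why the space reads as a multitrillion-dollar market.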
But from a defensibility standpoint, you have to go to traditional moats, right? We know how to do moats in this industry, whether it's a two-sided marketplace, an integration moat, a workflow moat, whatever it is. You as a company still have to build that. The good news relative to incumbents, and this is my last point on this, is that when you have new behaviors, incumbents have a tough time executing. And we clearly have new behavior here, right? It's an individual behavior, it's a new relationship; there's actually often an emotional component too, like the shift between GPT-4 and GPT-5. We saw that. And so I think new behaviors advantage challengers, and I think we're seeing this play out. So I worry much less about the incumbents. If you're a founder listening to this and you're doing an AI company, priority zero is finding that white space, not worrying about defensibility, in my opinion. And then once you find that white space, rely on traditional moats to protect it when the market slows down. There's another of your companies that I have been so interested in from the outside that I'd love to hear your story with them, and for folks who maybe haven't come across them yet, to understand what they're building. And this is World Labs, which, reliably, any time you go on X there is now something really interesting, some sort of 3D model that World Labs is responsible for, that is quite... It's magic. It's the most magical. Yeah, mesmerizing. So how did that come about? Yeah, this even goes back to my experience writing 3D engines for video games; it's a particular interest. So listen, World Labs was created by the true pioneers in 3D. It's Fei-Fei Li, who did ImageNet; the super famous Fei-Fei.
It's Ben Mildenhall, who created NeRF, the neural radiance field. It's Christoph Lassner, who was doing Gaussian splats before they were cool. It's Justin Johnson, who's the style transfer guy. I mean, they've just got the most epic team. The easiest way to articulate what they're doing is they want to take a 2D representation, like an image, and create a 3D representation from it, like a scene or a world. And it's a very, very tough problem, because if you just have an image, you can't see everything, right? You can't see the back of the table, you can't see behind you, et cetera. Right? So there's a ton of generative components to it. And it's also a tough problem because the way that you train models is with data, and there just isn't a lot of 3D data. So it's kind of this unsolved problem, but it turns out to be a very horizontal problem. Right? So, for example, why would you need a 3D scene? Well, you could use it because you wanted to create a pure virtual environment that you want to interact with, right? You want to place a character in it, you want to change the angle, you want to augment it, you want to step into it, like VR, right? You could also use it for any sort of design, you know, like architecture. You could use it for AR. Right. Actually, I just saw this great thing: this guy named Ian Curtis did this cool thing where he had a 3D representation of his living room in his Oculus, and then he was overlaying changes on it that he made with World Labs. So he could switch between the World Labs recreation and the real room, and he could change furniture and change things like that. And then ultimately this is very relevant in robotics, right? The problem with just 2D video is you don't have depth. You can't see behind things. Right.
And so you need to create some 3D representation if you want a traditional program, let's say a robotics brain, to decide things like, how far away is this? What might it look like on the other side? How do I plan around these things? So the more 3D representation you can create, the smarter an embodied AI would be. And so they're really trying to tackle this kind of holy grail of problems: I just have one view of the world that's 2D, which is what our brain does, and then how do I recreate that in 3D so that I can process it? The robotics piece is the piece that I think is so interesting. Obviously, the VR applications feel very obvious in a way, but on the other hand, that's still not a market that is massive at this point. But the robotics piece feels like it could be, I don't know, truly a game changer when you combine it with some of these other developments we've seen in that industry over the past couple of years. It really feels like it's addressing one of the major limiting factors. Well, let's go back to the market size first, just because I feel this is a mistake we keep making in the AI world, which is nobody would have said 2D images were a market. Nobody. Right. There's a whole class of companies in the past that were small acquisitions, that were never really profitable, that were trying to build 2D images. And yet now we have companies like Midjourney, which famously was bootstrapped to hundreds of millions of ARR. We've got BFL, we've got Ideogram, very successful companies, et cetera. And so I think in general, when you bring the marginal cost of creation to zero, the market size explodes: the marginal cost of image creation, of video creation, of music creation, et cetera.
And then again, I know that this wasn't the point of your question, but I think it's very important to touch on: if you bring the marginal cost of 3D content creation to zero, I think that market is infinitely large. I mean, one of the reasons VR sucks is because there's no content. I've got a Quest 3, I love it. But I go, I spend 24 bucks, and I get the stupidest little thing. And so I would say a lot of the metaverse, I hate the term, but just to get us all on the same page, VR, online gaming, et cetera, is really gated on content. It is so hard to build 3D scenes. It is so expensive. And so I think that markets that weren't markets before can become markets. That said, I agree with you that long term, if we're going to have embodied AGI, I'm not an AGI guy, but I'm going to say, if we're going to have embodied AI that looks at the world and then creates a representation of that world and decides how to interact with that world, somewhere, somehow you're going to need to recreate that world in 3D, right? You can't do it with language, right? The description I like to give is, let's say I blindfold you and I put you in a room. The lights are off and I try to describe the room so you can navigate it, or pick something up, or do any task. The words are just not going to be accurate enough, right? I'll be like, there's a cup in front of you, it's about three feet away. That won't work. On the other hand, if I give you a camera, then you can kind of recreate the 3D and your position in that 3D, and of course you can now navigate the room. And so there's something very fundamental to this solution space for embodied AI. You said a few things there that were super interesting to me. One of them, I do want to dig into the VR piece.
It's true that there's not enough content, but isn't the real constraint there the hardware? Like, functionally there's more than enough content for us to live almost infinite lives, for us to be in it, but until it actually feels sufficiently high fidelity, it's just not enjoyable enough. Right? Maybe for some people. I mean, listen, I love VR. Every time a new VR thing comes out, I buy it. And my problem is, unlike a video game, which is deeply immersive and where you've got a ton of content, I walk a plank and then I'm done, right? I shoot a zombie and then I'm done. I just feel like you don't have enough immersive content. And so it's probably somewhere in the middle. If you look at a lot of online, purely virtual experiences, the gating factor is: how do you build these very, very large worlds? It takes teams of people years to build these levels and these worlds and these 3D environments. And what's very interesting, I think this is such an important point, is I work very closely with World Labs. I go in on Wednesdays, I work with the team, I write code. I mean, it's all silly, you know, I'm like a beta user, right? I do some kind of silly things, but I'm very, very close. And they work with a lot of artists. And these are traditional, true artists that have backgrounds in 3D, and they make these beautiful worlds. They spend a ton of time on it, they'll spend tens of hours making them. And so what you end up with is a very detailed, very rich virtual world that would have taken maybe a year if you had a team of humans. One person can now do it in less time, but it still requires a ton of craft and a ton of work from an artist.
And so I think that technology like this is going to increase the number of virtual scenes and worlds that are there for us to view and explore. And I think as a result, any market that requires these is just going to grow, because you can produce more, at better quality, and faster. Really interesting. And then you said you're not an AGI guy. Tell us why. I think at the theoretical foundations, I don't think we have figured out how the human brain works. And I think maybe a language model or something is a small subset of it. But I tend to agree with Yann LeCun, which is, you know, we'll get to AGI at some point in time and we keep chipping off pieces of it, but there isn't a straight path from where we are now. It's not like you just add compute and data to the existing models and then we have AGI. I think that we just keep chipping off pieces of the problem. And so for me, using AGI as some goal or measuring stick or destination, all it does is encourage very sloppy thinking, because it ends up becoming the place that you put all of your expectations and all of your fears. And right now it's not even a real place. And so I really try and force people not to use the term AGI, not to think in terms of it, because it's very hard to have a conversation when it's such a holding place for magic and magic fears. And so I like to talk about concrete problems, solutions, products, technologies, technology trends, technology directions. And then maybe at some point in time we will know the architecture that will provide human-level intelligence, with all the flexibility, that can learn just as fast, et cetera, and then we can start talking about AGI. But until that time, it just erodes conversational quality. It does not enhance it. I fully agree. It feels like it obscures meaning much more than it reveals anything. Yeah, it just doesn't help in a conversation. Right. It really encourages lazy thinking.
It quickly becomes almost entirely semantic, where you're like, well, actually, what do you mean by AGI? Oh, well, this is what I mean. Okay, well then, you know, this is how we sort of... It also becomes a universal justification without having to actually have a justification. Why is the marginal risk for AI greater than traditional computer systems? Oh, AGI. That doesn't mean anything. It's not a statement. Right. Why is this going to put n people out of a job? Oh, AGI. Now, both of these are great questions. The labor question is an important question. The marginal risk question is an important question. We should have those discussions, but not in terms of AGI, because that's not a thing. We should do it in terms of what's actually happening now. And in my experience, every time you say AGI, this is what people use to justify whatever their fear is, whatever their concern is, or whatever their most optimistic hope is. And the problem, when you dig into it, is this kind of belief that there's this magic thing that will provide it. So for me, it's conversational and discourse quality. That's the problem with the term AGI, not the fact that someday we will have computers that are as smart as humans. Of course we will, but right now that's not helpful. You mentioned that, you know, compute and data are not going to be enough for us to have a straight shot to, you know, AGI, whatever we might call it, let's say a brain equivalent in every way to a human. How does that impact how you think about the progression of this from an investment perspective? Do you expect continuous large leaps in the capability of these models over the next few years? Or do you think we should expect maybe more incremental improvements from now on? I think we're part of the long march of technology to solving all problems.
And even if we stopped AI research right now, there's been enough that's been unlocked to create a tremendous amount of value, and there are going to be new things that are unlocked. And so I just view this as the same continuum that we were on 10 years ago and 20 years ago and 30 years ago. And, you know, we're going to continue to have to unlock new things. And I just feel, because these things are so startlingly impressive, that sometimes we don't view this as part of a continuum that has to keep going. It's like we've already solved it, and now we just have to sit back and wait for it to happen. I don't believe that. I believe, listen, the way that I view investing now is the same as I did five years ago and 10 years ago, and we need to have more improvements. But what I do acknowledge is that we've unlocked a ton. And so now is a great time to productize and to turn the work that's been done into real businesses. You've talked before, I think maybe tweeted, about the fact that a lot of US companies end up using Chinese open source models. Do you think that there is maybe more awareness of why that might not be the best thing, and that that is primed to change? Or is it something that you're currently quite worried about? No, I think it's something we should all be concerned with. You know, it's kind of funny. This is the reason why I got so involved in the political discussion, which I'll never do again, just because it's such a terrible space to be in. But you had VCs who should know better and who should be pro-innovation talking against open source. Academia was entirely silent. And so it's like the United States just decided that it wasn't going to invest in the number one thing for proliferating technology, the way that we see it. And I think largely because of that, the proliferation of open source has been pretty muted in the United States. And I do think that, you know, China really answered the call.
They've done a phenomenal job. I would say many of the best AI teams are in China. Their models are many of the best models, and they're being used all over the place. And so I think in some ways we had the wrong approach as a nation and as an industry. Now that is being rectified. I think that's being understood. But I think now we have a lot of catch-up to do. I think that, you know, our models aren't the best. And honestly, a lot of it just comes down to policy questions, right? There's a lot of risk in releasing something open source if somebody is going to try to find something, you know, copyrighted in it and then sue you for it. Right. There's a lot of spurious litigation around these things. And then we have these policy proposals that would be disastrous. Like, you know, SB 1047 from Scott Wiener. I mean, part of that was actually developer liability, right? So that means that if somebody uses this in a way that caused a mass casualty event, which, let's say, a car crash, right, you could sue the developers. And so I think from the United States standpoint, we've not done what we've done in the past. We've used the precautionary principle, we've changed the way that we historically approach technology from a policy standpoint. And we've done it in a way that slowed down innovation, and as a result we're on our back foot. And being on our back foot with China with respect to technology, I don't think, is in the national interest. So listen, I've been very encouraged by what the current administration has done with regards to AI. I think their policy recommendations have been fantastic. And so I am cautiously optimistic that things are changing, but we're not there. We've got a lot of work to do. Amazing.
Well, as a few wrap-up questions for you as we move into our final few minutes here, I always like to ask a few philosophical ones. One for you is: if you had unlimited resources and no operational constraints, what is an experiment you would like to run? Do I have ethical constraints? I'd say no. I'd say, for the record, no people were harmed in the making of this thought experiment. Yeah, 100%. Nature versus nurture. I would, like, go to space. I would clone a whole bunch of people. I would have a whole bunch of controls. I would play out their lives. Can I live forever too? Sure, yeah, why not? No time constraints. Okay, so no ethical constraints, no time constraints, unlimited resources. Yeah, a hundred percent. Nature versus nurture. Yeah. And you can imagine how I'd do it, right? I'd clone a whole bunch of people, I'd minutely tweak these things, and I'd let them live out their entire lives. I'd simulate entire worlds for them, and I'd answer the question: what is innate and what is not? And then ultimately that'll answer the question on free will too, right? Yes, that's right. You probably have a few things that fall out of that. So, you know, after 300 years of doing this experiment, the title of the report will be What Does It Mean to Be Human? There you go. Excellent. That's a great answer. What do you think is a tradition or practice from either another culture or time period that you think we should adopt more widely today? Oh, siestas. Easy. That's a layup. This is your Spanish heritage, I assume? Yeah. And unfortunately these days it seems only southern Spain. I come from the most backwards part of Spain, right? And, you know, the siesta is a God-given right. I think everybody should take a nap. There you go. Agreed. Final question.
If you had the power to assign a book to everyone on earth to read and understand, what would you want to put on their reading list? The Weirdest People in the World. Hmm, that's a good one. You know, David Deutsch's The Beginning of Infinity, of course. Taleb's Statistical Consequences of Fat Tails. That one I've never even heard of. I mean, it's a technical book, the statistical consequences of fat tails. And then Hamming's book on how to be an engineer. Huh. Could you tell me a little bit about The Weirdest People in the World and the final one? I think I know The Weirdest People in the World, but there's a sort of, I don't know how to describe it, a bit of wordplay in that title that reveals something about what it's really about. Yeah. I mean, it basically says, listen, the Protestant Reformation changed the way that we associate with ourselves and with each other. We used to be very, very tribal, and that had certain impacts on things like trust, and the Protestant Reformation kind of forced nuclear families and forced separation. And that required us to be pro-social. And then it also has a second thesis on how free markets also produce pro-social behavior. The reason that I would include it is, listen, I think for humanity, if you just take the long arc here, because we're being philosophical, the ultimate enemy is entropy. It never goes away. I don't think any single tribe solves that. I think you need pro-social behavior to do planetary-level innovation, and understanding how we work around trust and coordination and cooperation is very critical. So listen, I don't think it's the ultimate book, but I think it's great at that. By the way, one more book I'd add is The End of History and the Last Man. Is that what it is? I don't know. I don't think I've heard of that. Yeah, Fukuyama. Yeah, that's a great book. Of course. The End of History. Yes.
Yeah, The End of History. I've never read that. Oh, it's phenomenal. It's interesting because he's actually since recanted on that. But it's this Hegelian view of humans, and his conclusion is that liberal democracy is the end of history. And I think that's being questioned right now. But he does such a great job of taking the Hegelian view that, right, there is this dialectic, there is this evolution of humans, we are continuing to get better. And then, listen, he thought maybe we'd arrived. I think the conclusion now is that we haven't arrived. But I love this idea that we as a species are improving how we interact, how we have policies, how we socialize. And so all of these books kind of have this general theme of: listen, we as a species are going to continue to solve problems, we're going to continue to have to work together, we're going to continue to have to cooperate, and ultimately, listen, it'll be us versus entropy. No better place than that to end. Thank you so much, Martin. I really, really enjoyed it. That was a lot of fun. Thanks so much. Thanks for listening to the A16Z podcast. If you enjoyed the episode, let us know by leaving a review at ratethispodcast.com/a16z. We've got more great conversations coming your way. See you next time. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any A16Z fund. Please note that A16Z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.