Uncapped with Jack Altman

Uncapped #35 | Trae Stephens from Founders Fund

52 min
Dec 3, 2025
Summary

Trae Stephens from Founders Fund discusses choosing meaningful quests in the AI era, the challenges of building defense technology at scale, and how Founders Fund maintains contrarian investing discipline. He explores the tension between easy AI applications and hard problems worth solving, the importance of manufacturing capabilities, and the role of faith in informing ethical decision-making around defense technology.

Insights
  • AI's primary distortion isn't enabling hard things—it's making uninteresting, unhard things trivially easy, creating opportunity cost problems for top talent
  • Defense tech success is 30% product, 70% business/government navigation—building something great means nothing without understanding procurement and policy
  • Concentration of capital into top performers (40-50% of fund into 5-6 companies) matters more than initial check size; avoiding mediocre follow-ons is critical
  • Origin story quality is a reliable signal for founder quality and mimetic vs. genuine motivation—best companies have compelling founding narratives
  • Just war theory, rooted in Christian intellectual tradition, provides the ethical framework underlying Western defense law and policy
Trends
  • Shift toward theological revival and traditional faith in tech, particularly post-COVID, as alternative to Eastern mysticism and ayahuasca spirituality
  • Defense tech becoming viable venture category through software-defined, hardware-enabled primes rather than pure software plays
  • Manufacturing and production becoming the new venture frontier—scaling from hundreds to tens of thousands requires vertically integrated capabilities
  • Low-cost autonomy replacing traditional force projection (aircraft carriers, manned combat) across all military domains
  • Policy development following technology deployment rather than preceding it—regulatory guardrails emerge after societal boundaries are crossed
  • Founder-operator model gaining legitimacy in venture as alternative to pure financial VC approach
  • Consensus AI categories becoming commoditized; differentiation requires founder quality rather than market positioning
  • Space warfare and domain autonomy becoming critical to great power competition strategy
  • Kamikaze rounds (massive checks at inflated valuations) destroying companies despite founder appeal—capital discipline becoming competitive advantage
Topics
  • AI Opportunity Cost and Talent Allocation
  • Defense Technology Procurement and Government Relations
  • Manufacturing and Production Scaling
  • Just War Theory and Defense Ethics
  • Founders Fund Investment Philosophy
  • Contrarian Venture Capital Strategy
  • Founder-Operator Model in VC
  • AI Regulation and Policy Development
  • Autonomous Systems and Military Doctrine
  • Capital Concentration in Venture Funds
  • Founder Selection and Origin Stories
  • Faith and Technology in Silicon Valley
  • Consensus vs. Contrarian Market Categories
  • Low-Cost Autonomy in Warfare
  • Interpersonal Relationships and AI Companions
Companies
Anduril Industries
Trae's defense technology company; software-defined, hardware-enabled prime building autonomous systems and manufacturing at scale
Founders Fund
Venture capital firm where Trae is a partner; discussed investment philosophy, concentration strategy, and founder-focused approach
Palantir Technologies
Major Founders Fund investment; Trae worked there for 6 years before joining Founders Fund; example of a software-focused defense company
SpaceX
Founders Fund portfolio company; cited as example of verticalized supply chain, manufacturing excellence, and founder-led operations
Tesla
Referenced for manufacturing and production scaling challenges from prototyping to mass production
Lockheed Martin
Incumbent defense contractor; example of how government bids out to established primes rather than new entrants
Cognition
AI company Founders Fund invested in despite competitive market; example of founder-quality bet overriding category consensus
Google
Withdrew from Project Maven defense contract due to employee pressure; cultural moment showing Silicon Valley resistance to defense work
Flextronics
Contract manufacturer model that Anduril is emulating for design-for-manufacturing and production scaling
General Matter
Nuclear fuels company founded by Founders Fund partner Scott Nolan; example of founder-operator model
Varda
In-space manufacturing company co-founded by Founders Fund partner Delian Asparouhov; example of founder-operator model
People
Trae Stephens
Founder of Anduril Industries and partner at Founders Fund; discusses defense tech, manufacturing, and venture philosophy
Jack Altman
Host of Uncapped podcast; conducts interview with Trae about quests, AI, and venture capital strategy
Peter Thiel
Founder of Founders Fund; cited for open debate culture, contrarian philosophy, and willingness to change positions
Elon Musk
Referenced for depth across multiple domains (SpaceX, Tesla), metallurgical expertise, and operational involvement at his companies
Scott Nolan
Founders Fund partner who founded General Matter nuclear fuels company; example of founder-operator model
Delian Asparouhov
Founders Fund partner who co-founded Varda, an in-space manufacturing company; example of founder-operator model
St. Augustine
Christian theologian who originated just war theory, foundational to Western defense law and ethics
Martin Luther
Theologian who disagreed with St. Augustine on defense ethics; cited for theological debate tradition
Quotes
"This is not the field of dreams. If you build it, they do not come. That's not how it works."
Trae Stephens, opening remarks on defense tech
"The distorting characteristics of AI have less to do with the ability to do interesting, hard things. And it has much more to do with how easy it is to do uninteresting, unhard things."
Trae Stephens, on AI opportunity cost
"If we take all of our level 100 players and we put them on AI slop companies, what does that mean for all of the things that aren't being done at the same time?"
Trae Stephens, on talent allocation
"It's 30% product. The product matters. You have to build something that works and that serves the warfighter's needs and is better than the alternative. You also have to know what you're doing on the business side of the equation."
Trae Stephens, on defense tech success factors
"We're a fund for founders, which means that if we believe that we could run a company better than a founder, like we want a board seat, we want to help you, we would probably just start the companies ourselves."
Trae Stephens, on Founders Fund philosophy
Full Transcript
I always tell defense tech companies this, it's like, this is not the field of dreams. If you build it, they do not come. That's not how it works. Like, if you build a perpetual motion machine, and the point of the perpetual motion machine is like powering forward operating bases, you go to the Department of War and you say, I have built you a perpetual motion machine and I will sell it to you for a million dollars. They would say, okay, we're gonna bid this out to market and we're gonna give Lockheed Martin a $100 billion contract to rebuild from white paper your perpetual motion machine. That's how it works. Trae, I'm really happy to be doing this with you. Thanks for making time for it. Great to be here. I want to start with, you wrote a blog post a couple of years ago, I think, Good Hard Quests or Choose Good Quests. That really stuck with me. And I think in many ways, we're in a moment in time right now where people are sort of grappling with what that means in the world of AI. And I think both because of the technology itself, which creates all these weird, what does it mean to be human? What part of the experience actually matters? It like creates like a lot of odd questions. Then you also get these like gold rush dynamics where like money's flowing and you can start a company quickly. And there's like a lot of get rich quick ideas. And so I think for a lot of reasons, this blog post is like super relevant right now. And I would just be curious to hear how you think about this concept of like choosing good quests, like when we're in the middle of something like AI. Yeah, you know, I think the distorting characteristics of AI have less to do with the ability to do interesting, hard things. And it has much more to do with how easy it is to do uninteresting, unhard things. You know, there's the constant conversation that comes up about like the first company to a billion dollar valuation with one employee. It is possible to do that.
But what that means is that all of these people that are coming out of college or that have aspirations to be founders, you know, they're doing like whiteboard founding. You know, they're walking up to a whiteboard, they're writing down a hundred different ideas. Many of them are just like using the LLM APIs to do some highly specific task or something like that. And then as they enter the market, there's dozens of competitors and it's kind of like a battle to the death in a really highly consensus category. And that's kind of what I would say is the opposite of a good quest, like going into something because you can, trying to generate wealth as your primary motivation, not super interesting. It's sort of akin to like a celebrity starting a tequila company where they're just monetizing their own personal brand. One of the sort of frames I've heard just talking to founders about how they think about this is that we're like in a moment where there are a lot of ideas that both seem good and are good as a result of kind of like there's a big new technology. And so you have a lot of these like blue ocean areas, which are like, you know, kind of rare to have that. And so then you have all these people thinking, well, yeah, like it's kind of consensus, but no one's done it yet because like the why now just happened. And so I see people grappling with that and they're like, it's not exactly bad for anybody. It's like a thing that ought to exist. But to your point, there's like 19 of them going at the same time. It's a weird sort of dynamic. Yeah. And, you know, maybe some of these things are worth building and people are going to make a bunch of money on it. And that's not bad in and of itself. The question is like an opportunity cost question in my mind. It's like for our most talented people, you know, if you think of like every person in the tech ecosystem as like a Dungeons and Dragons character. 
And they have like a skill set and they have ratings for different characteristics. If we take all of our level 100 players and we put them on AI slop companies, what does that mean for all of the things that aren't being done at the same time? I would really like our level 100 wizards to be fabricating semiconductors or, you know, something that's going to move the needle for humanity. And so it has sort of this distracting distortion effect rather than necessarily being bad. This kind of happened during like crypto booms, you know, like when people just sort of get lost, like, oh my God, there's so much money being made here, it's very, very tempting. And you get a lot of people that go there. Yeah. And then over time, that kind of worked itself out in crypto, I would say, but it's less clear that that's what would happen in the AI moment. There's also a genre with AI where it's less this area of like, it's a straightforward thing with 20 companies working on the same problem. But there's these types of AI companies that I think like get close to people's sense of like, where are the moral boundaries? Like, what does it mean to be human? And, you know, like, I think there's these things that people have a lot of very negative visceral reactions to. You know, I recently saw a post about a company that sort of brings, you know, deceased loved ones to sort of life in some kind of form. And on one hand, you're like, I guess that kind of makes sense. That could be comforting for people. On another hand, like there was a lot of, you know, sort of like a visceral disgust reaction that people were having, that it sort of like crossed the line for people. How do you think about these things that sort of like feel bad on first glance, but are sort of exploring the edges? Like, are those quests that are good but look bad? Or how do you know?
The key question with companies like this, and there's like literally a Black Mirror episode that exists about this topic of bringing back deceased loved ones. I lost my dad in 2013 to Alzheimer's. And, you know, I would love to talk to my dad. That would be awesome. Like every time I buy a car, it's like, who do I talk to? I end up talking to like my co-founders at Anduril about it. But I would love to talk to my dad. That would be super cool. But I think the challenge behind these things is like what happens to interpersonal relationships? And this has started long before AI. If you look at online dating as an example of this, long before AI, we realized as a society that the way that we pair off is going to look super different. Like in, you know, decades past, it was done locally, in like a local minimum. It was like, you know, you met people at school or you met people at church or you met people professionally in a workplace. And you were able to measure, you know, the fidelity of a match across a number of characteristics. Like this person is funny. This person is charming. This person is romantic. This person is talented, ambitious, smart. And now, if you look at the way that the sorting is happening, it's done on the shallowest possible measures. It's like if men over six foot two get 80% of the matches, I'm sure that there's a lot of really talented, charming, well-matched, matchable people that are less than six foot two, but they don't get any of the right swipes. And so we've created this entire social demographic of incels that historically, if you look back over the course of human civilization, have been the powder keg for implosions of societies. You don't want to have a bunch of young 20-something single men that are underemployed running around in your economy. It doesn't bode well. And you add AI on top of that now, where you have like people that can talk with their deceased loved ones, but also they can talk with AI companions.
And they find that they have this very low risk, very low probability of disagreement companion that basically acknowledges and affirms whatever they want them to acknowledge and affirm. Well, that becomes much easier than the complicated dynamics of a real human relationship. And then what does that do to the way that our society, you know, sustains a birth rate at a very basic level? But beyond that, like what happens when you have unmatched people at numbers that society has never seen before? Like what impact does that have? It's sort of interesting when you think about this, like, you know, I like your framing of Dungeons and Dragons because it makes us all feel like we're on one team. And it's like, how do you want to deploy resources? How would you think about, like, there's one category of company idea that's like, let's bring back deceased loved ones, where you're like, oh, that's weird. I don't know what to make of that, but it's new and it's weird. And like there's that. And then there's maybe a category that more or less everybody would agree is bad. And, you know, like there's things that you could probably profit off of that are not unclear like that, but are like definitively bad. Do you have any way that you think about sort of what ought to be explored, even if it feels uncomfortable? Well, yeah, I mean, I traditionally have thought about this as like a two by two matrix, where we have like feels bad, feels good on one axis and then is good, is bad on the other. Like everyone kind of agrees in the diagonal of feels good, is good. It's like do-gooder stuff, ed tech, health tech, whatever. That's like super obvious. And then the feels bad, is bad, like things that are illegal, like murder, theft, cheating, whatever. Yeah, we can all kind of agree on that. It's the other diagonal that's interesting. Yeah. And the feels bad, is good, I would argue, is kind of where Anduril lives.
It's this duty and responsibility, like things that you have to do, not because you want to glorify those things, but because they're really important for a functioning society. That's like everything from disciplining children to law and order inside of a society or defense of a nation or something like that. Then you have that other category, the quadrant of feels good, is bad. And this is like sort of the edgy stuff that you're talking about, like gambling, pornography, sort of what I would call like hedonism. And there's always arguments you can make about a libertarian society, like we should allow people, you know, free expression, we should allow drug use to some extent. But then you can also look at the negative impacts on a society of these things and have to make hard decisions about how you're going to build policies around whether or not these things should exist. Because a lot of them are highly addictive, and those can be really good businesses. And is policy basically the only answer to that quadrant? People are going to build companies that they can build that are profitable within the rules. And the vices are always going to make money. 100%. Yeah. I mean, you can look at the most extreme example of this: if fentanyl was legal in the United States, there would be a massive company that just sold very low cost fentanyl. For sure. It'd be a huge company. It would be a huge company. So yeah, policy is probably the only way that you can really address those problems.
And then, you know, it's interesting, like some of the things that look like gambling, you don't know whether they're in the really bad quadrant or actually in this sort of new exploratory quadrant. Like prediction markets. I was talking to Vlad, you know, on Uncapped recently, and he was making good points about how prediction markets actually also allow you to bet on specific things that are narrow. They allow for, like, you know, true market price discovery to happen, not to mention also underwriting. Yeah, there's a lot that you can benefit from with some of those. Yeah. So some of these newer things that are teetering on the edge, it's like, you do want some of that stuff to get explored too. Well, I mean, the nature of policy development is always like, you know, the thing develops in real time and then you build policy to constrain and control the negative societal impacts. And this is why, you know, the way that AI companies engage with Capitol Hill, it's not like, you know, Capitol Hill is going to be the smartest people on AI and they're going to walk in and they're going to lay all the groundwork for how it's going to be. It's not going to work like that. Like, I think there are two people in all of Congress that have computer science degrees. And they all got those degrees long before AI was a thing that they would have been studying. Most of these are like lifelong career civil servants, have no idea how technology works. Realistically, what's going to happen is AI is going to build, build, build. Boundaries will be crossed that we realize as a society have negative implications. And then in a well-functioning democracy, you build guardrails for those things. Now, the well-functioning democracy thing we could have a whole conversation about, but that is ideally how it should work. I mean, well, you're working with DC a lot at this point through Anduril.
I mean, I guess implicitly what you're saying is the right way for regulation to go is like, let the technology develop to some degree and then start regulating once you're observing things, right? How has that played out for Anduril? What is that sort of experience like? Because obviously there's a ton of regulation going on there. What is the way a new prime ought to get built? Yeah, I mean, luckily, defense policy is the most bipartisan of really any category, which is funny because in Silicon Valley, most people wouldn't think of it that way. They'd be like, wow, this is really edgy stuff. But then the National Defense Authorization Act passes with a supermajority. For sure. Like it's like the only bill that passes. I do always think that like people think it's partisan, but like at the end of the day, like people are reasonable in government and everybody wants a safe country. Yes, exactly. Which is one thing I very much appreciate. Yeah, there's not a whole lot of partisan debate on these topics. But that also means that they're usually way out ahead of regulations. Like a lot of the existing law on the books around autonomy and artificial intelligence actually started in the defense community. There have been commissions on studying how AI is going to be leveraged in active warfare. You know, we've had autonomous systems for decades. There's a turret system called CIWS, pronounced "sea-whiz," a close-in weapon system for naval vessels, that is basically like a machine gun that shoots inbound threats to vessels, to ships. And there's no human controlling this, aiming at the aerial targets or anything. It identifies a threat, you say engage the threat, and then CIWS takes over and will shoot down the missile or the aircraft or whatever it is. So like autonomy has existed for a long time. The law behind how that stuff is supposed to be utilized has existed, you know, inside of the defense community throughout that entire period.
So I think, you know, Anduril has the benefit of kind of being on the edge of that and helping our lawmakers make decisions about how we're going to leverage these new technologies, but not in a way that's like defensive. There's not a conversation that we're having where there's like a really toxic debate going on about these things, because a lot of that law has been bubbling for a while. You were talking about how like, you know, Anduril in many ways is in the, like, feels bad, is good quadrant. But, you know, obviously government didn't feel that way about it. Like for government it was feels good, is good. Maybe in like Silicon Valley, it was, you know, in the other quadrant. Like what was your experience? I think this is like maybe flipped now, but like five, six years ago, maybe like the tone was different. And so like, were you sort of like having to convince a lot of people in Silicon Valley that like, actually, this is a good thing? Yeah, I mean, you probably remember Google pulled out of a federal contract called Project Maven. And there were thousands of people who signed a letter criticizing their employer for supporting this. And it was kind of a cultural moment where it was really not OK to be working with the Defense Department. You know, there's some like partisan reasons for this. At the time Trump was president, I think people were mostly upset about Trump being president rather than any particular specific policy that was being employed. But Anduril never really encountered any of those headwinds in Washington. All of those headwinds were like recruiting conversations. Like, why is this important? Are we able to articulate how what we're doing is like an advancement of just war theory, not a destruction of just war theory? And we've been very open since the beginning, engaging in that conversation proactively. It's always like an interesting thing.
Like sometimes I'll see a clip of somebody in the military, like active duty, talking about how they know that a lot of the people they're protecting hate what they're doing, but they're like, my job is to protect anyway. It's like a weird thing where you're like, you know, that tension of people who sort of like love what they get out of the military, but don't like it. That's like a weird thing to grapple with. I think this is like the longstanding nature of the professional soldier, this idea that in the United States, they're protecting your First Amendment right to not like what they're doing. And it turns out that you can only be a pacifist inside of a geopolitical system where the government has a monopoly on power, on the use of violence. Yeah. Like you can't be a pacifist in a like realpolitik sense. Yeah. If the government doesn't have a monopoly on violence. Totally. It doesn't work. I find this perspective, like, it's purely frustrating. But now I'm thinking a lot of like, is there like a reframe I should have, that actually, like, do we want a certain percentage of the population to dislike the military and the war? Does that do anything valuable? I think it's great. I mean, I think it's really important to test hypotheses, to test ethics, to test our politicians, to make sure that they're thinking about hard issues, ethical issues. You know, anytime there's some debate about how the Department of War, like, executed some policy strategy, whether it's towards, like, you know, Venezuelan drug boats or taking out known terrorists in foreign countries with, you know, Predators and AIM-9Xs and things like that, I want that debate to happen, because that's how we get to good policy. If that was absent and everybody was just like, let's throw on our cowboy hats and swing our lassos in the air and go take on the world, that's how you get to Team America: World Police.
And I don't think that's what any of us really want deep down. So I love the debate. I think it's really important. What does Anduril need to do from where it is now to get to the scale that you want to be at? It seems like, from at least the outside, you have de-risked and proven so much. But I'm sure there's still a lot in front of you to get up to the size and scale that you could be operating at. So what's next? Yeah, it's all about production. At the end of the day, we've gone from building dozens of things to hundreds of things to thousands of things. And now we need to make tens of thousands of things. And the only way that you can do that is by developing a muscle around manufacturing that most venture-backed startups don't have. That's not how we think about building things. And so if you look at the way that a contract manufacturer like Flextronics works, where, you know, a company comes to them and says, we have a design, we have some sense of design for manufacturing, how this is going to be built. And then Flextronics figures out how to do that at massive scale, how to reduce cost. They work directly with the engineers at the company to design for manufacturing. That's kind of what we need to do at this point. We need to get to the point where we are building at scale. And that means that we're making design decisions when we build things that are enabling scaling up to tens of thousands of units. So last year, we announced that we're building out a facility in Ohio that we're calling Arsenal-1, which over the next few years will become a five million square foot manufacturing campus. The first roughly 800,000 square feet will come online in the first quarter of 2026.
And you know, we're building out that team and starting to kind of take very seriously the challenges that are going to be a big part of scaling from where we are now to that point. It seems like Tesla and SpaceX are like a couple of the only companies that have really done this at scale. Yeah. I mean, we should talk about SpaceX a little bit, because obviously you guys are close. But can you like explain why it is so hard? Like, what are the capabilities that are extremely hard to acquire, to produce cars at scale or military equipment at scale, rockets at scale? Like, can you, you know, articulate why it's so hard to do 20,000 instead of 200? There are people that could do it better than I can, but I'll do my best. I mean, the Tesla example is a good one. Like the skill set that was required to build the first Roadster, like you can spend basically an infinite amount of money. You know, you can be highly exquisite in your approach to every problem that you're trying to solve. Very different than we're going to make a million cars. And like, if we're going to make a million cars, what is the most efficient way to weld door panels? What's the most efficient way to acquire the materials that we need to manufacture batteries? Yeah, actually, you know, it's like when you say like a million cars, you're like, okay, like how many is that a day? And then you're like, geez, it's crazy. Like the scale is just absolutely crazy. Yeah.
It's just a different mentality that you need internally to do that. SpaceX is a little different because it's still highly exquisite aerospace kind of stuff. Yeah. But even in that case, you see that, you know, Elon and team have verticalized the supply chain. Yeah. So they're like making things in-house. There's still a lot of volume of really complicated... Exactly. And so they figured out, like, we can't actually go to suppliers for a lot of these parts, because the cost to the business, the risk to the business of that part failing, is catastrophic. And so they end up having to verticalize a lot of this. In the Anduril case, it's like, you know, if we're building missiles, like right now, a Patriot missile is two and a quarter million dollars. It's a lot of money for one missile. What if you were to build a Patriot-equivalent or better missile for one tenth the cost? A lot of the changes that you would make are literally manufacturing decisions. It's like, how are you molding the exterior of it? How are you making solid rocket motors? What materials are being used in the seekers? Do we have the ability to acquire enough of those materials that we can manufacture at scale at lower cost? And so there's obviously engineering design decisions, but a lot of that is also, can you manufacture at a price that is attractive? It's interesting. Like, just hearing you talk about it, you see a lot of, you know, which is amazing, new companies that are working on a particular weapon or product or something in the defense space. And then like, I wonder how much of the battle is that versus government relationships, production capabilities, capital access. Like, you know, relative to a software company, where building a great software product is, let's say, 70% of the battle. Like, I wonder what the equivalent is. Yeah.
I always tell defense tech companies, it's like, this is not the field of dreams. If you build it, they do not come. That's not how it works. Like if you build a perpetual motion machine and the point of the perpetual motion machine is like powering forward operating bases, you go to the Department of War and you say, I have built you a perpetual motion machine and I will sell it to you for a million dollars. They would say, okay, we're going to bid this out to market and we're going to give Lockheed Martin a $100 billion contract to rebuild from white paper your perpetual motion machine. That's how it works. The challenge is really, do you understand well enough how the government works that you can navigate a strategy so that they're going to buy from a non-traditional player? And so you flip it. It's not 70% product. It's like 30% product. The product matters. You have to build something that works and that serves the warfighter's needs and is better than the alternative, all of that. You also have to know what you're doing on the business side of the equation. One last question, because we were just talking about SpaceX. You know, there was Space Force announced and, you know, years ago, people kind of made fun of it. It seems to me like space warfare is actually super important. Is there a sense that you have that, as time goes on, more and more of the way war happens between countries will happen, like maybe without people on land? Will a lot of it be in space? Like, what's the future of the way that powers are going to struggle? Yeah. I mean, space is going to be really important without a doubt, but all of it, like every domain, is going to have its own version of low cost autonomy. The era of putting 5,000 people on a $15 billion aircraft carrier and using that for force projection is over. Like, you know, you can fire a, you know, single digit millions hypersonic carrier killer missile and take that asset out.
That's really not the way you're going to do it. And so you need- Are aircraft carriers still important? I mean, not in great power conflicts. Maybe they're important from a logistics perspective. But because of this cost calculus — it's crazy, because not that long ago they were these machines of dominance. Right. Well, in the types of conflicts we were engaging in, in the era of counterterrorism, it was something very different. Non-state actors, terrorist organizations — they're not going to be taking out aircraft carriers, by and large. It's just a different kind of risk calculus. So what will it be now? What do we have left? Human-to-human combat is going to go down, I assume. Well, human-to-human combat has been going down for a while. The exception to that, of course, is a conflict like the one between Ukraine and Russia, where you still have a lot of casualties — a lot of casualties. I don't think most people fully recognize just how many casualties there have been in that conflict. But the way the United States Armed Forces has been engaging internationally is very heavily driven by special forces, which I think is going to be the case for a long time. You're going to have highly trained, highly skilled operators that do super risky missions and things like that. But most of the other engagement is very likely going to be autonomy: low-cost undersea, low-cost surface vessels, low-cost ground vehicles, low-cost aerial vehicles — done in a way that takes the human out of the dullest, dirtiest, most dangerous jobs and empowers our ability to execute a strategy in a way that reduces the potential for casualties and loss of human life. So I was just thinking as we're talking: I'm talking to a founder, but we're also here at Founders Fund, and you're an investor too.
And so you really are doing both things at once — like, really doing both things at once. And many people at Founders Fund do it that way. What's the ethos for you — maybe for Founders Fund in general, but for you specifically? What is this founder-investor thing? Do you think about it? Does it matter? Are you just doing what feels most important at any given time? What is the mindset behind the way that you work like this? Yeah, it really is the latter. Just doing whatever feels the most important at that moment. I didn't start Anduril because I was like, man, I really want to be a slashy, like an actor-slash-model from Zoolander. I really want to do investor-operator. That was not my thought process. I didn't even think that was an option. I had been looking — you know, we were large investors in Palantir and SpaceX, and the question was, is there another one that we should be looking for? I had been kind of poking around, didn't find anything, and went back to the team and said, what I really think we need is a software-defined but hardware-enabled defense prime, rather than just a software company like Palantir. And I was like, oh, it's going to be really hard. SpaceX and Palantir were both founded by billionaires. I'm not really sure how this company is going to come to exist, but if it does exist, we will know about it, because it's going to have to be pretty high profile. And to my surprise, the team was like, well, who better to do it? You should just go and build the team to do that. So it was kind of accidental. But then, from a day-to-day perspective, my chief of staff is awesome. She's been with me for the last two years here. We just divide and conquer. She sets the priorities: here are the Anduril things that really have to happen today, here are the Founders Fund things that really have to happen today. And we context switch a lot.
One of the things that I would find probably interesting in your shoes is the relationships with Palantir, SpaceX, Anduril. They're so deep that, you know, you might be meeting a seed or Series A or B company, and you've also got these relationships where you can and do just continue investing in companies that you are helping build, that you know very well, where you have crazy close relationships. So how do you internally think about deploying more capital into the things where you're spending tons of time and know everything, versus meeting a new company the regular VC way? I still love the kind of curiosity that venture capital brings to the table. I love learning about new things. There's energy that I get from the VC side of things that is super fun. I've been in venture now for just about 12 years, and one of the first things I noticed when I joined Founders Fund is that VCs make for incredible dinner table conversation, because we know six inches of depth across the widest possible breadth of sectors. But I also found that to be sort of boring over time. And Anduril has been an amazing antidote to that. I can go a mile deep, and then I can come out and do the six-inches-of-depth thing again. There's actually a funny thing with these podcasts, where an accomplished CEO who has just succeeded wildly has a smaller TAM of audience than a VC who can just talk about a bunch of things. Which I sort of get, as a listener or just as a dinner guest. I understand that. But I think the really interesting meat usually is deep somewhere. Totally, yeah. I mean, this is the most remarkable thing about a person like Elon — he's deep in so many places. He's deep in so many places, yeah.
I'm not Elon, and I'll never be Elon. But the closer I've gotten to that, as I've gotten deep in multiple things, the more I've respected and appreciated how insanely hard this is. I was with him a few months back as we were transitioning between two meetings. He was on a conference call with SpaceX talking about the literal metallurgical materials that were going to be used for this specific part of the rocket. And I was like, how is it possible that you're that involved at that level? And it's not like the learning carries over from somewhere else — it's not like Tesla needs those materials. So you learned it for one purpose. For one purpose. Yeah. I don't feel like I'm even remotely close to that, but there's something intellectually much more meaty for me on the operations side, for sure. Is that something that's cultural to Founders Fund? There are a lot of other companies that get built by partners here. Is there something about the attraction to people that have depth in them — is that necessary at Founders Fund? Well, I think the name of the fund can be interpreted in two ways. Why are we called Founders Fund? The most straightforward reading, which most people land on, is what you just said: we're like founders. Most of us, or at least many of us, have started companies before. We have an operational engagement with the world that a lot of other, more finance-oriented VCs don't have. But the real answer to the question is that we're a fund for founders. Which means that if we believed we could run a company better than a founder — we want the board seat, we want to help you, we want to give all the strategic guidance — we would probably just start the companies ourselves. We're investing in these companies because we believe that the founder is the right person to run that specific business.
And so it kind of works in both ways. But I do think, because of that second piece — because we are not trying to get super operationally hands-on with our portfolio — we have the time and space and the permission from the fund to go and explore the things that we think are most interesting. So you end up with Scott Nolan starting a nuclear fuels company, General Matter, and Delian Asparouhov starting Varda, a hypersonics and in-space manufacturing company. Peter started Palantir. I started Anduril. There's no intentional incubation strategy. It's literally just the output, the exhaust, of a system that rewards and incentivizes people for going deep. You must have companies that are asking for things, though, from the team — even without board seats, I'm sure people are still asking. Oh yeah. We are very good at being reactively supportive. Right. Like, someone calls, you pick up right away, I'm sure. Totally. Totally. We're the first people to go out, make introductions, help with business strategy, help with pricing, whatever it is. We're very engaged at that level. But we don't want board seats. We don't have 20 board seats like a lot of VCs. I think I have like four board seats right now, which is so different. Yeah. Setting aside the incubations, although that's obviously part of it — building companies, being a founder — the investing itself has just been extremely good. And I'd be curious if you could pick apart or point to some of the things that have mattered for that. Maybe I'll start by prompting with what I have observed that seems particularly good. One is the ability to concentrate over and over again in aggressive ways, within a fund, across a fund. That seems very important. There's obviously calling trends early, and not hopping into things that are already hot, but doing things before they're hot.
There seems to be something about the way the partnership operates, the type of dynamics you have with one another, and how ideas get debated. But I'm curious — you're obviously living it. What would you point to as the things that have made Founders Fund as high-performance as it has been? If you divided this into access, evaluation of companies, and follow-on, there's probably an answer in each of those categories. On the access front: I've been here 12 years. The fund has been around for 20 years. Most of the access edge that we have happened before my time. It's the PayPal mafia coming together, Peter's brand. There's a long history of people leaving storied VC funds to start their own firms and stuff like that. I have literally zero interest in doing that. I am just riding the wave of Peter Thiel's genius. That's awesome. It's a great strategy. So that's that edge. The middle one, on evaluation — I think this is also sort of a Peter culture thing, which is open debate. There are no sacred cows. Hierarchy doesn't exist. Whether it's me, Peter, or Napoleon, who are the GPs, or the newest person in the fund, we want everyone to be fighting and advocating for what they believe we should be investing in. And because of that, you get this really vibrant culture of debate. It's not a single hegemonic figure coming in, beating down the junior people and telling them how to do this. A friend of mine who used to work here was saying that somebody asked him if the fact that people debate hard was one of the tough things about partner meetings. And he's like, no, that was amazing. There was enough trust and respect between people to fight it out for real. Totally. Which I think doesn't exist at a lot of places.
One of the most shocking things to me when I first joined — I had worked with Peter for a while when I was at Palantir; I was at Palantir for six years before I came here — was that I had this expectation that Peter would be this sort of immovable intellectual force. And actually, that's not- You can convince him of stuff. You can convince him of stuff all the time. You have to construct a good argument. And that's how you're going to gain respect with him, by winning an argument. But I was shocked at how he would take a position, we would litigate the position, and if he felt like there was a better argument being made, he would say, yeah, I think I agree with XYZ person that made that argument instead. That is very core to who we are as a fund, and I think it has served us well. And so that's on picking, on the evaluation. Is there anything else besides the debating? There's the debating, but then there's the content of what's being debated. Are there any ways of thinking or mindsets in there? Because good ideas are winning, obviously — what's in that? Yeah, I mean, there are a bunch of different ideological pieces to this. Many of them are described in Peter's book Zero to One. We really don't like consensus categories. Anywhere you find a bunch of competition, we try to stay away from. I think chapter six of Peter's book is literally titled Competition Is for Losers. We don't love competitive market spaces. Just to click on that: say there's a space, like AI customer support, where you're like, this is both good and competitive. You're not going to not invest just because there's competition. So what would the conversation be if you're looking at something in coding or support, one of these places that are both competitive and obviously valuable? Yeah.
I mean, we're huge investors in Cognition, for example. This was literally the tenor of the debate internally when we looked at writing that first check into Cognition: do we believe that the founder, that atomic element of the company, is so good that we have to get over our philosophical, nearly dogmatic hesitance to go into these competitive, consensus markets? In that case, the answer was yes. But in many cases, the answer is no. You have to have an absolute killer to feel really good about making that bet. But then it's really not a company bet — it's a founder bet. You're not betting on the category; you're investing in the person at that stage. You know, we also only invest in founder CEOs. Once a company has a professional CEO, we're out. We're just not going to do it. We're also much more willing to take tech risk than a lot of other funds. We're happy to write sizable checks into companies that are a ways out from having a product in market. And there's the famous Founders Fund line from over a decade ago: we wanted flying cars, and instead we got 140 characters. So we're very open to it. And that's just a pure founder bet too? Yeah, you're trying to figure out, is this the type of person that's going to be able to manifest this reality? How much do you talk about markets in these early-stage investments? Or is it always about the founder? Early stage, it's about the founder. Obviously later stage, you have a lot more data, and that stuff comes in. But early stage, the outcome is probably binary. It's either going to work because the founder wills it into existence, or it's not. And if it doesn't, it's probably going to be because the team wasn't able to pull it off, not because there was a fundamental TAM constraint or anything like that. What are the other big tenets or principles that often get harkened back to in these debates?
I mean, a lot of it is sort of philosophical: do we believe that this needs to exist in the world? And do we believe that us coming in with capital is going to help the founder bring this thing into existence? Some of our best bets have been predicated on exactly that, whether that's SpaceX or Anduril. These are really complicated companies to pull off, and it takes that kind of hope-and-vision type of investment at the earliest stages. Do you buy in at all on high-brand VC kingmaking as a thing? I'm asking this from the perspective of — I think a lot of other brands that are in your same echelon smartly sometimes use what I would call, and I don't mean this negatively, kingmaking tactics. Where I see a round and I'm like, I think what they're doing is calculating that by doing this round, this company is going to do well — and I'm guessing that was the conversation that happened. And I think it is sometimes true. But I'm curious if that's ever how you think about it, or are you just like, we don't buy into that? Because I don't see Founders Fund flexing its brand and stature in these sorts of dynamics the way that I think you could. And I wonder if that's because there's a distaste for it, or because you think it doesn't work. Going back to the point about consensus, competitive businesses: because we don't do a whole lot of those, we're in less of a position to do the kingmaking thing than a firm that's going directly at crowded enterprise SaaS categories or something. So that's half the answer to the question. The other half is that we have debates about this all the time internally.
You know, I published a blog post a couple of months ago about kamikaze rounds, which is basically what I would call kingmaking: a venture fund comes in and says, I want to invest this huge check — 100, 200, 500 million dollars — into the company. And obviously, for that to be minimally dilutive, you're going to have to accept a valuation that's way higher than the current state of the business. Founders can very easily be convinced to take huge checks at high valuations. But that's an anchor around your neck that oftentimes destroys companies. And there are countless examples of these. It's always funny — when I talk publicly about kamikaze rounds, people ask for names, and it's like, the community is so small. Do I really want to burn relationships by naming names? But everyone knows who I'm talking about when I say that this happens. There's this sort of view that it's the VC's fault, that the VC is mispricing and the company has no responsibility. But as a founder, you're accountable for ensuring the long-term viability of your business, and capital raising is part of the strategy you have to roll out. So there's a conversation you have to have with companies as they're going through this process: yes, you could take more money at a higher price. You could also take less money at a lower price, and that might actually be the better decision for you as a company. This was famously trolled in the HBO show Silicon Valley, where a guy's company failed, and the protagonist, talking to him over drinks at a bar, said, you could have raised less. And he said, no one told me I could have raised less. So I think kingmaking in some cases, especially in highly competitive markets, might actually work. But there's tremendous downside risk as well. Yeah. And what about the concentrating part? Yeah. So that's the third category.
So the first category is access. The second category is analysis, evaluation. And the third category is the follow-on. One quick question there. Out of all of your dollars, roughly speaking, what percentage is an initial check versus a follow-on? Out of a hundred dollars that go out the door, how important is this third bucket? Well, we have a venture fund and a growth fund, which are going to be wildly different. The growth fund is writing massive checks right up front. In the venture fund, ideally the top — call it five or six — companies in each vintage, in each venture fund, should represent mid-double-digit percentages of the fund. I want 40, 50% of the fund, if we can, concentrated into our top positions. So that bucket's huge. It's as important as the first bucket. Totally. Yeah. Okay. So how do you do that? There's the proactive version of this, and there's the non-proactive version. The proactive version is, you have to constantly be asking yourself: what are the best companies we've invested in so far in this venture fund? And can we get out in front of another round, give them more capital, figure out some way to intentionally create concentration? That's half of it. The other half is not un-concentrating — don't do a bunch of stupid single-digit-million-dollar deals. Yeah. You're saying in the first place, or in the follow-ons of them? Both. If you have a billion-dollar venture fund — Founders Fund has pretty consistently raised billion-dollar venture funds — and you spend 10, 15, 20% of the fund writing seed checks, even a really well-performing seed fund of hundred-million-dollar size maybe returns the fund 1x. It's not really going to move the needle. You need to write larger checks. So that's kind of the lesson.
We really don't want to be writing a bunch of $100K, $250K checks, especially because we then have a signaling anchor around those companies: they're going to come back, and they're going to really need us to participate in order to raise their Series A. But in addition to that, if we go out and write $5-10 million checks into Series A companies, like we should be doing at that first bet — when they come back with kind of mediocre performance, and maybe some other VC came in with a really high price, are you coming in and doing an auto pro rata? I think that's sort of the bias in the industry: we should just do our pro rata if somebody else marks up the company. But there's this weird inverse correlation between companies that need your signaling and companies that ask you not to invest your pro rata. If a company asks you not to invest pro rata, you should be fighting for pro rata. But if a company comes to you and says, I really need your... That's a good founder tip for people who want to get pro rata. Yeah, exactly. And so I think part of the concentration thing is, I just don't want to play those games. If, at the next investment opportunity, it's not a deal we would have done absent the initial investment — if we would have passed — well, then you should pass, or you should minimize the check size to the company. So there's a bit of a trade-off there. Part of it, also — the whole thing relies on having companies that can be concentrated into. There are good software companies that are never going to need $5 billion of capital and are never going to be worth $200 billion. And so some of this... And when it is the case, it means you need to write as large a check as you can, up front, as early as you can. Yeah, exactly. Because you're like, I'm never going to buy ownership. Totally.
As you look around now, in late 2025, and see what's happening in the landscape, you've got a lot of firms getting really, really big. A lot of these things we're talking about are things you've been saying for a lot of years, but now there are a lot of firms that I think are starting to think mimetically in some of the same ways, etc. But you've got a lot of interesting stuff going on. Hard tech is working really well with a lot of your companies — obviously not broadly speaking, but there's enough of that. There's a lot of really good AI stuff. So when you smush it all together, I'm curious where you think there's alpha in venture right now. I think venture is just really hard. The Founders Fund strategy has been: stay true to our strategy. And if that means we go slow and we deploy less quickly than everyone else in the space, fine. I'm not going to stress about that. There are going to be good deals in every vintage. Even through the dot-com crash — if you were a Google investor, you're fine. In the '08-'09 recession, if you were a Facebook investor, you're fine. There are always going to be, at a micro level, good deals. You just have to make sure you're in those deals and not get overly exuberant about chasing hype. And given the long-term strategy of Founders Fund, we're kind of immunized against hype chasing. From the outside, I imagine Founders Fund is a no-FOMO place, whereas I think most venture firms are FOMO central. Yeah, we don't do the momentum hype chasing. Are you guys never like, oh, we passed on that round and now Sequoia is doing it? Does that conversation just never happen here? Basically never. If someone did that, you'd be pissed at them. I mean, there have been cases where it's like your anti-portfolio or whatever — I met with this company and I wish I had invested. But that actually happens very rarely.
There are not a whole lot of cases where we feel a lot of regret. Yeah. I mean, I think it requires a whole different mindset in general. Just from talking to you, I can tell you also are not thinking in categories, whereas most people are like, oh, that's a great category. It seems like you don't think that way at all. No, I don't think categories should exist. Once a category exists, it's too late. If you're a social media investor and didn't invest in Facebook, you probably lost money. If you're a space investor and didn't invest in SpaceX, you probably lost money. You want to be in the first deal, not in the second, third, fourth one. Can you build contrarianism, or is all you can do shut off the mimesis? It's not even necessarily shutting off mimesis. It's more that being aware of mimesis is the important thing. We always ask founders what the origin story for the business is. That's a really important indicator of mimetic rivalry. They should have a reason for why they started the business. What does that story look like when it's good? Well, at a minimum, it's a story. You'd be surprised — maybe you've seen this as well — usually there's not even a story. People are like, well, my business school classmate and I really wanted to found a company, and we did 100 interviews, and this is the thing we thought would be the fastest. I mean, there are probably some exceptions in the course of history. I heard that Travis did whiteboard his way to Uber. I don't know if that's true or not. Maybe this is a great exception. Yeah, it's a great exception to the rule. But generally speaking, the best companies have incredible stories. Just trying to figure that out is kind of a quick and easy cheat code for avoiding that. If you're meeting a founder, is that the main thing you're trying to do? I mean, it's the first question I always ask.
It's, tell me about yourself and the origin story of the business. And then everything else flows from there. Yeah. Maybe that's a good segue into the last thing I want to talk about, which is a little bit of your own story — getting into you as a person a little bit. One of the things I really appreciate is that you've been talking about your religious faith for a long time. And that's something that a lot of people in tech have — obviously, by the numbers, a lot do — but not many people talk about it, or at least they haven't in the past. So why is it something that you've wanted to talk about? Well, the kind of weird thing about this is that I haven't been super intentional about it. It just so happens to come up all the time, and I'm not afraid to talk about it. And then it turns out that a lot of people resonate with it. Yeah, and then it comes up in conversation. There's been an interesting shift that we've seen over the last 10 years, but really even since COVID, where you're starting to see this sort of theological revival, with people being much more interested in traditional faith. Part of that might be that we went through this decade-plus of Burning Man spiritual stuff, Eastern mysticism. And a lot of people came out of that being like, man, I've done 12 ayahuasca journeys and I still have a sense of emptiness. There's something about the rootedness of Judeo-Christian beliefs that is really interesting to the current moment, for one reason or another. And so I find that there are a lot of people that just come up to me and ask, I heard that you're a Christian — can you explain this to me? A lot of people traditionally in Silicon Valley think of Christians as sort of these backwards anti-intellectuals or something.
And then they find out that me, or Peter, or someone that they otherwise respect intellectually is a Christian. And they're like, okay, this doesn't make any sense. I'm just curious — let's talk about it. And so it ends up coming up all the time, which is fine. It's actually interesting, because you said Judeo-Christian, and I'm Jewish. For some reason, I feel like Judaism doesn't get that reaction from people — the, oh, is this some backwards thing? — even though there's a lot in common; it's both in the Bible and everything like that. What is that? What do you think that schism has to do with? Are the beliefs in Christianity a bit more concrete in a certain way, and in Judaism a bit more floaty or something? Well, I think there's probably much more of a cultural part to Judaism that feels less tethered to the theology, whereas that doesn't exist so much for Christianity, especially in coastal America. Maybe in the Midwest or the South there's more of a cultural Christian expectation. But if you're in tech in San Francisco, you have to actually believe in order to be outwardly talking about this stuff. I think that's probably one difference. Maybe another difference is that we have less of an experience in the United States of Orthodox Judaism than maybe they do in Israel, but we have much more of an experience with orthodox Christianity. And so we have more of a cultural, societal debate about the topics that exist. I'm sure if there were as many Orthodox Jews in America as there are orthodox Christians, you would have a similar level of debate about the topics. I wonder if Christianity somehow ties in with the way that you think about your work at Anduril.
I would imagine there's actually quite a lot there, in terms of the way you'd think about morality and what makes for a just fight or not, and how to think about people's lives and the stack rank of ethics and things like that. Well, the entirety of the just war tradition — which is the root of the Western law of war, treaties, all of this stuff — is rooted in just war theory, a tradition started by St. Augustine. It was all Christian intellectual thought from the very beginning. We benefit a lot from the impact of Christianity on the development of the West. And so, yeah, I think that informs a lot of it. That said, there's also a lot of theological debate about these things inside Christian circles. St. Augustine and Martin Luther disagreed on a bunch of different stuff. And so I think the robustness of the tradition enables us to engage in really clarifying ways with how we reach the beliefs that we have: the importance of protecting people who can't protect themselves, how you think about avoiding civilian casualties, how you think about the dignity of persons. This is all Christian thought. Totally. It's funny, because if you want to not grapple with some framework for morality, some way to think about defense, you might feel good avoiding those topics — but then you end up in a place you don't want to be, I think. Unless you live in a society with a strong monopoly on violence, where you don't have to think about these things because someone else has done it for you. Right. Yeah. All right. Well, we'll end it there. Trae, thanks a bunch for doing this. I really enjoyed it. Great. Thanks, Jack. Appreciate it.