All-In with Chamath, Jason, Sacks & Friedberg

Jensen Huang LIVE: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis

67 min
Mar 19, 2026
Summary

Jensen Huang discusses Nvidia's evolution from a GPU company to an AI factory company, covering the shift from generative AI to agentic systems and the massive compute requirements this transition demands. He addresses AI regulation concerns, emphasizes the positive potential of AI technology, and outlines Nvidia's strategy across physical AI, robotics, and global market expansion.

Insights
  • The transition from generative AI to reasoning to agentic systems requires 10,000x more compute, creating massive market expansion opportunities
  • AI agents represent a fundamental shift in computing paradigm, essentially creating personal AI computers that can manage resources and execute tasks
  • The AI industry's PR problem stems from excessive doom-mongering rather than balanced discussion of technology capabilities and limitations
  • Physical AI represents a $50 trillion market opportunity as technology finally addresses industries largely void of automation
  • Enterprise software companies will become value-added resellers of AI tokens, creating logarithmic market expansion beyond current forecasts
Trends
  • Disaggregated inference computing architecture spreading across heterogeneous chip types
  • Agentic AI systems replacing traditional software interfaces and workflows
  • Open-source AI models gaining significant market share as the second most popular category
  • Physical AI and robotics approaching commercial viability within 3-5 years
  • AI token consumption becoming a key productivity metric for knowledge workers
  • Decentralized AI training becoming technically feasible through distributed computing
  • Healthcare AI agents integrating into medical instruments and diagnostic tools
  • Autonomous vehicle reasoning systems enabling safer self-driving capabilities
  • Space-based data centers emerging as a viable infrastructure option
  • AI-powered biological research accelerating drug discovery timelines
Companies
Nvidia
Primary focus as Jensen Huang discusses company's evolution from GPU to AI factory company
OpenAI
Discussed as leading AI model provider and example of successful AI revenue scaling
Anthropic
Featured prominently for Claude AI and projected to exceed revenue forecasts significantly
Groq
Recently acquired by Nvidia for specialized inference processing capabilities
Tesla
Mentioned as customer buying Nvidia training computers for autonomous vehicle development
Meta
Cited as major customer moving AI workloads to Nvidia infrastructure
Amazon Web Services
Announced plans to purchase one million Nvidia chips over next couple years
Google
Mentioned for TPU development and as both customer and competitor to Nvidia
Microsoft
Referenced in context of major cloud providers using Nvidia infrastructure
Uber
Partnership announced for autonomous vehicle deployment using Nvidia technology
Mercedes
Automotive partner working with Nvidia on self-driving car technology
BYD
Chinese automotive manufacturer added as new Nvidia autonomous vehicle partner
Intel
Used as comparison point for Nvidia's data center revenue growth trajectory
AMD
Mentioned as competitor in AI chip market with lower-cost alternatives
Waymo
Cited as potential iOS-like closed platform competitor in autonomous vehicles
People
Jensen Huang
Nvidia CEO and primary interview subject discussing company strategy and AI future
Dario Amodei
Anthropic CEO whose revenue projections Huang believes are conservative
Elon Musk
Referenced for Tesla's AI work and robotics predictions of one robot per human
Donald Trump
Mentioned for supporting American AI industry leadership and global expansion
Peter Steinberger
Developer of OpenClaw open-source AI agent system highlighted by Huang
David Friedberg
All-In podcast host mentioned for agricultural AI work at Ohalo
Chamath Palihapitiya
All-In podcast host referenced humorously regarding Groq acquisition
Jason Calacanis
All-In podcast host conducting the interview with technical questions
Brad Gerstner
Referenced for policy work and asking questions about AI regulation
Quotes
"If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed."
Jensen Huang
"You're not going to lose your job to AI, you're going to lose your job to somebody using AI."
Jensen Huang
"We're making software, guys. We have done this before. We have invented categories and industries before."
Jensen Huang
"Even when the chips are free, it's not cheap enough if you can't keep up with the state of the technology."
Jensen Huang
"I believe every single enterprise software company will also be a value-added reseller of Anthropic tokens."
Jensen Huang
Full Transcript
5 Speakers
Speaker A

Special episode this week — we've preempted the weekly show, and there are only three people we preempt the show for: President Trump, Jesus, and Jensen. And I'll let you pick which order we do that in. But what an amazing run you've had, and a great event.

0:00

Speaker B

Every industry is here. Every tech company is here. Every AI company is here. Incredible.

0:18

Speaker C

Incredible.

0:24

Speaker A

If you were building a global financial system from first principles today, you wouldn't build it on 50-year-old legacy rails. You'd build Airwallex: one AI-native platform for global accounts, cards and payments. It's designed to make the entire world feel like a local market. Others are bolting AI onto broken infrastructure, but Airwallex was built for the intelligent era from day one. Stop paying the legacy tax and start building the future at airwallex.com/allin. Airwallex: build the future. And one of the great announcements of the past year has been Groq. When you made the purchase of Groq, did you realize how insufferable Chamath would become?

0:27

Speaker B

I had an inkling that we're his

1:10

Speaker A

friends, we have to deal with him every week.

1:14

Speaker B

I know it.

1:16

Speaker D

You had to deal with him for

1:16

Speaker A

the six week close.

1:17

Speaker E

I know it's like two weeks.

1:19

Speaker B

Two weeks. It's all coming back to me now. It's making me rather uncomfortable. The thing is, many of our strategies are presented in broad daylight at GTC years in advance of when we do it. Two and a half years ago, I introduced the operating system of the AI factory, and it's called Dynamo. The dynamo, as you know, is an instrument, a machine that was created by Siemens to essentially turn water into electricity. And the dynamo powered the factory of the last industrial revolution. So I thought it was the perfect name for the operating system of the next industrial revolution, the factory of this one. And so inside Dynamo, the fundamental technology is disaggregated inference. Jason, I know you're super technical. Absolutely, I know it.

1:20

Speaker A

I'll let you take this one. Go ahead and define it for the audience. I don't want to step on you.

2:15

Speaker B

Yeah, thank you. I knew you wanted to jump in there for a second. But it's disaggregating inference, which means the pipeline — the processing pipeline of inference — is extremely complicated. In fact, it is the most complicated computing problem today. Incredible scale, lots of mathematics of different shapes and sizes. And we came up with the idea that you would disaggregate parts of the processing, such that some of it can run on some GPUs and the rest of it can run on different GPUs. And that led us to realizing that maybe even disaggregated computing could make sense — that we could have a heterogeneous mix of computing. That same sensibility led us to Mellanox. You know, today Nvidia's computing is spread across GPUs, CPUs, scale-up switches, scale-out switches, networking processors. And now we're going to add Groq to that, and we're going to put the right workload on the right chips. You know, we've really evolved from a GPU company to an AI factory company.

2:19

Speaker E

I mean, I think that was probably the biggest takeaway that I had. You're seeing this fundamental disaggregation where we've gone from a GPU, and now you have this complexion of all these different options that will eventually exist. The thing that you said on stage — and I would like the high-value-inference people to take a listen to this — was that 25% of your data center space should be allocated to this Groq LPU-GPU combo.

3:24

Speaker B

We have Groq at about 25% of the Vera Rubins in the data center.

3:50

Speaker E

So can you tell us how the industry looks at this idea of now basically creating this next-generation form of prefill/decode disaggregation, and how do you think people will react to it?

3:55

Speaker B

Yeah — take a step back. At the time that we added this, we went from large language model processing to agentic processing. Now, when you're running an agent, you're accessing working memory, you're accessing long-term memory, you're using tools, you're really beating up on storage really hard. You have agents working with other agents. So some of the agents are very large models, some of them are smaller models, some of them are diffusion models, some of them are autoregressive models. And so there are all kinds of different types of models inside this data center. We created Vera Rubin to be able to run this extraordinarily diverse workload. We used to be a one-rack company; we've now added four more racks. So Nvidia's TAM, if you will, increased from whatever it was to something — call it, you know, 33%, 50% higher. Now, part of that 33% or 50% — a lot of it is going to be storage processors; it's called BlueField. A lot of it, I'm hoping, will be Groq processors, and some of it will be CPUs, and they're all good. And a lot of it is going to be networking processors. And so all of this is going to be running basically the computer of the AI revolution, called agents — the operating system of modern industry.

4:06

Speaker D

What about embedded applications? So my daughter's teddy bear at home wants to talk to her. What goes in there? Is it a custom ASIC or does there end up becoming much more a broader set of TAM with developing tools that are maybe different for different use cases at the edge and an embedded application?

5:34

Speaker B

We think that there's three computers in the problem at the largest scale. When you take a step back, there's one computer that's really about training the AI model, developing, creating the AI, another computer for evaluating it. Depending on the type of problem you're having, like for example, you look around, there's all kinds of robots and cars and things like that. You have to evaluate these robots inside a virtual gym that represents the physical world. So it has to be software that obeys the laws of physics. And that's a second computer. We call that omniverse. The third computer is the computer at the edge, the robotics computer. That robotics computer. One of them could be self driving car, another one's a robot, another one could be a teddy bear, little tiny one for a teddy bear. One of the most important ones is one that we're working on that basically turns the telecommunications base stations into part of the AI infrastructure. So now all of the it's a $2 trillion industry. All of that in time will be transformed into an extension of the AI infrastructure. And so radios will become edge devices, factories, warehouses, you name it. And so there are these three basic computers. All of them are going to be necessary.

5:52

Speaker C

Jensen, last year I think you were ahead of the rest of the world in saying inference is going to 1000x.

7:11

Speaker B

Just last year, yes — it hurt my feelings.

7:18

Speaker C

Is it going to 1,000,000 x? Is it going to 1,000,000,000 x?

7:21

Speaker B

Yeah, right.

7:24

Speaker C

And I think people at the time thought it was pretty hyperbolic, because the world was still focused on pre-training scaling. Here we are now: inference has exploded. We're inference constrained. You announced an inference factory that I think is leading edge, that's going to be 10x better in terms of throughput than the next factory. But if I listen to the chatter out there, it's that your inference factory is going to cost 40 or 50 billion, and the alternatives — the custom ASICs, AMD, others — are going to cost 25 to 30 billion, and you're going to lose share. So why don't you talk to us? What are you seeing? How do you think about share? And does it make sense for all these folks to pay something that's a 2x premium to what others are marketing?

7:25

Speaker B

The big takeaway, the big idea, is that you should not equate the price of the factory with the cost of the tokens. It is very likely — and in fact I can prove it — that the $50 billion factory will generate for you the lowest-cost tokens. And the reason for that is because we produce these tokens at extraordinary efficiency — ten times. You know, the difference between $50 billion — now, it turns out $20 billion is just land, power and shell. Right?

8:07

Speaker E

Right.

8:47

Speaker B

And then on top of that you have storage anyways, networking anyways. You've got CPUs anyways, you've got servers anyways, you've got cooling anyways. The difference between that GPU being 1x the price or half the price is not the difference between $50 billion and $30 billion. Pick your favorite number, but let's say between $50 billion and $40 billion. That is not a large percentage when the $50 billion data center has actually 10 times the throughput.

8:47

Speaker E

Right.

9:14

Speaker B

That's the reason why I said that even when the chips are free, if you can't keep up with the state of the technology and the pace that we're running, it's not cheap enough.

9:16

Speaker E

Can I just ask a general strategy question? I mean, you're running the most valuable company in the world. This thing is gonna do 350 plus billion of revenue next year, 200 billion of free cash flow. It's compounding at these crazy rates. How do you decide what to do? Like, how do you actually get the information? I mean, it's famous now, these sort of emails that people are meant to send you. But how do you really decide to get an intuition of how to shape the market, where to really double down, where to maybe pull back, where to actually go into a greenfield? How does that information get to you? How do you decide these things?

9:27

Speaker B

In the final analysis, that's the job of the CEO. Our job is to define the vision and define the strategy. We're informed, of course, by amazing computer scientists, amazing technologists, great people all over the company. But we have to shape that future. Well, part of it has to do with: is this something that's insanely hard to do? If it's not hard to do, we should back away from it, and the reason for that is, if it's easy to do, obviously there will be lots of competitors. Is this something that has never been done before, that's insanely hard to do, and that somehow taps into the special superpowers of our company? And so I have to find this confluence of things that meets the standard. And in the end, we also know that a lot of pain and suffering is going to go into it. There are no great things that were invented because they were just easy to do — just first try, here we are. And so if it's super hard to do and nobody's ever done it before, it's very likely that you're going to have a lot of pain and suffering, and so you'd better enjoy it.

10:01

Speaker E

So can you just look at maybe three or four of the more long-tail things you announced and talk about their long-term viability — whether it's the data centers in space, or what you're trying to do with ADAS and autos, or what you're trying to do on the biology side? Just give us a sense of how you see some of these curves inflecting upwards in some of these longer-tail businesses.

11:00

Speaker B

Excellent. Physical AI — large category, we believe. And I just mentioned we have three computing systems, with all the software platforms on top of them. Physical AI is a large category. It's the technology industry's first opportunity to address a $50 trillion industry that has largely been void of technology until now. And so we need to invent all of the technology necessary to do that. I felt that that was a 10-year journey. We started 10 years ago. We're seeing it inflect now. It is a multibillion-dollar business for us — it's close to $10 billion a year now. And so it's a big business, and it's growing exponentially. And so that's number one. I think in the case of digital biology, we are literally near the ChatGPT moment of digital biology. We're about to understand how to represent genes, proteins, cells. We already know how to understand chemicals. And so the ability for us to represent and understand the dynamics of the building blocks of biology — that's a couple — two, three, five years from now. In five years' time, I completely believe the healthcare industry is where digital biology is going to inflect. And so these are a couple of the really great ones. And you can see them all around us. Agriculture — agriculture's inflecting now, no question. Yeah.

11:20

Speaker A

Jensen, I want to take you from the data center to the desktop. The company was built in large part on hobbyists, video gamers, and all those graphics cards in the beginning. And you mentioned, in front of I think 10,000 people here, OpenClaw, Claude Code, and what a revolution agents have become. And specifically the hobbyists — who are really where a lot of the energy is, where we see a lot of the innovation break out — want desktops. You announced one here; I believe it's the Dell 6800. This is a very powerful workstation to run local models, 750 gigs of RAM. Obviously the Mac Studio sold out everywhere. In my company we're moving to OpenClaw everything. Friedberg just got Claw-pilled. You got Claw-pilled, I understand, and you're obsessed with these. What does this from-the-streets movement of creating open-source agents and using open source on the desktop mean to you?

12:40

Speaker B

So great.

13:40

Speaker A

Where is that going?

13:40

Speaker B

Yeah, so great. First of all, let's take a step back. In the last two years we saw basically three inflection points. The first one was generative. ChatGPT brought AI to everybody's awareness. But the fact of the matter is, the technology sat in plain sight months before ChatGPT. It wasn't until ChatGPT put a user interface around it and made it easy for us to use that generative AI took off. Now, generative AI, as you know, generates tokens for internal consumption as well as external consumption. Internal consumption is thinking, which led to reasoning — o1 and o3 continued that wave of ChatGPT. Grounded information made AI not only answer questions, but answer questions in a more grounded, useful way. We started seeing the revenues and the economic model of OpenAI start to inflect. Then the third one — only inside the industry did we see Claude Code, the first agentic system that was very useful. Really revolutionary stuff. But Claude Code was only available for enterprises. Most people outside never saw anything about Claude Code until OpenClaw. OpenClaw basically put into the popular consciousness what an AI agent can do. That's the reason why OpenClaw is so important from a cultural perspective. Now, the second reason why it's so important is that OpenClaw is open, but it formulates, it structures, a type of computing model that is basically reinventing the computer altogether. It has a memory system — a short-term memory, a file system. It has scales. Did you say skills or scales?

13:41

Speaker A

Skills.

15:36

Speaker B

Oh, skills.

15:37

Speaker A

You have scales? Yes, theoretically, yeah, yeah, skills.

15:38

Speaker B

So the first thing: it has resources, it manages resources, it does scheduling — right, and cron jobs. It can spawn off agents. It can decompose a task and solve problems as it does scheduling. It has IO subsystems — it has input, it has output, it connects to WhatsApp. And it also has an API that allows it to run multiple types of applications, called skills.

15:40

Speaker A

Yeah.

16:09

Speaker B

These four elements fundamentally define a computer.

16:09

Speaker E

Yeah.

16:13

Speaker B

And therefore, what do we have? We have a personal artificial intelligence computer for the very first time.

16:14

Speaker E

Open source.

16:23

Speaker B

It's open source. It runs literally everywhere. And so this is basically the blueprint, the operating system, of modern computing.

16:24

Speaker A

Yeah.

16:33

Speaker B

And it's going to run literally everywhere. Now, of course, one of the things that we had to help it do: whenever you have agentic software, you have to recognize that agentic software has access to sensitive information, it can execute code, and it can communicate externally. We have to make sure that all of it is governed, all of it is secure, and that we have policies that give these agents two of those three things, but not all three at the same time. And so the governance part of it we contributed to Peter — Peter Steinberger was here — and we've got a mound of great engineers working with him to help secure and keep that thing safe, so that it can protect our privacy, protect our security.

16:34

Speaker D

Jensen, that paradigm shift makes some of the AI legislation that has passed around the country to regulate AI and a lot of the proposed legislation effectively moot, doesn't it? Can you just comment for a second on how quickly the paradigm shift kind of obviates a lot of the models for regulatory oversight of AI, which is becoming a very hot topic in politics right now?

17:12

Speaker B

Well, this is the part where, with policymakers, we always need to get in front of them. And Brad, you do a great job doing this. We have to get in front of them and inform them about the state of the technology — what it is and what it is not. It is not a biological being. It is not alien, it is not conscious. It is computer software.

17:34

Speaker E

Yeah, exactly.

18:00

Speaker B

And when we say things like "we don't understand it at all" — it is not true that we don't understand it at all. We understand a lot of things about this technology. And so I think, one, we have to make sure that we continue to inform the policymakers, and not allow doomerism and extremism to affect how policymakers think about and understand this technology. However, we still have to recognize that the technology is moving really fast, and not get policy ahead of the technology too quickly. And the risk that we run as a nation — our greatest source of national-security concern with respect to AI — is that other countries adopt this technology while we are so angry at it, or afraid of it, or somehow paranoid of it, that our industries and our society don't take advantage of AI. So I'm mostly worried about the diffusion of AI here in the United States.

18:01

Speaker E

Can you just double click if you were in the seat in the boardroom of Anthropic over that whole scuttlebutt with the Department of War, it sort of builds on this idea of people didn't know what to think. It's sort of added to this layer of either resentment or fear or just general mistrust that people have sometimes at the software levels of AI. What do you think you would have told Dario and that team to do, maybe differently, to try to change some of this outcome and some of this perception?

18:57

Speaker B

The first thing that I would say about Anthropic is, first of all, the technology is incredible. We are a large consumer of Anthropic technology. I really admire their focus on security, really admire their focus on safety — the culture by which they went about it, the technical excellence by which they went about it. Really fantastic. I would say that the desire to warn people about the capability of the technology is also really terrific. We just have to make sure we understand that the world has a spectrum, and that warning is good; scaring is less good, because this technology is too important to us, right? And I think that it is fine to predict the future, but we need to be a little bit more circumspect. We need to have a little bit more humility — that in fact we can't completely predict the future — and to say things that are quite extreme, quite catastrophic, when there's no evidence of them happening, could be more damaging than people think. And of course, we are technology leaders. There was a time when nobody listened to us, but now, because technology is so important in the social fabric, such an important industry, so important to national security, our words do matter. And I think we have to be much more circumspect. We have to be more moderate, we have to be more balanced, we have to be more thoughtful.

19:25

Speaker C

Well, you know, I would nominate you. I think the industry's got to get together — 17% popularity of AI in the United States. I mean, we see what happened to nuclear, right? We basically shut down the entire nuclear industry, and now we have 100 fission reactors being built in China and zero in the United States. We hear about moratoriums on data centers. So I think we have to be a lot more proactive about that. But I want to go back to this agentic explosion that you're seeing inside your company — the efficiencies, the productivity gains inside your company. There's a lot of debate about whether or not we're seeing ROI, right? And you and I, entering into this year — the big question was, are the revenues going to show up? Are the revenues going to scale like intelligence? And then we had this kind of Oppenheimer moment: a $5, $6 billion month by Anthropic in February. Do you think, as you look ahead — you announced, you know, visibility into a trillion dollars of just Blackwell and Vera Rubin over the course of the next couple of years — when you see this happening at Anthropic and OpenAI, do you think we're on that curve now, where we're going to see revenues scale in the way that intelligence is scaling?

20:59

Speaker B

I'll answer this a couple of different ways. When you look around this audience, you will see that Anthropic and OpenAI are represented here. But in fact, 99% of everything that is here is AI, and it's not Anthropic and OpenAI.

22:07

Speaker E

Right, right.

22:21

Speaker B

And the reason for that is because AI is very diverse. I would say that the second most popular model, as a category, is open models — open source, open weights. OpenAI is number one, open source is number two, and a very distant third is Anthropic. And that tells you something about the scale of all of the AI companies that are here. And so it's important to recognize that. Let me come back and say a couple of things. One: when we went from generative to reasoning, the amount of computation we needed was about 100 times. When we went from reasoning to agentic, the computation is probably another hundred times. So now we're looking at, in just two years, computation going up by a factor of 10,000x. Meanwhile, people pay for information, but people mostly pay for work. Yes, talking to a chatbot and getting an answer is super great. Helping me do some research — unbelievable. But getting work done, I'll pay for. And so that's where we are. Agentic systems get work done. They're helping our software engineers get work done. And so then you take that: you've got 10,000x more compute, you've got probably at this point 100x more consumption, and we haven't even started scaling yet. We are absolutely at a million x,

22:21

Speaker A

which is, I think, a great place to talk about the number of 20,

23:54

Speaker B

30,000 at the company — we have 43,000 employees. You know, I would say 38,000 are engineers.

23:59

Speaker A

The conversation we've had on the pod a number of times is, oh, my God, look at the token usage in our companies. It is growing massively. And some people are asking, hey, when I join a company, how many tokens do I get? Because I want to be an effective employee. And you postulated, I believe, during your two and a half hour keynote. Pretty long keynote.

24:06

Speaker B

Well done. That you were spending. It was well done. It would be shorter. Yeah.

24:27

Speaker A

You didn't have time to do it.

24:32

Speaker B

Yeah. So you guys. So you guys know. So you guys know. There is no practice. And so it's a grip and rip.

24:33

Speaker E

And rip.

24:41

Speaker B

Yeah. Yeah. So. So I just want to let you know I was writing the speech while I was giving the speech. Okay. So you never know.

24:42

Speaker A

But does that mean if we do

24:49

Speaker B

back — I apologize — back-of-the-envelope math. Yeah.

24:51

Speaker A

$75,000 in tokens for each engineer, or something like that. So are you spending at Nvidia a billion, $2 billion on tokens for your engineering team right now?

24:54

Speaker B

We're trying to. Let me give you a thought experiment. Let's say you have a software engineer or AI researcher and you pay them $500,000 a year. We do that all the time. Okay. This is happening all of the time. That $500,000 engineer at the end of the year, I'm going to ask him, how much did you spend in tokens? And that person said, $5,000. I will go ape. Something else.

25:02

Speaker E

Yes.

25:24

Speaker D

Right.

25:25

Speaker B

If that. If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed. Okay. And this is no different than one of our chip designers who says, guess what? I'm just going to use paper and pencil. I don't think I'm going to need any CAD tools.

25:25

Speaker A

This is a real paradigm shift to start thinking about these All Star employees. It almost reminds me of what we Learned in the NBA when LeBron James started spending a million dollars a year just on his health, of his body, like in maintaining it.

25:45

Speaker B

That's right.

25:58

Speaker A

Here he is at age 41, still playing. It really is. Hey, if these are incredible knowledge workers, why wouldn't we give them superhuman abilities?

25:58

Speaker B

That's exactly.

26:07

Speaker A

Where does that go? If we, if we extrapolate out two or three years from now, what is the efficiency of that All Star at Nvidia and what they're able to accomplish? What do they look like?

26:08

Speaker B

Well, first of all, things that, wow, this is too hard. That thought is gone. This is going to Take a long time. That thought is gone. We're going to need a lot of people. That thought is gone. This is no different than in the last industrial revolution. Somebody goes, boy, that building really looks heavy. Nobody says that. Wow, that mountain looks too big. Nobody says that. Everything that's too big, too heavy, takes too long. Those ideas are all gone.

26:19

Speaker E

You're reduced to creativity.

26:48

Speaker B

That's right.

26:49

Speaker E

What can you come up with?

26:50

Speaker B

Exactly. Which means now the question is: how do you work with these agents? Well, it's just a new way of doing computer programming. In the past, we wrote code. In the future, we're going to write ideas, architectures, specifications. We're going to organize teams. We're going to help them define how to evaluate — the definition of good versus bad, what it looks like when something is a great outcome — how to iterate with you, how to brainstorm. That's really what you're looking for. And I think that every engineer is going to have 100 agents.

26:51

Speaker A

Back to the PR problem the industry has right now. You have executives like David Friedberg with Ohalo, who's looking at literally, through the use of technology, your technology and AI, increasing the number of calories produced and making high-quality calories. What is the factor you think you can bring the cost down by, Friedberg? And what impact does this vision have

27:25

Speaker D

for what you're doing? Zero-shot genomic modeling, and it works.

27:50

Speaker B

Yeah.

27:54

Speaker D

And you have that moment and you're like, holy. Honestly. And that's after people are replacing entire enterprise software stacks in a night. I did something in 90 minutes. I was telling the guys about it: replaced the whole software stack and a whole bunch of workload. Ninety minutes on Claude. Ran this agentic system, built the whole thing, deployed it, and we got.

27:54

Speaker E

On a Sunday night.

28:15

Speaker D

On a Sunday night. 10pm, I was done. At 11:30 I went to bed as

28:15

Speaker A

the CEO you replaced.

28:18

Speaker D

Yeah. And everyone on my management team had to do a similar exercise over the weekend. What we saw on Monday, I was like, it's over. But the technical stuff, the science stuff: we did something in 30 minutes using auto research, and I'd love your view on auto research and what that tells us about how far we still have to go in terms of efficiency. Using auto research and a chunk of data, something was published internally where we said, oh my God. That would normally be a PhD thesis that would take seven years. It would be one of the most celebrated PhD theses we've ever seen in this field, it would be in the journal Science, and it was done in 30 minutes on a desktop computer running auto research with all the data we just ingested. We got it on Friday and we're like, hey, let's try it. Boot it up, go into GitHub, download auto research, run it, and you see everyone's face just go like. And the potential of what this is unlocking for us: the kind of thing that would take seven years happened in 30 minutes. And we're experiencing it in genomics, and we're like, this is unbelievable. So I think the acceleration is widening the aperture for everyone in a way that you didn't imagine a few years ago. But just going back to the auto research point, can you comment on the fact that this thing got published with 600 lines of code in a weekend, and the capacity it has to run locally and achieve what it can achieve with all of these diverse data sets, and what that tells us about how early we are in terms of optimization on algorithms and hardware?

28:20

Speaker B

The fundamental reason why OpenClaw is so incredible, number one, is its confluence, its timing with the breakthroughs in large language models. Yeah, its timing was perfect, it was impeccable. In a lot of ways, Peter probably wouldn't have come up with it if not for the fact that Claude and GPT and ChatGPT have reached a level that is really very good. It is also a new capability that allows these models to use the tools that we've created over time: web browsers and Excel spreadsheets, and in the case of chip design, Synopsys and Cadence and Omniverse and Blender and Autodesk. All of these tools are going to continue to be used. Some people say that the enterprise IT software industry is going to get destroyed. Let me give you the alternative view. The enterprise software industry is limited by butts in seats. It's about to get 100 times more agents banging on those tools. There are going to be agents banging on SQL, agents banging on vector databases, agents banging on Blender, agents banging on Photoshop. And the reason for that is because, first of all, those tools do a very good job. Second, those tools are the conduit between us. In the final analysis, when the work is done, it has to be represented back to me in a way that I can control, and I know how to control those tools. And so I need everything to be put back into Synopsys. I want everything to be put back into Cadence, because that's how I control it, that's how I ground truth it.

29:46

Speaker E

Let me ask you a question about open source. So we have these closed source models; they're excellent. We have these open weight models; many of the Chinese models are incredible. Absolutely incredible. Two days ago, you may not have seen this because you were busy on stage, but there was a training run that happened in this crypto project called Bittensor Subnet 3. They managed to train a 4 billion parameter Llama model, totally distributed, with a bunch of people contributing excess compute. But they were able to do it statefully and manage a training run, which I thought was a pretty crazy technical accomplishment. Yeah, because it's random people and each person gets a little share.
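
The distributed run described above can be sketched, at its very simplest, as synchronous data-parallel training: independent contributors each compute a gradient on their own private shard, and a coordinator averages those gradients before every weight update. This is a toy illustration of the general idea only, not the Bittensor Subnet 3 protocol; the one-parameter model and the data are invented for the example.

```python
# Toy sketch of data-parallel distributed training: each "contributor"
# computes a gradient on its own data shard, and a coordinator averages
# the gradients before updating the shared weight. Illustrative only.

def local_gradient(w, shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    n = len(shard)
    return sum(2 * (w * x - y) * x for x, y in shard) / n

def training_step(w, shards, lr=0.01):
    """One synchronous step: average per-contributor gradients, then update."""
    grads = [local_gradient(w, s) for s in shards]  # computed independently
    avg = sum(grads) / len(grads)                   # coordinator aggregates
    return w - lr * avg

# Three contributors, each holding a private shard drawn from y = 3x.
shards = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0), (4.0, 12.0)],
    [(0.5, 1.5), (5.0, 15.0)],
]

w = 0.0
for _ in range(200):
    w = training_step(w, shards)

print(round(w, 3))  # converges toward the true weight, 3.0
```

The hard parts the real project had to solve, statefulness across unreliable participants and verifying untrusted contributions, sit on top of this basic averaging loop.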

31:24

Speaker B

A modern version of Folding@home. Exactly, yeah.

32:04

Speaker E

So what do you think about the end state of open source? Do you see this decentralization of architecture as well, and decentralization of compute, to support open weights and a totally open source approach to making sure AI is broadly available to everyone?

32:07

Speaker B

I believe we fundamentally need models as a first class, proprietary product, as well as models as open source. These two things are not A or B; it's A and B, there's no question about it. And the reason for that is because a model is a technology, not a product; a model is a technology, not a service. For the vast majority of consumers, the horizontal layer, the general intelligence: I would really, really love not to go fine-tune my own. I would really love to keep using ChatGPT. I love using Claude, I love using Gemini, I love using X. And they all have their own personalities, as you know, which kind of depends on my mood and depends on what problem I'm trying to solve. I might do it on X or I might do it on ChatGPT. And so that segment of the industry is thriving; it's going to be great. However, all these industries, their domain expertise, their specialization, has to be channeled, has to be captured in a way that they can control, and that can only come from open models. The open model industry, which we're contributing tremendously to, is near the frontier. And quite frankly, even if it reaches the frontier, I think that world class models as a product are going to continue to thrive.

32:22

Speaker A

Every startup we're investing in now is open source first and then going to the proprietary model.

33:46

Speaker B

Yeah, and the beautiful thing is, because you have a great router, you connect the two on day one. Every single day, you're going to have access to the world's best model. And then it gives you time to cost reduce and fine tune and specialize. And so you're going to have world class capabilities out of the chute every single time.
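
The "great router" idea above can be sketched as a thin dispatch layer: default every request to the best general hosted model, and divert requests to a cheaper, specialized open model only for the domains you have already fine-tuned. The model names and the domain-keyed rule here are invented for illustration; a real router would score requests rather than match a declared domain.

```python
# Minimal sketch of a model router: prefer a specialized, self-hosted
# open model when the request matches a domain you have fine-tuned for,
# and fall back to a frontier hosted model otherwise. All names are
# hypothetical placeholders, not real APIs.

SPECIALIZED_DOMAINS = {
    "chip-design": "local/open-model-finetuned-eda",  # cost-reduced specialist
    "genomics": "local/open-model-finetuned-bio",
}
FRONTIER_MODEL = "hosted/frontier-model"  # best general capability, day one

def route(request: dict) -> str:
    """Pick a model for one request based on its declared domain."""
    return SPECIALIZED_DOMAINS.get(request.get("domain"), FRONTIER_MODEL)

print(route({"domain": "genomics", "prompt": "annotate this variant"}))
print(route({"domain": "travel", "prompt": "plan a trip"}))
```

The point of the design is that the fallback arm is always the frontier model, so specialization never costs you general capability.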

33:53

Speaker C

Can I? Of course, nobody wants the US to win the global AI race more than you. But a year ago, the Biden era diffusion rule really worked against the American diffusion of AI around the world. So here we are, a year into the new administration. Give us a grade. Where are we in terms of global diffusion and the rate at which we're spreading US AI technology around the world? Are we an A? Are we a B? Are we a C? What's working? What's not working?

34:12

Speaker B

Well, first of all, President Trump wants American industry to lead. He wants the American technology industry to lead. He wants the American technology industry to win. He wants us to spread American technology around the world. He wants the United States to be the wealthiest country in the world. He wants all of that. At the current moment, as we speak, Nvidia gave up a 95% market share in the second largest market in the world, and we're at 0%. That's right. President Trump wants us to get back in there, and the first thing is to get licenses for the companies that we're going to be able to sell to. We've got many companies who have requested licenses, we've applied for licenses for them, and we've got approved licenses from Secretary Lutnick. Now we've informed the Chinese companies, and many of them have given us purchase orders. And so we're in the process of cranking up our supply chain again to go ship. I think at the highest level, Brad, one of the things that we should acknowledge is this. Our national security is diminished when we don't have access to miniature motors, rare earth minerals. It's diminished when we don't control our telecommunications networks. It's diminished when we can't provide for sustainable energy for our country. It is fundamentally diminished. Every single one of these industries is an example of what I don't want the AI industry to be.

34:44

Speaker E

Right.

36:20

Speaker B

When we look forward in time and we say, what do we want? What does it look like when the American technology industry, the American AI industry, leads the world? We can all acknowledge that there is no way that one AI model wins universally. We can all acknowledge that that is an outcome that makes no sense. However, we can all imagine that the American tech stack, from chips to computing systems to the platforms, is used broadly by the world, where they build their own AI, they use public AI, they use private AI, whatever, and they can build their applications in their society. I would love that: the American tech stack being 90% of the world. I would love that. The alternative, if it looks like solar, rare earths, magnets, motors, telecommunications, I consider that a very bad outcome for national security.

36:21

Speaker D

How much are you monitoring the situation with the conflicts around the world right now, and how much does it worry you, Jensen? So China and Taiwan, and then helium availability coming out of the Middle East, which I understand can be a supply chain risk to semiconductor manufacturing. How much do these situations worry you? How much time are you spending on them?

37:22

Speaker B

Well, first of all, in the Middle East, we have 6,000 families there. Yeah, we have a lot of Iranians at Nvidia, and their families are still in Iran. And so we have a lot of families there. The first thing is, they're quite anxious, they're quite concerned, quite scared. We're thinking about them all the time. We're monitoring and keeping an eye on them all the time. They have 100% of our support. I've been asked several times, are we still considering being in Israel? We are 100% in Israel. We are 100% behind the families there. We are 100% in the Middle East. I was also asked, you know, given what's happening in the Middle East, is that an area where we believe we can expand artificial intelligence to? I believe that there's a reason we went to war, and I believe at the end of the war, the Middle East will be more stable than before. And so if we were considering it before, we should absolutely be considering it after. So I'm 100% in on that. With respect to Taiwan, we have to do three things. One, we have to make sure that we reindustrialize the United States as fast as we can, whether it's the chip manufacturing plants, the computer manufacturing plants, or the AI,

37:40

Speaker A

how are we doing on that?

38:55

Speaker B

We're doing excellent. By gaining the strategic support of the supply chain of Taiwan, by gaining their friendship, by gaining their support, we were able to build Arizona and Texas and California at incredible rates. They are genuinely a strategic partner. They deserve our support, they deserve our friendship, they deserve our generosity. And they're doing everything they can to accelerate the manufacturing process for us. So I think that's number one. Number two, we ought to diversify the manufacturing supply chain, whether it's South Korea, whether it's Japan, whether it's Europe. We ought to diversify the supply chain, make it more resilient. And number three, let's demonstrate restraint. While we're increasing our diversity and resilience, let's not push unnecessarily.

38:56

Speaker A

We need to be patient.

40:04

Speaker D

Is helium a problem? A lot of reports, you know, I

40:05

Speaker B

think helium could be a problem, but it's also the case that the supply chain probably has a lot of buffer in it. These kinds of things tend to have a lot of buffer.

40:08

Speaker A

But you know, you've made massive progress in self driving. You made a big announcement; you've added many more partners, including BYD. There was just a video of you driving around in a Mercedes, and a huge announcement with Uber that you're going to have a number of cars on the road from many different manufacturers. Your bet, I believe, is that there's going to be an Android-type open source platform that you're going to play a major part in, with dozens of car providers. And then maybe on the other side there could be an iOS with Tesla or Waymo. What's your strategic thinking there, and how does that chessboard emerge? Because it feels like you have a pretty deep stack, and in some ways you're competing and in other places you're collaborating.

40:17

Speaker B

Yeah. Taking a step back, we believe that everything that moves will be autonomous, completely or partly, someday. Number one. Number two, we don't want to build self driving cars, but we want to enable every car company in the world to build self driving cars. And so we built all three computers: the training computer, the simulation computer, the evaluation computer, as well as the car computer. We developed the world's safest driving operating system. We also created the world's first reasoning autonomous vehicle, so that it could decompose complicated scenarios into simpler scenarios that it knows how to navigate through, just like our own reasoning. And so that reasoning system, called Alpamayo, has enabled us to achieve incredible results. We open it up: we vertically optimize, we horizontally innovate, and we let everybody decide. Do you want to buy one computer from us? In the case of Elon and Tesla, they buy our training computers. Do you want to buy our training computer and our simulation computers, or do you want to work with us to do all three and even put the car computer in your car? Our attitude is we want to solve the problem; we're not the sole solution provider, and we're delighted however you work with us.

41:08

Speaker E

Let me build on this question, because I think it's so fascinating. You actually do create this platform; a thousand flowers are blooming. But it's also true that some of those flowers now want to go back down the stack and try to compete with you a little bit. Google has TPU, Amazon has Inferentia and Trainium. Everybody's sort of spinning up their own version of, I think I can out-Nvidia Nvidia, even though they also tend to be huge customers. How do you navigate that, and what do you think happens over time? Where do those things play in the complexion of this industry?

42:32

Speaker B

Yeah, really great question. You know, first of all, we're an AI company. We build foundation models. We're at the frontier in many different domains. We build every single layer, every single part of the stack. And we're the only AI company in the world that works with every AI company in the world. They never show me what they're building, and I always show them exactly what I'm building.

43:06

Speaker E

Right?

43:27

Speaker B

Yeah. And so the confidence comes from this. One, we are delighted to compete on what is the best technology. And to the extent that we can continue to run fast, I believe that buying from Nvidia is still one of the most economic things they could do. There's just incredible confidence there. Number two, we're the only architecture that can be in every cloud, and that gives us some fundamental advantages. We're the only architecture you could take from a cloud and put on prem, in the car, in any region. In space, that's right, in space. And so there's a whole part of our market, about 40% of our business. Most people don't realize this. For that 40% of our business, unless you have the CUDA stack, unless you can build an entire AI factory, the customers don't know what to do with you. They're not trying to build chips, they're not trying to buy chips. They're trying to build AI infrastructure. And so they want you to come in with the full stack, and we've got the whole stack. And so, surprisingly, Nvidia is gaining market share. If you look at where we are today, we're gaining share.

43:28

Speaker E

Do you think what happens is these guys try and they realize, oh, my God, it's too much, and then they come back. Is that why the share grows?

44:32

Speaker B

Well, we're gaining share for several reasons. One, our velocity has gone up. We help people realize it's not about building the chip, it's about building the system. And that system's really hard to build. And so their business with us is increasing. In the case of AWS, I think they just announced yesterday that they're going to buy a million chips in the next couple of years. I mean, that's a lot of chips from AWS, and that's on top of all the chips they've already bought. And so we're delighted to do that. We're gaining share these last couple of years because we now have Anthropic coming to Nvidia, Meta is coming to Nvidia, and the growth of open models is incredible. And that's all on Nvidia. And so we're growing in share because of the number of models. We're also growing in share because all of these companies are outside of the cloud, and they're growing regionally, in enterprise, and in industries at the edge. And that entire segment of growth is really hard to do if you're just building an ASIC.

44:38

Speaker C

Brad, related to that. And not to get in the weeds on the numbers, but analysts don't seem to believe it. If you look at the consensus forecast: you said compute could go up a million x. Right. And yet they have you growing next year at 30%, the year after that at 20%, and in 2029, which is supposed to be a monster year, at 7%. Right. So if you take your TAM and you apply their growth numbers, it suggests that your share will plummet. Do you see anything in your future order book that would make that correct?

45:43

Speaker B

Yeah, first of all, they just don't understand the scale and the breadth of AI.

46:18

Speaker C

Yes.

46:23

Speaker E

Yeah, yeah, I think that's true.

46:24

Speaker B

Most people think that AI is in the top five hyperscalers.

46:25

Speaker E

Right? That's right. There's also an orthodoxy around the law of large numbers, where, you know, they have to go back to their investment banking risk committee and show some model. They're not going to believe in their minds that 5 trillion goes to 15 trillion. They're like, it can go to 7.

46:29

Speaker C

Or they don't have to have a $10 trillion company.

46:46

Speaker E

It's all just CYA stuff, I think.

46:48

Speaker D

It never happened before. So you can't say it will.

46:50

Speaker B

And because you have to redefine what it is that you do. There was somebody who made an observation recently: Jensen, how can Nvidia be larger than Intel in servers? And the reason is that the CPU market of the entire data center was about $25 billion a year.

46:52

Speaker E

Right.

47:11

Speaker B

We do $25 billion, as you guys know, in the time that we've been sitting here. And so obviously, obviously that was a joke.

47:12

Speaker A

No, it's.

47:22

Speaker B

But it's the All-In podcast.

47:23

Speaker D

Don't worry. Everything on this show is roughly accurate.

47:25

Speaker B

That was not guidance. But anyhow. The point is, how big you can be depends on what it is that you make. Number one, Nvidia is not making chips. Number two, making chips does not help you solve the AI infrastructure problem anymore; it's too complicated. Number three, most people think of AI narrowly, in the things that they talk about and hear and see. AI is much bigger. OpenAI is incredible; they're going to be enormous. Anthropic is incredible; they're going to be enormous. But AI is going to be much, much bigger than that. How do we address that segment?

47:31

Speaker E

Tell us about data centers in space for a second.

48:06

Speaker B

Yep, we're already in space.

48:08

Speaker E

How should the layman think about what that business is, versus when you hear about these big data center buildouts that are happening on the ground?

48:11

Speaker B

Well, we should definitely work on the ground first, because we're already here. Number one. Number two, we should prepare to be out in space. And obviously there's a lot of energy in space. The challenge, of course, is cooling: you can't take advantage of conduction and convection, so you can only use radiation. And radiation requires very large surfaces. Now, that's not an impossible thing to solve, and there's a lot of space in space. But nonetheless, the expense is still quite high. We're going to go explore it. We're already there. We're already radiation hardened. We have CUDA in satellites around the world. They're doing imaging, image processing, AI imaging, and that kind of stuff ought to be done in space. Instead of sending all the data back here and doing imaging down here, we ought to just do imaging out in space. And so there's a lot of things that we ought to do in space. In the meantime, we're going to explore what the architecture of data centers looks like in space. And it'll take years. It's okay. I got plenty of time.

48:21

Speaker A

I wanted to double click on healthcare. I know you've got a big effort there. We're all of a certain age where we're thinking about lifespan, healthspan. I mean, we all look great.

49:24

Speaker E

I think some better than others.

49:32

Speaker A

I think some better than others. I don't know what your secret is, Jensen.

49:34

Speaker E

Pretty good.

49:37

Speaker A

I mean, what are you taking, Jensen? What's off the menu? You got to talk to me when we're backstage.

49:38

Speaker B

I want to know in the green room what you got going on. Squats and push-ups and sit-ups. Perfect.

49:43

Speaker A

Okay. But, you know, in terms of the build out in healthcare, where is that going, and what kind of progress are we making? I was just using Claude to do some analysis, asking, where are all these billing codes going? We spend twice as much money in the US and seem to get half as much. It seemed like 15 to 25% of the dollars spent were on these first GP visits. And I think we all know ChatGPT and a large language model does a better job, more consistently, today at a first visit. So what has to happen there to break through all that regulation and have AI make a true impact on the healthcare system?

49:47

Speaker B

There are several areas we're involved in in healthcare. One is AI physics, or AI biology: using AI to understand, represent, and predict biological behavior. That's very important in drug discovery. The second is AI agents, and that's the assistance in helping diagnosis and things like that. OpenEvidence is a really good example. Hippocratic is a really good example. Love working with those companies. I really think this is an area where agentic technology is going to revolutionize how we interact with doctors and how we interact with healthcare. The third part we're involved in is physical AI. The first one is AI physics, using AI to predict physics. The second one is physical AI: AI that understands the properties of the laws of physics, and that's used for robotic surgery. Huge amounts of activity there. Every single instrument, whether it's ultrasound or CT or whatever instrument we interact with in a hospital, in the future will be agentic. Yeah, you know, OpenClaw, in a safe version, will be inside every single instrument. And so in a lot of ways that instrument is going to be interacting with patients and nurses and doctors in a very unique way.

50:29

Speaker A

Yeah, I mean, there's so much investment in AI weapons. It would be wonderful to see some investment in AI EMTs and paramedics, saving lives, not just taking them. Which I think is a great segue into robotics. You've got dozens of partners. We had this very weird, I don't know, I want to call it lost decade, or 20 years, with Boston Dynamics, and Google bought a bunch of companies and then wound up selling them and spinning them out, where people just thought robotics is not ready for primetime. And now here we have the world's greatest entrepreneur at this time, tied with you, Elon Musk (that was a good save, I hope), doing well with Optimus, pretty impressive. And then other companies in China. How close is that to actually being in our lives, where we might see a robotic chef, a robotic nurse, a robotic housekeeper, this humanoid form factor actually working in the real world, knowing what you know with those partners and the fidelity, especially in China, where they seem to be doing as good a job as we're doing here, or maybe better?

51:48

Speaker B

We invented the industry; largely, America invented it. You could argue we got into it too soon.

52:54

Speaker E

Yeah.

53:00

Speaker B

And we got exhausted. We got tired. About five years before the enabling technology appeared.

53:01

Speaker A

Yes, the brain.

53:08

Speaker B

Yeah, yeah. And we just got tired of it, just a little too soon. Okay, that's number one. But it's here now. Now the question is, how much longer? From the point of a high functioning existence proof to reasonable products, technology never takes more than a couple, two, three cycles. And a couple, two, three cycles would basically be somewhere around three to five years. That's it. Three to five years, we're going to have robots all over the place. I think China is formidable. And the reason is that their microelectronics, their motors, their rare earths, their magnets, which are foundational to robotics, are the world's best. And so in a lot of ways, our robotics industry relies deeply on their ecosystem and their supply chain. And they're obviously moving very quickly. Our robotics industry will have to rely a lot on it. The world's robotics industry will have to rely a lot on it. And so I think you're going to see some fast, fast movement here.

53:09

Speaker A

Ultimately, one for one. Elon seems to think we're going to have one robot for every human. Seven billion for seven billion. Eight billion for eight billion.

54:16

Speaker B

Well, I'm hoping more. Yeah, I'm hoping more, yeah. Well, first of all, there's a whole bunch of robots that are going to be in factories working around the clock. There's going to be a whole bunch of factory robots that don't move. They move a little bit. Almost everything will be robotic.

54:23

Speaker A

What does the world look like?

54:38

Speaker D

Sorry, let me say: robotics, for me, is one of the pieces that I think unlocks economic mobility opportunities for every individual. Everyone. When everyone got a car, they could go and do a lot of different jobs. When everyone gets a robot, their robot can do a lot of work for them. They can stand up an Etsy store or a Shopify store. They can create anything they want with their robot. They can do things that they independently cannot do. I think the robot is going to end up being the greatest unlock for prosperity for more people on Earth than we've ever seen with any technology before.

54:39

Speaker B

Yeah, no doubt. I mean, just the simple math at the moment is we're millions of people short in labor today.

55:12

Speaker E

Right.

55:19

Speaker B

We're actually really desperately in need of robotics, and all of these companies could grow more if they had more labor. That's number one. And some of the things that you mentioned are super fun. I mean, because of robots, we'll have virtual presence. You know, I'll be able to go into the robot in my house and virtually operate it while I'm on a business trip.

55:20

Speaker E

Right. Teleoperation.

55:45

Speaker B

Walk around the house, walk the dog. Yeah, walk the dog.

55:46

Speaker A

Rake the leaves.

55:49

Speaker B

Yeah, exactly.

55:50

Speaker D

Rake the dog.

55:50

Speaker B

Maybe not quite that, but just, you know, just, you know, wander around and just see what's going on in the house, you know, chat with the dog, with the kids.

55:51

Speaker D

Yeah, yeah.

55:58

Speaker B

Travel is also going to change. We're going to be able to travel at the speed of light, you know, and so, you know, clearly we're going to send our robots ahead of us.

56:00

Speaker D

Yeah.

56:07

Speaker B

Not going to send myself. I'm going to send a robot, you

56:08

Speaker E

know, check it out.

56:10

Speaker B

Yeah, yeah. And then I'm going to upload my AI.

56:11

Speaker D

Well, it's inevitable. It unlocks the moon and it unlocks Mars as targets for colonization, which gives us infinite resources. Getting material back from the moon is effectively zero energy cost, because you can use solar and accelerate it. So you could have factories on the moon that make everything the world needs. And the robots are going to be the unlock that enables it.

56:13

Speaker B

That's right. Distance no longer matters.

56:32

Speaker D

Distance doesn't matter. Yeah, yeah.

56:34

Speaker C

The more revenue we get out of models and agents, the more we can invest in building the infrastructure, which then unlocks more capabilities on models and agents. Dario, on Dwarkesh's podcast recently, said that by '27, '28, we'll have hundreds of billions of dollars of revenue out of the model companies and the agent companies. And he forecasts $1 trillion by 2030. Right. This is non-infrastructure AI revenue.

56:35

Speaker B

I think he's being very conservative. I believe Dario and Anthropic are going to do way better than that.

57:01

Speaker C

Wow.

57:08

Speaker B

Way better than that.

57:08

Speaker E

Wow.

57:09

Speaker A

30 billion to a trillion.

57:10

Speaker B

Yeah. And the reason for that is, the one part that he hasn't considered is that I believe every single enterprise software company will also be a value-added reseller of Anthropic's tokens, a value-added reseller of OpenAI's. That's right. And that part of their business is going to

57:11

Speaker E

get this logarithmic expansion.

57:34

Speaker B

Yes.

57:35

Speaker D

Yeah.

57:35

Speaker B

Their go to market is going to expand tremendously this year.

57:36

Speaker E

What do you think in that world is the moat? What's left over? I mean, you have some moats that, frankly, I think as this scales are almost insurmountable. The best one that nobody talks about is probably CUDA, which is just an incredible strategic advantage. But in the future, if a model can be used to create something incredible, then the next spin of a model can be used to maybe disrupt it. In your mind, for these companies that are building at that application layer, what's their moat? How do they differentiate themselves?

57:41

Speaker B

Deep specialization. Deep specialization. I believe that they're going to have general models that are connected into the software company's agentic system. Many of those models are cloud models and proprietary models, but many of those models are specialized sub-agents that they've trained on their own.

58:12

Speaker E

Right. So the call to arms from you for entrepreneurs is: look, know your vertical.

58:37

Speaker B

That's right.

58:41

Speaker E

Know it deeper and better than everybody else.

58:42

Speaker B

That's right.

58:45

Speaker E

And then wait for these tools because they're catching up to you and now you can imbue it with your knowledge.

58:45

Speaker B

That's right. And the sooner you connect your agent with customers, that flywheel is going to cause your agent to get hyper.

58:49

Speaker E

It very much is an inversion of what we do today, because today we build a piece of software and we say, what generalizes? And then let's try to sell it as broadly as possible and then sell the customization around it, and we wrap it, and in fact.

58:59

Speaker B

In fact, exactly right. We create a horizontal. But notice there are all these GSIs and all of these consultants who are specialists, who then take your horizontal platform and specialize it.

59:09

Speaker E

Exactly.

59:23

Speaker A

And that's arguably a five or six times bigger industry, the customization.

59:24

Speaker B

It is, absolutely. Yeah, it very much is. That's right. So I think that these platform companies have an opportunity to become that specialist, to become that vertical.

59:27

Speaker E

Right, yeah.

59:37

Speaker B

Domain expert.

59:38

Speaker A

You know, I just want to give you your flowers. I think it was three years ago you said you're not going to lose your job to AI, you're going to lose your job to somebody using AI. And here we are. The entire conversation has revolved around this concept of agents making people superhuman, the business opportunity expanding, entrepreneurship expanding. You actually saw it pretty clearly. Has it changed your view? Well, I think you can hold space for two ideas. One is, there are going to be

59:38

Speaker E

a lot. That's viral, J-Cal. No, no, but that's just because

1:00:09

Speaker B

he doesn't hang out with me enough.

1:00:12

Speaker A

I mean, we've hung a little bit.

1:00:15

Speaker E

Be careful what you're not talking about.

1:00:16

Speaker D

He will show up at.

1:00:18

Speaker E

He'll follow you around.

1:00:19

Speaker B

I'm not asking for it. I'm going to follow you around. I'm not asking for it.

1:00:21

Speaker A

You can come with me and Tucker. We ski in Japan every January.

1:00:24

Speaker E

Love it.

1:00:28

Speaker A

Tucker will go. Road trip. There is going to be job displacement. And then the question becomes, you know, do those people have the fortitude, the resolve, to then go embrace these technologies? We're going to see 100% of driving by humans go away. That's a beautiful thing in the lives saved. But we have to recognize that's 10 to 15 million people in the United States who are employed in that way. And so that is going to happen.

1:00:28

Speaker B

Yes, I think that jobs will change. For example, there are many chauffeurs today who drive cars. I believe that many of those chauffeurs will actually be in the car, sitting behind the steering wheel, while the car is driving by itself. And the reason for that is, remember what a chauffeur does in the end. These chauffeurs, they're helping you. They're your assistants. They're helping you with your luggage. They're helping you with a lot of things. And so I wouldn't be surprised, actually, if the chauffeurs of the future become your mobility assistants, and they're helping you with a whole bunch of other stuff on the way to the hotel while the car's driving by itself.

1:00:56

Speaker C

The autopilot in planes created a lot more pilots and didn't take any of the pilots out of the cockpit, even though the autopilot is flying the plane 90% of the time.

1:01:36

Speaker D

And by the way, while that car is driving itself, that chauffeur is going to be doing a bunch of other work on his phone and he's going

1:01:45

Speaker B

to be arranging things, for example, coordinating a bunch of things for you.

1:01:51

Speaker D

It's all. The pie just grows in

1:01:56

Speaker B

a way that. One of the things is that, yes, every job will be transformed. Some jobs will be eliminated. However, we also know that many, many jobs will be created. The one thing that I will say to young people who are coming out of school, who are concerned, who are anxious about AI: be the expert at using AI.

1:01:57

Speaker A

Yes.

1:02:17

Speaker B

Look, we all want our employees to be expert at using AI. And it's not trivial. Not trivial. And so knowing how to specify, not to over-prescribe, leaving enough room for the AI to innovate and create while we guide it to the outcome we want, all of that requires artistry.

1:02:18

Speaker E

You had this great advice when you were at Stanford, I think it was, which is "I wish upon you pain and suffering." Do you remember that? Yeah. Fantastic. What's your advice to young people around what they should be studying? If they're about to leave high school, because those are the kids that are really AI-native, they haven't made a decision about college, what to study, whether to go to college at all. How do you guide those kids? What would you tell them?

1:02:41

Speaker B

I still believe in deep science, deep math, language skills. Because, as you know, language is the programming language of

1:03:06

Speaker E

AI, the ultimate programming language.

1:03:16

Speaker B

And so as it turns out, it could be that the English major could be the most successful.

1:03:18

Speaker E

Yeah.

1:03:22

Speaker B

And so I think I would just advise, whatever education you get, just make sure that you're deeply, deeply expert in using AIs. One of the things that I wanted to say with respect to jobs, and I want everybody to hear it: at the beginning of the deep learning revolution, one of the finest computer scientists in the world, whom I deeply respect, predicted that computer vision would completely eliminate radiologists, and that the one field he advised everybody not to go into was radiology. Ten years later, his prediction was 100% right. Computer vision has been integrated into all of the radiology technologies and radiology platforms in the world. 100%. The surprising outcome is that the number of radiologists actually went up, and the demand for radiologists has skyrocketed. The reason for that is because everybody's job has a purpose and its tasks. The task that you do is studying the scans, but your purpose is to help the doctors help the patient, diagnose disease. And what's surprising is, because the scans are now being done so quickly, they can do more scans, improving healthcare.

1:03:23

Speaker A

Yes.

1:04:46

Speaker B

But doing more scans more quickly allows patients to be onboarded a lot more, treated a lot more quickly. And as it turns out, because hospitals enjoy making money too.

1:04:47

Speaker E

Yeah, right.

1:05:00

Speaker B

They're doing more scans, they're treating patients earlier, the revenues go up, and guess what?

1:05:01

Speaker D

Perfect.

1:05:08

Speaker C

And a country that grows faster, productivity increases, a wealthier country can put more teachers in the classroom, not fewer teachers in the classroom.

1:05:10

Speaker B

That's right.

1:05:18

Speaker C

You just give every one of those teachers a personalized curriculum for every student in the room. It makes them all bionic and leads to a lot more.

1:05:18

Speaker B

Every single student will be assisted by AI, but every single student will need great teachers.

1:05:25

Speaker E

Yeah.

1:05:31

Speaker A

Amazing. Jensen, congratulations on all your success. And really this is an incredibly positive, uplifting discussion. We really appreciate you taking the time for us.

1:05:31

Speaker E

He is the steward we need.

1:05:40

Speaker A

You are, you are. I think you need to be more vocal. I'm being very, very vocal about the positive side of it. I think there's so much doomerism.

1:05:42

Speaker E

But I also think it takes humility to have this level of success and be humble about it. "We're making software, guys."

1:05:49

Speaker A

Yeah.

1:05:55

Speaker E

And I think that that's actually really healthy for people to hear. We have done this before. We have invented categories and industries before. We don't need to go to this scare mongering place. It does nothing.

1:05:56

Speaker A

And we get to choose.

1:06:08

Speaker C

Right.

1:06:09

Speaker A

We have autonomy and agency. We get to pick how to.

1:06:09

Speaker B

We sure do.

1:06:12

Speaker A

Okay, everybody, we'll see you next time.

1:06:13

Speaker B

Thank you.

1:06:15

Speaker A

On the All-In interview.

1:06:16

Speaker E

Okay. Well done, brother. Thanks, man.

1:06:17

Speaker B

Good job.

1:06:20

Speaker E

Thank you, sir.

1:06:21

Speaker A

That was awesome.

1:06:22

Speaker B

Good, good.

1:06:23

Speaker D

Appreciate you.

1:06:23

Speaker B

You guys are awesome. Look at this. Look at this big crowd behind you guys, man.

1:06:24

Speaker A

I think they're here for you.

1:06:28

Speaker B

I'm going all in.

1:06:35