OpenAI Podcast

Episode 12 - State of the AI Industry

50 min
Jan 19, 2026
Summary

OpenAI CFO Sarah Friar and investor Vinod Khosla discuss the current state of AI, predicting 2026 will be the year of maturing agentic systems and multi-agent workflows. They argue there's no AI bubble based on underlying demand metrics like API calls, with compute constraints being the primary limitation rather than market demand.

Insights
  • AI adoption follows a capability gap closure pattern - users currently utilize only single-digit percentages of available AI capabilities, similar to early email and mobile adoption curves
  • Enterprise AI transformation is driven by consumer-first adoption, with 90% of corporations using or planning to use OpenAI within 12 months
  • The correlation between compute investment and revenue growth is direct and measurable, with OpenAI showing consistent ratios across 2023-2025
  • AI will create a massively deflationary economy by the end of the next decade due to near-zero cost labor and expertise
  • Healthcare represents a major AI breakthrough area with 230 million weekly health questions on ChatGPT and 66% of US physicians using it daily
Trends
  • Multi-agentic systems maturing for enterprise applications like ERP automation
  • AI business models evolving from single subscription to multi-dimensional pricing including credits and licensing
  • Shift from simple Q&A AI usage to true task-working agents
  • Robotics industry projected to exceed automotive industry size within 15 years
  • AI-driven productivity gains of 27-33% in top-quartile companies
  • Movement toward people-plus-agents organizational structures
  • Deflationary economic impact from AI reducing labor and expertise costs
  • Multimodal AI capabilities expanding beyond text to voice, vision, and real-world interaction
  • Enterprise AI adoption following consumer preference patterns
  • Memory and personalization becoming key differentiators in AI platforms
Quotes
"Demand is limited not by anything other than availability of compute today."
Sarah Friar
"I think we matured in vibe coding in 2025. I don't think we've matured in agents. So agents, especially multi-agentic systems, will mature to the point of having real visible impact."
Vinod Khosla
"We've handed people massive intelligence, right? We've handed them the keys to the Ferrari, but they are only learning how to take it out on the road for the first time."
Sarah Friar
"The reality is what's the actual demand for AI, which is the number of API calls. What Wall street tends to do with it, I don't really care. I think it's mostly irrelevant."
Vinod Khosla
"I think the robotics business, both bipedal and other robots, will be a larger business in 15 years than the auto industry is today."
Vinod Khosla
Full Transcript
3 Speakers
Speaker A

Hello, I'm Andrew Mayne and this is the OpenAI podcast. Today our guests are Sarah Friar, CFO of OpenAI, and legendary investor Vinod Khosla of Khosla Ventures. In this discussion, we're going to talk about the state of the AI ecosystem, whether or not we're in a bubble, and how startups and investors can succeed as AI progresses.

0:00

Speaker B

Unlike something like Netflix, where they're limited by so many hours in the day, I think of it much more like infrastructure, like electricity.

0:21

Speaker C

Demand is limited not by anything other than availability of compute today. I think the conversation we need to have is what will people do?

0:27

Speaker A

2025 was about agents and vibe coding. Now it's 2026. What's the story of 2026?

0:40

Speaker C

I think we matured in vibe coding in 2025. I don't think we've matured in agents. So agents, especially multi-agentic systems, will mature to the point of having real visible impact. Whether you're an enterprise and you have multi-agent systems doing full tasks, like running an ERP system for you, you know, doing all the reconciliation every day, accruals every day, tracking contracts every day. I think that on the enterprise side. But today on the consumer side, it's still a hassle to plan a trip. That's a multi-agent thing that looks across a lot of different things, from your food preferences to the restaurant reservation to airline schedules to your personal calendar. Those will start to mature, I think, a year from now. So I'm pretty excited about that. I think models in robotics, and real-world models that go well beyond robotics, like general intuition, will all start to happen in the next year. So I think those are areas to look for. Then there are the usual functions, like memory in LLMs, continual learning in LLMs, reduction of the impact of hallucinations. Those are all areas, and I could go on. There's half a dozen areas in which AI doesn't do as well today that will start to be addressed.

0:47

Speaker B

Yeah. And I think at its baseline, what Vinod is saying is that '26 is the beginning of closing this capability gap. What we know is we've handed people massive intelligence, right? We've handed them the keys to the Ferrari, but they are only learning how to take it out on the road for the first time. We need to give consumers more and more easy ways to go from ChatGPT as just a call-and-response chatbot. Most people use it today just to ask questions. But how do we take it towards being a true task worker that books that trip for them, or helps them get a second opinion on what they just heard from their doctor, or enables them to create a menu for their diabetic child? How do we help them really move from simple questions into actual outcomes that make their lives better? And then on the enterprise side, it's that same continuum: how do we close the capability gap? One of the things we know from our AI in the Enterprise report, which our chief economist put out at the end of last year, is that companies on the frontier show about 6x the number of messages versus even the median corporation. That tells you there's 6x the usage from a company that's already on the frontier, and we know the frontier isn't even pushed to its max. So for us it's this focus of how we help consumers move along that continuum to true agentic task-working. And then for enterprises, how do we create a much more sophisticated, vertically specialized outcome that allows them to go from maybe a very simple ChatGPT implementation the whole way to something that's transforming the most important part of their business? For a healthcare provider, it might be their drug-discovery process. For a hospital, it might be the time to admit a patient and get that patient back into the community. For a really large retailer, it might be larger basket sizes, higher conversion rates and much happier customers. 
So it's the basics of closing that capability gap.

2:29

Speaker C

So I might add one other perspective. We've talked about the number of areas in which the technology and capability will advance. I would venture to guess that today, of the people using AI, whether it's personal or enterprise, some single-digit percentage are even using 30% of the capability of the AI. So the percentage of people who are using 30% or 50%, let alone 80%, of the AI's capabilities will keep increasing. I think that's a 10-year journey before people learn to use AI.

4:43

Speaker A

I've seen this. Some pundits confuse adoption curves for capability curves. And that's come up where you've seen me.

5:19

Speaker C

So that's the point I'm making.

5:27

Speaker B

And it's a force multiplier, because today we have over 800 million consumers using ChatGPT weekly. But you know, that number should be in the billions. And then what percentage of its capability are they using? It's like we've just turned electricity on in the home. We've wired up the home and they've turned on the lights, but they have no idea that they could now heat their home, they could cook, they could curl their hair. Right. There's so many things you now can do.

5:28

Speaker A

An analogy I've used is that email didn't really get much better between 1990 and the year 2000. Neither did mobile. But usage went way up. And the problem wasn't that we needed better email or better mobile; it's that people needed to learn all the things they could use it for, right?

5:54

Speaker B

Yeah. And in a more sophisticated way. Mobile is always an interesting one to me, because when mobile took off, people just took their desktop websites and turned them into mobile sites. And they were really hard to scroll, but I guess you at least had them in your pocket. But then you realized you had a GPS, so now you could have Uber and do things with location. Or you had a camera at your fingertips. Okay, so now, yeah, I can take photographs of all my friends, but I can also snap a check and deposit it into my bank account. Although we should fix the whole paper-check thing. But that's an aside.

6:08

Speaker A

It still seems wild: just take a photo of this, and now I get money in my bank account.

6:38

Speaker B

Yeah, but that all existed the minute mobile was available to us; it just took human ingenuity coming to work on it. So I think you're right. I don't even know if we need more intelligence than we have today to vastly increase outcomes. But of course the models are going to keep getting more intelligent as well.

6:42

Speaker A

You mentioned health, and that's one of the really high-stakes things we think about, probably the most important thing. And it's kind of fascinating to think about: just a few years ago we got ChatGPT and were using it for very simple applications, and now we're trusting it with HIPAA-compliant data. Do you look at that as a marker of how fast or how well things have been accelerating? Are there other markers like that you think about, to say, okay, now we know we're at some new level?

7:02

Speaker C

Health is clearly one of those areas. I've long believed AI will revolutionize health by making expertise a commodity in all areas of health. The problem with health is regulatory. First, there are constraints on what AI can do. It can't legally write a prescription, even if it's better than human beings at writing a prescription. That is not only the FDA; it actually goes beyond the FDA into the American Medical Association, which institutionally controls that function. There will be incumbent resistance in a lot of areas; we can talk about it if you like. Diagnosing is still a constraint, because the FDA controls that and there's no AI approved as a medical device yet. Fortunately, this administration is doing a very good job of moving quickly and taking the appropriate level of risk, so I'm pretty pleased to see what's happening there.

7:27

Speaker B

On the health front, we see in our data that 230 million people every week ask ChatGPT a health question. 66% of US physicians say they use ChatGPT in their daily work. I'll tell you, at a personal level, my brother is an HDU doctor in the UK, so his job is: you hit the ER, they don't know how to triage you, so they send you to him. You kind of don't want to show up to him. He's very good at what he does, but it means you're not in good shape. He's expected to have an almost encyclopedic knowledge of every disease that ever existed. So I always give the example: he works in Aberdeen, in Scotland. If you showed up with malaria, he will not think of that. That is not in his pattern recognition. And yet that could have happened. I don't know, you went on vacation in the summer, you got bitten by a mosquito, boom, you're showing up in an ER in Aberdeen. What ChatGPT can do, or what the model can do, is really act as a great augmentation to the doctor, which is why I think 66% of them are using it. And that number's only growing; it's probably already much higher. So I think it's just a great example of something like health. We're getting the benefit of our doctors always having the latest research in front of them, always the latest known interactions, say, between someone's drug regime and what they're living through and experiencing as individuals. But it also puts some independence back into consumers' hands. Now I get the opportunity to do some research ahead of time on what my symptoms might be saying, so I can have a much more educated conversation with my doctor. It allows me to maybe get a second opinion, or know that I want to go ask for a second opinion. We go very fast to these extreme places, but there are even simple things like: hey, I've got 20 minutes a day to exercise, I know I'm suffering from type 1 diabetes, what could I do in 20 minutes? 
Or: my daughter has an interesting issue with the food she eats. It used to be a super frustrating thing to go to a restaurant, because we'd have to ask the server so many questions. Now we can photograph a menu, and ChatGPT suggests what are likely the best dishes for her to order. Then we can have a terser but more productive conversation about what's going to work. It has just changed how we think about eating; it takes it away from being all about the food to why we're going out for dinner together. So I think there are all these examples of something like health. It's already happening and it's going to keep getting better and better. And then, to Vinod's point, the regulatory environment is going to have to catch up.

8:43

Speaker A

No matter what kind of system you're under, the cost of medical care is growing faster than the GDP of every country. And it seems like we needed AI, we needed it now, and it can be helpful. As you pointed out, it's the first time the cost of medical intelligence has dropped year over year. But that comes with a lot of demand for compute, and we have a lot more questions that we want answered. Certainly people can see the need for more compute, but the scale and scope at which OpenAI is investing in compute is incredibly huge. We're talking numbers that are just really hard to fathom. How does OpenAI determine that need? What are the metrics you're looking at to say, yes, we need to spend this much?

11:32

Speaker B

So first of all, we are trying to make sure we stay investing in compute to match the pace of our revenue. We've seen a really strong correlation between in-period compute and in-period revenue. I'll give you an example. If you just go back to '23, '24 and '25, our compute was 200 megawatts, then 600 megawatts, and we ended last year at 2 gigawatts. And it's really easy, because the numbers match up. We exited '23 at $2 billion in ARR: so 200 megawatts, $2 billion. We exited '24 at $6 billion: so 600 megawatts, $6 billion. And we exited last year at a little over $20 billion, against 2 gigawatts. Actually, it's been accelerating. Even if you just look at the slope of the line, it says more compute, more revenue. Now, there is definitely a timing mismatch, because I have to make decisions today about making sure we have compute not even in '26 or '27, but in '28, '29 and '30. If I don't put in orders today and give the signal to create data centers, it won't be there. Today we feel absolutely constrained on compute. There are many more products we could launch, many more models we would train, many more multimodality things we would explore if we had more compute today. Even in the last year, I think overall hardware investment globally has gone up by something like $220 billion; that's just how much actual spending has gone up. Chip forecasts have gone up similarly, by something like $330 to $340 billion. So it's not just OpenAI. The signal from the whole environment is that AI is real. We are in a paradigm shift. We need to invest to give people the intelligence they need to do all the things we just talked about. So back inside OpenAI, we do spend a lot of time going very deep on what our demand signal is in consumer, in enterprise, in developers. We think about the mosaic. First, at the base, at the infrastructure layer, how do we create maximum optionality? We want to be multi-cloud, multi-chip. 
And that gives us an interesting layer at the infrastructure level. One tick up, at the product layer, we also want to become more multidimensional. We used to be just one product, ChatGPT. Today we are ChatGPT for consumer, with all of the blades inside it, healthcare and so on, and ChatGPT for work. But we also have Sora as a new platform, and we have some of our transformational research projects. One tick up from that, we also have a business-model ecosystem that's becoming much more multidimensional. It began with a single subscription, because we'd launched ChatGPT and we needed a way to pay for the compute.
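Sarah's megawatts-to-ARR figures are easy to sanity-check. A minimal sketch, using only the rounded numbers quoted above (these are transcript figures, not audited ones):

```python
# ARR (billions of dollars) and compute (megawatts) as quoted in the transcript;
# treat these as rough, rounded figures.
history = {
    2023: {"compute_mw": 200, "arr_billion": 2},
    2024: {"compute_mw": 600, "arr_billion": 6},
    2025: {"compute_mw": 2000, "arr_billion": 20},  # "2 gigawatts", "a little over 20 billion"
}

for year, d in history.items():
    # implied ARR per megawatt of compute, in millions of dollars
    per_mw = d["arr_billion"] * 1000 / d["compute_mw"]
    print(year, f"${per_mw:.0f}M ARR per MW")
# Each year works out to roughly $10M of ARR per megawatt,
# which is the "numbers match up" correlation she describes.
```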

12:16

Speaker A

We now have multiple ChatGPT subscriptions.

15:15

Speaker B

By the way, I love you for that, multiple subscriptions. We went to the enterprise and had SaaS-based pricing. We now have credit-based pricing for places where high value is being found; people want to pay more to get more. We're beginning to think about things like commerce and ads. And then of course, longer term, I like models like licensing. For example, would we do licensing models to really align in, let's say, drug discovery? If we licensed our technology and you have a breakthrough, that drug takes off, and we get a licensed portion of all its sales. It's great alignment for us with our customer. So if you think about those three tiers, I actually think of it like a Rubik's Cube. We went from a single block, one CSP, Microsoft, one chip, one product, one business model, to now a whole three-dimensional cube. And one of the things I love about a Rubik's Cube, and I'm probably not getting the number exactly right, is that I think it has 43 quintillion different states it can be in. It always blew my mind when I was in university. So now just think about that cube spinning. We pick a low-latency chip, going alongside something like coding, at 5x the pace that people expect; we can charge a high-end subscription for that. It's almost like you line up the cube and you get three colors on one side. We could spin the cube again and say: low-latency chip, faster image gen, more free users come in. But that creates more inventory for, ultimately, perhaps an ads platform. So you can start to see how the goal in the last 12 months has been creating more and more strategic options that allow me to keep paying for the compute we need to really achieve our mission: AGI for the benefit of humanity.
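Sarah's "43 quintillion" aside is accurate. The standard counting argument for the 3x3x3 cube (corner and edge permutations and orientations, with a shared parity constraint) gives:

```python
from math import factorial

# Reachable states of a 3x3x3 Rubik's Cube:
# 8 corners can be permuted (8!) with 7 independently orientable (3^7);
# 12 edges can be permuted (12!) with 11 independently flippable (2^11);
# corner and edge permutations must share parity, hence the division by 2.
states = factorial(8) * 3**7 * factorial(12) * 2**11 // 2

print(f"{states:,}")  # 43,252,003,274,489,856,000 (about 4.3 x 10^19)
```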

15:18

Speaker C

So the way to simplify that is: demand is limited not by anything other than availability of compute today, whether it's Sora or more broadly. And then there's price elasticity, where demand for compute is effectively infinite. I think that's the way to think about it. We haven't even started to exercise the price-elasticity lever; we just can't fulfill demand, and it's limited by compute. So all the people talking about bubbles and things, I think, are on the wrong track. They have no sense of how large this change is, or how much more demand elasticity there is in the need for API calls.

17:09

Speaker A

As one of OpenAI's earliest investors, you made a bet early on; you saw where this was headed. But you saw the dot-com bubble, you watched what happened there. And you've also seen other things, like the mobile revolution; you've seen this happen in other areas. You mentioned the word "broadly." Is that where your conviction comes from, just how many different areas it touches?

18:02

Speaker C

Yeah, look, when we invested, we had one simple metric. There were no projections to look at, no product plans to look at, no ChatGPT to look at. It was very simply the idea that if we develop anything near close to human intelligence, let alone superseding human intelligence, its impact is going to be huge. It was this hand-wavy approach: the consequences of success are really going to be consequential, so why not try? There's also this funny notion of a bubble. People equate a bubble with stock prices, which have nothing to do with anything other than fear and greed among investors. I've always said bubbles should be measured by the number of API calls, or, in the dot-com bubble people refer to, by the amount of Internet traffic, not by what happened to stock prices because somebody got over-excited or under-excited. In one day they can go from loving Nvidia to hating Nvidia because it's overvalued. Those gyrations aren't reality. The reality is the underlying number of API calls. If you look at Internet traffic during the dot-com bubble, prices may have gone up violently and come down violently, but there's no bubble detected in Internet traffic. I would almost guarantee you won't see a bubble in the number of API calls. And if that's your fundamental metric of what the real use of AI is, the usefulness of AI, the demand for AI, you're not going to see a bubble in API calls. What Wall Street tends to do with it, I don't really care; I think it's mostly irrelevant. Great for press articles, because the press has to fill their column inches, but it's not reality. Prices of things aren't reality, whether stock prices or private-company valuations. The reality is the actual demand for AI, which is the number of API calls.

18:24

Speaker B

Right. And if I hark back to that moment you were looking at, 1999, the value people were getting from the Internet at the time was so young, so nascent, that you couldn't really see how it was changing their lives. I do think that with AI, it's happened so fast that the change is very real. As a CFO, forget about being the CFO of OpenAI, but as a CFO, what I see happening in my organization is AI truly taking over tasks where previously I would have had to keep adding more and more people doing fairly mundane things. Let's take something like revenue management. In a team that does revenue management, one of the things they do every day is download all the contracts we signed the day before, or through the week, and read all of those contracts to make sure there are no unexpected, effectively non-standard terms sitting in them. A non-standard term means there could be a revenue-recognition change that has to happen, and that's a very big deal for a finance team; it's the number one thing your auditors come in to audit you on. At the pace we are growing, the number of contracts every day is going up in multiples. So my only choice in a pre-AI world would have been to hire more people, and imagine what those people's jobs are like: you come to work every day and you read a contract, and then you read the next one, and the next one. It is so mundane, such drudgery. It's not why people went to school and learned the accounting field or thought about being a finance professional, but that's the kind of job we hand them as an entry-level job. Today, using our own tools here at OpenAI, overnight, all of those contracts are pulled out of a system and put into a tabular database, a Databricks database in our case. The agent, the intelligence, is able to go through it and show me exactly what is non-standard and why. 
It suggests what the revenue recognition therefore is. But it also suggests the insight, which is: should this term even be here? Did the salesperson just give away something they shouldn't have, in which case I go and coach them? Is it actually telling me something about my business that's starting to shift, in which case this non-standard term should become a standard term, and what I'm experiencing is a shift in my business model, which might actually be a good thing? Or perhaps I want to find a different way to get the customer what they're looking for, and the salesperson what they're looking for, but maintain my revenue recognition, my current business model. So now my more junior, entry-level people are over on the right side of that discussion, and they're refining the job they loved. That, to me, is why it's not a bubble: the value is real and tangible. It also means I can probably have a smaller team, a much higher-performing team, much higher morale on my team, better retention rates. All of these I can put into numbers to say my business is now healthier. I think that's the piece the press misses when they try to lead with the bubble conversation: we are investing with demand, if anything behind demand at the moment. A bubble to me suggests you're investing ahead of demand and there's going to be a gap.
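The contract-review workflow Sarah describes (pull contracts, normalize them into a table, flag non-standard terms for a human) can be sketched roughly as below. All names, including the STANDARD_TERMS set, are hypothetical; this illustrates the triage pattern, not OpenAI's actual system:

```python
# Hypothetical triage step: compare each contract's terms against a
# standard set and route anything non-standard to a human reviewer.
STANDARD_TERMS = {"net_30_payment", "12_month_term", "standard_sla"}

def triage(contracts):
    """Given [(contract_id, set_of_terms), ...], return (auto-cleared, needs-review)."""
    cleared, review = [], []
    for contract_id, terms in contracts:
        nonstandard = terms - STANDARD_TERMS
        if nonstandard:
            # a non-standard term may change revenue recognition,
            # so surface it to a person with the offending terms attached
            review.append((contract_id, sorted(nonstandard)))
        else:
            cleared.append(contract_id)
    return cleared, review

cleared, review = triage([
    ("C-001", {"net_30_payment", "12_month_term"}),
    ("C-002", {"net_30_payment", "custom_refund_clause"}),
])
print(cleared)  # ['C-001']
print(review)   # [('C-002', ['custom_refund_clause'])]
```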

20:51

Speaker C

And if you look at productivity numbers, they're going up in the companies that are adopting AI, especially the newer set of tech-oriented companies. The numbers are just absolutely amazing. One of my favorites is a little company called Slash, at about $150 million ARR. They have one person in accounting, only a controller, because they adopted an AI-oriented ERP system; they replaced NetSuite with it. It's just amazing what they can do, and the CEO was apologizing to me that he might have to hire a second person. And they're moving really rapidly. I just saw a story where somebody replaced 10 SDRs with one SDR plus AI, essentially, which the one remaining SDR supervises.

24:26

Speaker A

I've been hearing these stories too, where instead of hiring somebody in an area that doesn't create growth, companies can now, when they hire, hire people that create a lot more growth. And that's why you're seeing a lot of these tech companies build so fast.

25:22

Speaker C

You know that old phrase: the future is here now, but it's not evenly distributed. I see all these single points of huge productivity gains and efficiency gains, or agility gains, the ability to move faster. But a very small percentage of people, in the US or worldwide, have adopted these or even know they exist. And so this issue comes back to demand. I think some of these examples will spread to everybody over time, and you'll see exponential growth in adoption of these technologies. That's why I don't think demand is the question.

25:36

Speaker B

Yeah, Vinod is absolutely spot on. I think McKinsey did a study showing that for companies in the top quartile, productivity, as measured by any kind of financial metric you would pull, is up in the 27 to 33% range. That's a really meaningful jump. And I think where you were going is that it doesn't just mean fewer employees overall; there's definitely a place to shift people over into more growth-oriented jobs. I was hiking this weekend with someone who runs a very large consulting company you would all know of, and he was talking about how, in what he thinks of as his back-end systems, the leader there now talks about her organization as people plus agents. She has a one-to-five ratio: one person to five agents. But on the front end, they're actually back to hiring to grow, because clients need more help now to think about deploying AI. So it's actually shifting back, I would say, to the jobs people want to do, not the jobs that were maybe just open to them because more and more of the world had become so much information that people were parsing. Now we're finally back to machine and agent intelligence parsing it.

26:23

Speaker A

I want to touch back on the consumer side. You mentioned ads, and certainly the argument can be made that with ads you can increase the benefits to people: you can provide more services, more AI, you can help pay for the compute, and people get more out of those tiers. But that brings up the question of trust. When people first think about AI, even asking questions, they worry about what ChatGPT does with their information. Once you have ads in play, people worry about that, because it's often just a big question of how ads affect the rest of the product and the org.

27:37

Speaker B

Yeah. So I think you started in the right place, which is that today 95% of our users use our platform for free on the consumer side, and that's absolutely where our mission is. Right? AGI for the benefit of humanity, not the benefit of the humanity who can pay. So access is very important. From an ads perspective, number one, we have to make sure everyone understands you're always going to get the best answer the model can provide, not the paid-for answer. I think other platforms have fallen back into that, where you're not sure: is this a sponsored link, or is this truly the best outcome? We have a North Star, which is that the model will always give you the best answer. The second thing to understand is that there can be a lot of utility in ads. We want to make sure people know when they're working with an ad. But for example, if I do a search for a weekend getaway to, pick your favorite city, I don't know, San Diego, an ad for Airbnb might actually be very helpful, and you might even want to have a discussion with the ad, or with the advertiser in that case. In a ChatGPT setting, that's very rich, but you're clear that you're in an advertising setting. And this is where there has to be more innovation on what feels endemic to the platform, not just the old world of sticking banner ads on things. The third and final thing for me is, again, there always has to be a tier where advertising doesn't exist, so we give the user some choice and some control. And we're very mindful of your data. When we released Health, we were very clear: your data is off to one side, it's not being used to train on, and so on. Trust is everything for OpenAI, and we're going to keep standing by those principles, even when it comes to things like ads.

28:08

Speaker A

On the consumer side, is it going to be a world where you have a lot of subscriptions to different AI services?

30:01

Speaker C

I think you'll have every model; most people will have more than one subscription. Media is a good example: most people have more than one subscription in media, so that's a good proxy for consumer behavior. Different people will pick different choices, including free, ad-supported choices, as with media. So even the same services you can get paid or for free. I think you'll see a wide range of diversity.

30:07

Speaker B

How do you think, though, about the expense of going to a different platform? I like ChatGPT memory. I'm finding it more and more helpful, because as I ask about one thing, it remembers something we talked about maybe weeks ago, months ago. And Pulse, which is not widely distributed today, is the way I wake up in the morning now. It's so amazing, especially when you start connecting it to things like your calendar. It's not just saying, you seem very interested in AI data centers, which clearly means it must think I'm the most boring person on earth, because that's what I see a lot of. It also says, hey, on your calendar you're going to be sitting down with Vinod today; remember a couple of these things. It's so helpful. But if I'm multi-homing, I'm losing that benefit, which is not the same as if I subscribe to the Wall Street Journal, the Economist and the New York Times. They're not really losing out, and I'm not losing out, if I go read in other places.

30:38

Speaker C

Yeah. So I do think memory is an important question, whether there'll be one purveyor or more than one purveyor of the models. On each model, there'll be multiple services that may offer different trade-offs. So whether you're talking health or media, even on the OpenAI models, there are multiple people providing services. That's what I was thinking of as multi-homing. But obviously, I don't think OpenAI will be 100% of the market. I hope so.

31:35

Speaker B

I was going to say I hope.

32:11

Speaker C

So too, but I'm okay with that.

32:12

Speaker A

It's an interesting business model. I think it's hard for people to wrap their heads around, because Netflix is a great company, but there are only so many hours on the planet that people can watch Netflix. Right? And mobile is great.

32:14

Speaker C

Right.

32:25

Speaker A

I only need so many minutes of mobile per week or whatever. But with AI and intelligence, you can have more intelligence. I can buy more and get better answers and do more. I'm still trying to wrap my head around where that goes. The idea that you start at one free level, use it for free, then you go to a small tier, and then as it becomes more useful, you keep increasing that. Where does it go?

32:26

Speaker B

So unlike something like Netflix, where there are only so many hours in the day, I think of it much more like infrastructure, like electricity. How much electricity do you use in a day? I don't know. I walked into a room today and there was a fan blowing; it was really nice, it cooled the room down. There are lights on around us right now. I charged my phone overnight and it worked for me all day. The state we live in today is much more: I call on ChatGPT, I invoke it. Intelligence just being baked in, as opposed to invoked, I think will be the big change over the next couple of years. You'll look back and it'll feel a little toy-like, that we used to do this thing, and instead it will just be everywhere around us. That's not quite answering the question you're asking, but it's why I don't get so caught up in there being only so many hours for people to do things, because almost everything I do in life requires intelligence. I'm walking around, hopefully, with some intelligence up here, and if I can get that augmented, I think it's going to surprise us. As we were talking before we got started, you mentioned the moment on your phone when you suddenly discovered you had a flashlight and a camera. You say that and it's so obvious. And yet with ChatGPT, every time I discover what feels like almost a slightly cute use case, I'm so blown away by it. Yesterday morning, I do love the Economist, I wanted to read the editorial, and I didn't really have a ton of time because I was running upstairs to get ready. So I took a photograph of the editorial, because they're very good, they put it on one page, and I asked ChatGPT to read it to me. And it did. I was like, oh my God, this is awesome. So I just think there are all these moments where we're just getting started. And multimodal, I think, is probably the biggest, because phones taught us to talk with our thumbs.
And I think in this new world we're moving into, there's going to be new hardware that really helps us understand that we can talk, we can listen, we can see, we can write, and do all of these things in a very human way that we're just scratching the surface of.

32:49

Speaker C

Sorry, let me give you a different frame on that. I agree with all of it. Look at what we talked about earlier with the Internet and the bubble associated with it. What the Internet did is give you access to a lot more stuff, whether it was media, YouTube videos, TikTok, you name it, information of any sort. But it expanded to the point where no human can actually use the Internet fully. I think of AI this way: given you're limited to 8,000-some hours a year, some of which are meant for sleeping, it'll make your time much more efficient. The Internet exploded the information available to you to the point where you couldn't use it, and I think what AI will do is filter it to make every hour your most effective hour, if you know how to use it. So intelligence will reduce the world to what is most relevant to you personally. I may have a different set of priorities than Sarah, so I think of intelligence as summarizing the world to the most relevant things for me and the most relevant things for her, which are different. That's where there's almost unlimited capacity for intelligence to be used to reduce information, where the Internet exploded information.

34:58

Speaker B

Yeah.

36:32

Speaker A

We've talked a lot about the consumer side, and it feels like OpenAI is very much winning there. The question comes up about enterprise: how is OpenAI going to compete and win in that area?

36:34

Speaker B

So I think we're already winning in this area. What I see is that 90% of corporations say they either are using OpenAI or intend to use it over the next 12 months. Right. I think the second is Microsoft, and Microsoft uses our technology. So this is where the consumer is a really potent part of the enterprise flywheel. As I said earlier, back in the day when you first started bringing your iPhone to work and corporates didn't want you to do that, they discovered you can't say no to the tidal wave that is consumer preference. Something I'm already using, that I've already got in my pocket when I get to work, my expectation is that work is at least as good, if not better. And that's what's helped drive our actual enterprise business: the fastest company ever to get to 1 million businesses on a platform, and we did that in about a year and a half. But where to from here? Clearly we're just scratching the surface. Some of it is certainly meeting customers in terms of their vertical, so that we talk to them in their language, and learning this art of enterprise selling, which is: let me not tell you all about my products, but let me understand your problem. What is your board forcing on you, Mr. or Ms. CEO? What is the thing your customers most want that you can't deliver? Okay, let's start putting intelligence against that. We can then drop that down into anything from light vertical specialization to quite heavy vertical specialization, things like RL-ing models so they're very pertinent to a use case. Let's say in an energy company, it might be really understanding that particular oil well, or all the seismic data they have, to say what's the recovery we're going to get out of this gas field. That is deep specialization.
Then I think it goes the whole way to some of these big transformational research projects we've begun, where we're almost taking over someone's whole business and helping them rethink it in a smarter, faster, better way that ultimately drives their key business metrics. So it's a journey. I think most corporates start it with wall-to-wall ChatGPT; that's an easy starting point. They've done some coding, in many cases a lot of coding. When I talk to corporates, CEOs are starting to say things like, 60% of all my production code was built by an agent. I'm like, you didn't even know what production code meant 12 months ago, but now you're saying that. That's good, because it means you're tracking it. But on agents, it's just starting. When you go out and survey corporates, we only see about 14% of customers using something agentic today. Fourteen percent, when you consider what I just explained is happening in my finance organization. So I think we are just getting going. But I couldn't be more excited about the opportunity. It's huge.

36:47

Speaker A

Okay, but if I'm a startup and I look at everything OpenAI is doing, I might be asking: is there room for me? What do I get to do?

39:47

Speaker C

Look, models will keep getting better and do more and more, but I do believe there's lots of room to build on top. No one company can do everything on the planet. There are billions of people working whose jobs AI can help with, and I don't think OpenAI will specialize in every one of them. So the careful thing to do is be clear about where the models will go, OpenAI's or others', and what they will be able to do, and then how you use that to specialize in a more interesting way: some sort of specialization where you add something additional to the base model. And frankly, intelligence isn't the only thing needed to provide a solution; there's lots of other stuff that goes around a solution beyond intelligence. So I think there's lots of opportunity to build on top of these models, and the more powerful they get, the number of opportunities to add to them dramatically increases.

39:56

Speaker B

How do you think about this? So I think a lot about use cases where there's already a lot of data that's been aggregated, perhaps by that startup, by that company. Today, I think 95% of the world's information actually sits behind corporate firewalls, university firewalls, and so on. So even though we talk about the vast training that's occurred, again, we're just getting going. But I think about companies that have already built businesses that have aggregated that data, have access to it, and on top of that have managed complex workflows. I often give the example of our procurement system. A procurement system per se is not that complicated, but what it does very well is understand things like delegation of authority. It knows what the board has approved in terms of approval limits. So it knows that when this software contract comes in, if it's over X amount, only I can approve it, and if it's beneath that, a VP can approve it. It doesn't know that Andrew's a VP, but it knows to touch the HR system and check his level. And so the whole procurement flow can happen in a way where I have compliance and governance, and hopefully it makes the whole company run faster. Those are the places I get interested in for startups: where have you got access to unique data with a complex workflow? It feels like there's more of a moat around that, so we want to work alongside you, but the general-purpose model is not going to do all of that itself.

41:07

Speaker C

Yeah, no, I completely buy that. I think there's lots of opportunity. I've seen quite a few startups around just permissioning around data, like who can access what information, for example. I've seen a whole bunch of startups around customizing the models to each company for their history and their priorities, and...

42:35

Speaker B

And the whole identity side of agents. I think we're just starting to understand both the risk that can happen when you have agents talking to agents talking to agents, but then also how you're going to permission that, and then start to think about agentic commerce. The complexity that's coming is also quite big. So as for suggesting there's no more opportunity as a startup: I think it's probably never been more interesting or fun to be a startup.

43:00

Speaker C

Yeah, I think there's more opportunities than there've ever been.

43:27

Speaker A

What are you looking for now? What gets you excited when you talk to a company?

43:30

Speaker C

Well, the hardest thing is great people, always. But I think the other thing that has been in short supply is agency, where people have the agency to make things happen. That again comes down to people, but there's so much opportunity. I think traditional things like knowing a space or having experience in a space are much less relevant now; it's more about agency. And we've not talked about the whole new world of robotics and real-world models and all that. That's a whole space by itself that we probably don't have time for.

43:33

Speaker A

Whoa, do we? We've got time.

44:15

Speaker B

I've got plenty of time.

44:17

Speaker C

I'd love to.

44:18

Speaker B

I want to go there.

44:19

Speaker A

Yeah. Because we've talked about where we're headed here, and you famously talked about the world of 2050. Things are moving fast; models are getting faster and more capable. Where do you see things like robotics headed?

44:20

Speaker C

Well, two years ago when I gave a talk at TED, I said the robotics business, both bipedal and other robots, will be a larger business in 15 years than the auto industry is today. We think of the auto industry as one of the larger businesses on the planet, and this other thing will be larger. I don't think there are very many automotive companies thinking of the world that way. They're thinking about how to use a robot in their assembly line, not that this business will be larger than their current business, all driven by the intelligence of robots. So there are massive opportunities for startups there, and we are seeing a lot of activity.

44:33

Speaker B

Yeah. And I think sometimes we underestimate. When you think about robots in the home, right, it's a very fertile area, but no one's really had a breakthrough; there are so many different issues around the complexity. Actually, sometimes the more time I spend in AI, the more respect I have for the human condition, in a way, because of our ability to move around the world. If you watch people in robotics getting so excited about a robot folding clothes, well, perhaps for my 18-year-old I'd be just as excited, but for the average human, I assume they can fold clothes.

45:21

Speaker A

But I think the "hello world" of robotics now is folding clothes.

45:56

Speaker B

But you do get a little stuck in your head that they have to somehow be a human. It turns out there may just be these breakthrough moments, like, for example, companionship in the home. Right? We have an aging population, and when we talk about epidemics in the world, loneliness is probably one of the biggest. What does someone living alone, who maybe has just lost a spouse, value most? Just someone to converse with in a way that feels intuitive and human. We see people using ChatGPT more and more for this kind of conversation. So is there a humanoid-esque breakthrough? Or does it turn out you don't need it to make coffee or fold clothes or do the dishes, although that would be good too? It might just be something a little more simple that still adds a lot of value, and is just the first crawl of crawl, walk, run toward this future Vinod is talking about, where that whole complex is X times more valuable than anything we ever saw in automotive.

46:00

Speaker A

I think it's interesting, because we can sort of think of our present and put robots in places and have them do things like that. It's really hard to think about what happens when you really have extremely low-cost labor, manufacturing, et cetera, and then the world you can build from there, because we look at robots as a good solution for the world of now. But when the cost of labor drops, the cost of building a wonderful, state-of-the-art assisted living facility where you can put a bunch of people together drops too. That's the thing that's hardest for me: to really think about what it means when you lower the cost. We've lowered the cost of intelligence. What does it mean when we really lower the cost of labor?

47:00

Speaker C

My personal view: sometime, probably toward the end of the next decade, you'll see a massively deflationary economy, because labor will be near free, expertise will be near free, and most functions will be almost zero cost. How exactly it plays out is a little hard to tell, how purchasing power versus production of goods and services plays out. But I expect we'll see a hugely deflationary economy at a level people aren't planning on. So there are social aspects of the adoption of AI that haven't been handled yet. I think the conversation we need to have is: what will people do? I get asked that a lot. How will people make a living? I think the minimum standard of living governments can assure people is going to be much, much higher, without needing to earn an income. I mean, I can't imagine that much better primary care, like 10x more primary care than today, doesn't come to cost a dollar a month. I have a hard time imagining how that doesn't happen. It will be true that it costs almost nothing to have free primary care, free education, almost personal AI tutors for every child. That's already happening. So there's a set of services that'll be free. There are some hard nuts to crack. Housing is the hard one: people in the bottom half of the US population spend 40-some percent of their income on housing and food. There are some hard nuts, but I do think both are addressable by robotics and better approaches.

47:35

Speaker A

Well, this has been a very interesting conversation. I'm excited to see where things are headed. Thank you both for joining us here on the podcast.

49:32

Speaker C

Thank you, thank you.

49:39