ChatGPT – The Super Assistant Era | BG2 Guest Interview
Nick Turley, Head of Product at OpenAI, discusses ChatGPT's evolution from a demo to 900 million weekly active users, focusing on retention strategies, the transition to proactive AI agents, and the challenges of scaling with GPU constraints. He covers pricing evolution, the recent Code Red focus initiative, and OpenAI's vision for building a 'super assistant' that can take actions beyond just answering questions.
- ChatGPT's retention curves are 'smiling' - users who churn initially often return months later as they discover new use cases for AI in their lives
- The next evolution beyond chatbots will focus on AI taking actions and being proactive rather than just responding to prompts
- GPU allocation is now more constrained than human resources, creating zero-sum trade-offs between serving existing users and developing new capabilities
- Power users are essential for product discovery because they reveal what's possible with the technology before the company can discover it internally
- The transition from unlimited subscription models to usage-based pricing is inevitable as AI capabilities scale with test-time compute
"ChatGPT originally was entirely free and the reason for that was that it was intended to be a demo and we were going to wind it down after a month."
"We've got about 10% of the world coming to us now, 90% left to go. There's so much more opportunity."
"I care a lot about long term retention and I would put all my points there because I'm really proud of the retention stats we have."
"GPUs are zero sum and if you don't have more GPUs, you really have to figure out how do you make very, very hard trades."
"The most important perma skill in this era is curiosity, because if the machine can answer all your questions, you better have good questions."
ChatGPT originally was entirely free, and the reason for that was that it was intended to be a demo and we were going to wind it down after a month. We then realized that the demo went viral and people loved the demo, and it was actually a product. But we realized that to be a product, you can't take the product down every time you're at capacity. So we shipped subscriptions simply because it could shape the demand. It was a way of gracefully turning users away when we had to turn away someone.
0:00
You guys are at 900 million weekly active users now and that growth has been incredible. The next billion users, where are they going to come from?
0:26
We've got about 10% of the world coming to us now, 90% left to go. Right. There's so much more opportunity.
0:35
Well, Nick, so excited to have you here.
0:47
Thank you for having me. Apoorv.
0:50
You've had quite the journey from Germany to the US for Brown.
0:51
That's true.
0:56
Most recently at Instacart delivering groceries in 30 minutes to now delivering AGI to billions. I'm sure that was a plan all along.
0:57
Yeah, clearly. Total master plan.
1:04
Well, tell us about your journey. How did you get to OpenAI? I know it's a fun story. And your three and a half years or so at OpenAI, how have they gone?
1:08
The only through line in any employment decision I've made has been entirely people-based. So I don't claim any credit for joining OpenAI or predicting ChatGPT or anything like it. But I hit up someone who I admire a lot, who I got to know at Dropbox, Joanne, who worked here at the time. And I asked her to get me off the DALL·E 2 waitlist, and she told me I had to interview if I wanted to get off the waitlist. So I took the bait and got totally nerd-sniped in the process, and here I am.
1:16
There you go. The DALL·E 2 waitlist will get you.
1:52
It's a great recruiting tool.
1:55
Nice, nice.
1:56
We should do more waitlists, probably.
1:57
Yeah, yeah, yeah. Well, you know, the big super cycle we're in is ChatGPT now, I assume over a billion users on the monthly side, 900 million weekly active users as recently reported, up from zero three and a half years ago. If I imagine what the dashboard of Nick Turley looks like, it could have users, paying subscribers, daily active users, retention, engagement. I mean, there's like 15 things, maybe all of them. What is your North Star? What are you optimizing for? What is Nick looking at in his daily dashboard?
1:58
It's funny, right? Because it's such a young product. It's been, to your point, three and a half years. And this kind of question changes as you evolve and you grow up and ask yourself, what are we really building here? And to this day we want to build a super assistant that can actually help people achieve their goals. And ultimately the thing we care about is: is our product doing that? Is it actually helping you do the thing that you're coming to the product to do? And it's so different for different people, right? Some people are trying to get healthy, other people are trying to start a company, learn a new topic, do their taxes. There are all these different things that you might be doing. And the true measure of success is whether, within that, we're helping you do it. And obviously we look at WAU in particular because we want to know if you're coming back to the product. We look at retention, but we look at all kinds of stuff in aggregate, because really there isn't one single thing that you can optimize for.
2:37
If you were to allocate 100 units of points to these metrics, can you distribute the 100 units across these metrics in order of importance for you right this second?
3:38
It's a good question. I care a lot about long-term retention, and I would put all my points there, because I'm really proud of the retention stats we have. But ultimately the sign of durable value is whether enough people are coming back in three months, because that means you're really solving their problems. And I think things like revenue follow from that, versus trying to optimize for those things directly. And we've had a lot of success making very principled decisions on this stuff. One good example is that GPT-4 used to be behind a paywall because we couldn't serve it to everyone. And then we had GPT-4o, which was a total breakthrough in our ability to inference it. So we just gave it away for free. And that ended up being totally revenue-positive and retention-positive, because it just provided access to the tech. And I think when you make your decisions that way and you focus on the customer, you end up with a great product. And revenue obviously follows too.
3:49
Phenomenal. Yeah, well, it shows up in the numbers. You know, I posted this chart yesterday on the data that we have from a third party. The retention curves for ChatGPT are smiling. Look at that. Just like that. And that is a very rare occurrence, as we know. Why do you think, if you were to give us a narrative on that smile curve, why do these smile curves exist? What are you seeing in ChatGPT that has people who have maybe churned off for a couple of weeks or months coming back? And why are they coming back?
4:43
Look, there isn't one single thing. You know, the way that you build a retentive product is lots and lots of little things and really trying to make it better systematically. I will say that with AI, and in particular ChatGPT, I've found that it takes people some time to really understand all the parts of their life they can delegate. I think for many users it's a multi-month process to understand how this thing can help them and all the different ways they can plug ChatGPT into their life. But when I think about some of the breakthroughs and levers we've had, things like search and personalization have helped solve those user problems, because search provides way more daily value to you. It used to be that ChatGPT was a pretty worky product. We'd see usage go down on the weekend, we'd see usage go down during the summer months when a lot of people were off from work. And today we're mobile-first. The vast majority of usage is mobile, and we see all these personal use cases, and I think search was a big investment that got us there. And personalization makes ChatGPT so much more relevant for you, because it gets to know you over time and you get to know it. Those are two things that have materially moved the way that people come back to the product. But there's lots more to do, and as I mentioned, I'm not resting on our retention stats, even though we're obviously very proud.
5:17
Nice, nice, nice. And the other thing that I got wrong about ChatGPT, this is two and a half years ago: I was like, well, you know, let's look at who's going to win this consumer AI race. Typically these consumer markets are winner-take-most, winner-take-all. Look at search. Google has near 90%-plus market share, three and a half, four trillion in market cap.
6:42
Mobile.
7:04
Same thing with Apple. Social, same thing with Meta. I was like, well, in AI, Meta has all the distribution. Google's got all the distribution. They've got three, four billion users. It would be a flick of a switch for them to roll out their AI. But I was wrong. That's not what happened. It's ChatGPT. Turns out you guys are at 900 million weekly active users now, and that growth has been incredible. Clearly distribution was not enough, right? So the same question for distribution: what are the levers for us that have gotten us to this scale? Is it model quality? Is it product quality? Is it features? Is it the experience, or product improvements like memory and personalization or search? What would you say drove historical growth and success?
7:05
We've got about 10% of the world coming to us now, 90% left to go. There's so much more opportunity to reach more people and introduce them to the way that AI can benefit them. But when I look backwards, and I only say that because the next billion users might be very different in terms of how you engage and reach and provide value, it's been roughly one third, one third, one third. One third is classic friction-removal type of work. One of the biggest moments, when you look at pure impact, was removing the authentication wall. And Sam will say I told you so, because I think that was his feedback from like day one. He was like, you shouldn't have to log into ChatGPT. It's stuff like that that you do for any product, and it does matter. Some things never change, right? Then another third or so is what I would call core product investments. They're typically things that we've done together between research and product. Search and personalization are really good examples of that, where we came together and figured out not just the UI/UX evolution, but also how to post-train these changes into the model. It was really the moments when we came together. Another recent example is we have these writing blocks that render for queries where you're trying to write with the model, and putting really good craft into those experiences really matters. And our users love it. And then another third of the growth has been just model improvements. Both step changes, like going from GPT-3.5 back then to GPT-4, then going from GPT-4 behind a paywall to 4o suddenly available to everyone. But a lot of it is also the iteration that isn't splashy, that doesn't warrant a named release. I'm really excited about the updates we just made with 5.3, 5.4, etc., because that is when we take a lot of user feedback and methodically address it. And obviously that shows up in our retention as well.
So sort of one third, one third, one third: classic friction removal and access, core product investments, and then pure model improvements.
7:50
And so the question that I've really been waiting to ask you is: how do we get the next billion? Talk about that a little bit. There's a lot of fog of war, it seems like, at least from the outside. If I were a consumer today in the market to pick my super assistant, I would have a couple of great options. You know, Claude out there, having some great traction the last couple of weeks; Gemini, mega distribution, uber distribution; and us, probably the leading product today, at least in user numbers. The next billion users, where are they going to come from?
10:11
First of all, just to contextualize that goal: we care about two things. At the end of the day, obviously reaching more people is really important. It's the direct manifestation of our mission to the world, where the more people we can introduce to the benefits of AI, the better. But we're also really excited to go deeper. And that means taking the same billion users that find value in ChatGPT today and actually providing more meaningful value, actually helping them achieve their goals, not just answering questions. Right? So I'll talk about how we get to more scale, but I think it's important to remember that the way this technology is evolving is that we're going to go beyond pure chatbots pretty fast. I think, on scale, it's shocked me how many people have found value in ChatGPT as it works today, because I don't think delegation is a natural skill for most. ChatGPT is a power tool, right? You come to it, it doesn't tell you what it's for. You kind of have to discover it on your own, and you have to use it. And then you'll learn about this prompt that was really cool. And then maybe you're on Twitter and you learn about another one, or you're on Instagram and you learn another one. But the product is like a raw appliance. And I think one thing we really need to nail as we reach the next set of users is a product that has a bit more of an affordance, because most people are very, very busy. And everyone in the world, I think, has intelligence-constrained problems, problems that more intelligence could help with, but you need to frame that for people. And I still feel like we're a little bit too much like a computer terminal, and it needs to feel more like software, or an operating system of software. So that's one thing. Another thing that gets at the same constraint is beginning to be proactive, in a world where a lot of folks are too busy to delegate their problems to AI or don't quite know where to start.
I think being able to help you proactively is really, really important as well. But all of these are product evolutions that we could make on top of the current tech. And the thing that gets me particularly excited is productizing our next-generation tech, our reasoning models. Because the truth is, when you look at reasoning in ChatGPT today, it's relevant to a very small group of people. It's relevant for the people who are trying to get the most out of ChatGPT. But I fundamentally believe that reasoning is transformative. And if you can figure out how to productize reasoning in a way that works on people's behalf without them even knowing, and that looks very much like the model doing long-horizon tasks on your behalf, it doesn't mean you encounter the concept, it just means it's benefiting you. So there's so much work to do, and the product certainly has to evolve to be relevant for this kind of skill.
10:50
Yeah, one of the things that I've been hoping for for a while, and, you know, Brad made a bet two years ago: when can ChatGPT help me take actions? When can ChatGPT be more proactive? And I think his bet expired at the end of last year. So we're very curious: when is that coming? And I'll frame that for you, because with search engines and Google two decades ago, you got the 10 blue links, and you could have spent an hour getting the answer. You can now get the answer instantly with ChatGPT. And it feels like the next step is actions, 100%. And it feels like the next step is, you know, Pulse is a great proactive product. I have a Pulse that runs weekly, but what I would really like is, like, hey, Nick spoke about something, just make sure I know that Nick spoke about this. Or, hey, this XYZ thing happened that I cared about a lot. When is that going to get proactive? What is the modality going to look like?
13:42
Yeah, yeah. So there are two concepts. There's ChatGPT doing stuff rather than just answering, and then there's ChatGPT being proactive. And I think when you put them together, it starts feeling like a super assistant, because these things compound. On the action-taking piece: strictly speaking, ChatGPT can do stuff today. The action space is just very limited. It can search the web, which means it can use a search tool or a browser in the same way that a human would. It can make images, it can do all these things, but it clearly doesn't have the same action space that a human with a computer would have. And that is what we aim to build. Timing is everything on these bets, right? And I don't pretend to be great at timing either. When I look at past attempts that we've made, like the ChatGPT agent, for example, which kind of has capabilities like this, it was just slightly too early. The models weren't quite good enough to hit real escape velocity. And the problem, if you don't have escape velocity, is that users don't learn to trust it. They don't even try. So when you look at a lot of things people were doing in the original version of the ChatGPT agent, it was the things that happened to work, like migrating your file server into the cloud or something like that. Useful stuff, but very niche. As this stuff gets better, we just have to get it to a point where people try to use it for real, meaningful problems in their life, because then we can start hill-climbing. And this has been the magic of ChatGPT, where ChatGPT, upon launch, was good enough to get real attempts at use cases, even if they didn't initially work. ChatGPT was a pretty bad writer originally. It was a bad software engineer. But people tried and got enough value out of it that we could take those use cases and make them great. And I do think we're about to get to that point with general-purpose agents, where it works well enough that you get at least partial credit.
And because you're getting partial credit, you get really good tasks back. And then the magic begins, because once you have a set of use cases that you can climb the hill on, we can make them awesome. So on tasks, I think we're close, but I think even people inside of OpenAI would have had a hard time predicting exactly when this gets good. We've been excited about it for a while. On proactivity, Pulse was a really great first step, because what we wanted to build was a form factor where you're not prompting the model, the model's prompting you. For the reasons I described earlier: it's so hard for people to delegate and to figure out what their problems are. What if the AI understood your goals and the things you're interested in, and could just start being proactive on your behalf? Pulse is limited in the value it can provide because it's not connected to your life and it can't take action. So it's producing information for you, and people love that. I love that, I've got mine running too. But I think the magic begins when you have actions and proactivity, because then it can begin speculatively detecting: hey, you just landed where you were supposed to go, I'm going to call a cab for you. Or if you're at work: hey, I proactively ran this analysis because I saw your metrics dropped. So I think these things really compound, and we need to nail multiple of the building blocks to really achieve the transformation and the form factor that we hope for.
14:44
As you were answering those questions, I came up with 15 more questions for you. So I hope you have 15 more minutes. But okay, one by one. We'll start with what you said on actions and tasks. Got it, on timing, tough to say. But is there a shape or ordinality of tasks or agents where you think, hey, this is the kind of thing that's likely to come first, whenever it does?
18:07
I mean, the thing that's already come first is the domain-specific agents, right? If you look at what's happening in code, we're fully there. It's mind-bending, but we've got so many engineers who don't open their IDE, like, ever. And for me, as someone who used to code and then unfortunately got very, very busy, it's brought me back in the game. So Codex, and products like it, is clearly a product that has escape velocity, where people are absolutely using it for all kinds of agentic work. And if you just take what people are doing and make it work even better, you kind of get all the way there. I won't be surprised if you see this happen for other forms of quantitative knowledge work, just because it happens to have the properties that code has. It's testable; you know if it worked or not. It's very RL-friendly. But the domain-specific ones already work. I think the thing everyone's working toward is general-purpose agents that just kind of work for anything. And that's why I think you need to win in consumer, because it's very hard to train people into, like, okay, it can work. Deep Research was a consumer product, and it really was our first agentic thing out there. But I think what consumers want is: I can just ask it anything and it'll do what needs to be done, without any sort of retraining. We'll get there. It's just a matter of time, at least.
18:29
A logical goal is flight bookings. Totally. Restaurant bookings, shopping, all this stuff.
20:00
There are so many consumer problems, and those are just the type of things that you would kick off the minute you have proactivity. There are things you don't even think of as agentic tasks, like trying to get in shape. You don't think of that as a task you would delegate, unless you have a trainer, in which case you do, but most people don't, right? But if the AI knew that, it could totally start working in the background for you over very long periods of time: here's your fitness plan, okay, I actually signed you up for this thing. You could imagine it being quite helpful if it's aligned with your long-term interest.
20:08
You're going to give Ozempic a run for its money.
20:39
We've got to be careful what businesses we get into, but hopefully we can help.
20:43
That'll be great. Cannot wait. The second thing you said was being proactive, and that might require us to go beyond chatbots. What's an example of a modality that might take ChatGPT beyond a chatbot?
20:47
So chat will always be close to my heart. It's the way we grew up, and it's an important modality that's here to stay. I think it's less about chat and more about natural language to me: the fact that you can express yourself to the machine in ways that are very natural to you, whether that's text, whether that's voice, whether that's structured UI rendered by the model. That is just very, very powerful, and that's here to stay. SA Server, that's right. For those who don't know, that's the name of our code base, short for Super Assistant Server, because it's proof that this was always the vision. But the thing that will change, I think, is that chat is a great way of expressing your intent. It's a good way of communicating with the machine, but it's not a great output, where in many cases what you want back is an artifact. Here's your plan for your trip. Here is the analysis. Here is an outcome that I delivered for you. I just made you five bucks. This is what I want my AI doing for me. Yeah, totally, this is what people care about. And I think chat will always be there as the way that you disambiguate your intent and kick off the task. But I don't think it's necessarily the final deliverable. And I think that's the way in which we can evolve. So hopefully that's a very graceful transition, because I'm very lucky, and it's hard-earned, to have a billion people coming to you weekly for a thing that they love. But I think it's a great jumping-off point, because we have so much unsatisfied intent from people where they're clearly trying to do something, and ChatGPT is helpful enough, but it could be so much more helpful. And I think that's where we evolve.
21:03
Yeah. And you must be sitting on so much of this data, where people are showing up to ChatGPT and attempting things, as you said. Three years ago, they were at least making the attempt.
22:45
Yeah.
22:54
So you might have at least a frequency histogram of like, hey, here are all the things that people want to achieve with us.
22:55
We do. We have really awesome classifiers that run automatically. It's fully privacy-preserving, but it gives us a sense of what use cases people have. And it's important, right? Because when you make a new model, or a model update, you want to know what use cases just got better and what use cases got worse. And that's not always trivial to figure out unless you have really good analytics on the system. But so much of my learning is actually qualitative, where I just have a habit of reaching out to a fairly random set of users to figure out what they're doing. And I've never worked on a product where, three and a half years later, you're still learning every time, because usually by that time you know what the use cases are that your product can deliver on. But our tech is so unusual in that I keep learning about something crazy I didn't know was possible.
23:00
Wow, that's awesome. Basically, of a billion users, I suspect a small fraction of them are power users who are getting maybe thousands, maybe tens of thousands of dollars of value on their $200 subscription.
23:47
Yeah.
24:01
The vast majority are, you know, middle of the pack. And then a few, call them casual users, who are, you know, starting to use ChatGPT as search maybe, or teach me about AI, or help me with my homework. What is your focus across those constituencies, power users, casual users, and early users, or however you frame it? What is our focus for each of those three factions?
24:02
Yeah, yeah. Well, first of all, I feel accountable to our entire user base, in fact to our non-users too, because products like ChatGPT can have real externalities on all humans. But when I think about the way we build, it's really useful to imagine the extremes. One extreme is a user who doesn't care about AI at all, who has a busy life and needs to be convinced of the value that we can provide, because that forces you to really nail the interface and to expose the capabilities that are hidden in the model in a way that people can actually grok. Then the other useful extreme is our power user base, because power users are the users who teach us what's possible. It's actually impossible for us to do all the product discovery on our own, simply because of how empirical this technology is and how much you actually learn post-launch. So building for each of those extremes can be valuable. But our user base is incredibly diverse, and people have so many different use cases. And this is why I like to look at all kinds of different segmentations, not just frequency, but also what use cases you are coming to us for. But definitely huge variety in the ChatGPT user base. I look up to macOS, for example, as a case where it really works. For people who don't understand technology at all, it's entirely magical. But if you are a power user, you've got Terminal, you've got Settings, you can configure almost anything in macOS. And it's really beautifully done, where the complexity is progressively disclosed, so you can interact with it and love the simplicity of it all, but you've also got all the knobs. And developers love it, right? And so I think this is kind of the inspiration for how we want to be in ChatGPT. That doesn't mean we always live up to it, but it means that building for power users is extremely important. And that's not just a property that I think is aesthetically exciting.
It's also really important in AI, because it's the power users who show you what's possible. They are actually doing the product discovery, because it would be impossible for us, with such an empirical tech, to do all the product discovery on our own. So the type of user who subscribes to ChatGPT Pro, who used Codex before it quite worked, who is now the strongest advocate of tools like these and is teaching us what's possible, that is an incredibly valuable member of the community, and it might not show up in your weekly active users. It's just one number, right? But this is exactly why there isn't a single North Star, and you really need to take these different segments very seriously. So I love building for power users. And you asked about token consumption, et cetera. It's so fascinating to see. There are people who get incredible value out of these products, and watching what they do is very informative.
24:26
Okay, so we're very focused on the entire user base, and we learn a lot from the power users. You know, the other thing I might say is the power users right now are getting a lot of value, almost too much value. No such thing, no such thing. The analog that is most common is the Uber and Lyft of the 2015 era.
27:32
Right.
27:55
And you know, it took a while. But I know you're thinking about it a lot; I know you guys are thinking about pricing quite a bit. Maybe tell us a little bit about pricing. Right now, pricing is pretty simple. Is there a path for folks who are getting a lot of great value to price that product differently and meet them where they are? And the other way, on the other side?
27:56
There's no world in which pricing doesn't significantly evolve when the technology is changing this quickly. ChatGPT originally was entirely free, and the reason for that was that it was intended to be a demo, and we were going to wind it down after a month. We then realized that the demo went viral and people loved the demo, and it was actually a product. But we realized that to be a product, you can't take the product down every time you're at capacity. So we shipped subscriptions simply because it could shape the demand. It was a way of gracefully turning users away when we had to turn away someone. And it felt like the fairest and most equitable way of doing so: saying, hey, if you really need this product, pay a subscription fee and you've got it. Then we figured out how to make the product stable, and we had the choice of, do we keep the subscription thing or do we go back to free? And we realized we consistently had more tech than we could scale, GPT-4 being the first example, because we had way too many free users to serve GPT-4. So we put it behind the Plus plan. And so the way we stumbled into subscriptions was sort of accidental, by trying to just solve for the user. And it felt like the right way at the time to provide maximal access to our tech. Since then, we've had so many other breakthroughs, including test-time compute, where you can scale up intelligence kind of as much as you want, more or less. And it took us, and the entire industry, a little bit of time to turn that into product value. But we're here now, where our power users want to use more and more and more intelligence, and it's possible that in the current era, having an unlimited plan is like having an unlimited electricity plan. It just doesn't make sense, because people may need a lot of electricity and they're getting a lot of value out of it. There's a reason you can't buy that, right?
So obviously I want to be really thoughtful about the way that we evolve our plans and SKUs and subscriptions, but I'd be incredibly surprised if it didn't change, given the magnitude and profoundness of the technical breakthroughs that we've had and the product breakthroughs that follow.
28:18
Yeah. And, you know, relatedly, I imagine you're going to have something for the power users. What about the other side? How do we get the casual users into the flywheel and still monetize them?
30:28
As mentioned, our business model will evolve, and the North Star is access. We want to provide an offering that maximizes the number of people who can access our most powerful tools. I think for the longest time that has been subscriptions. Subscriptions have the downside that in many markets, people don't have credit cards, or they don't use credit cards to subscribe to software. And we're interested in other ways that can maximize access to the tech. Our ads pilots are in that spirit. We really view it as a tool for bringing ChatGPT and our intelligence most broadly to anyone around the world. And it is an example of how we constantly need to evolve and figure out the best way to bring the demand in line with what we are able to offer.
30:45
Makes sense. Makes sense. You know, the ad space has been a tricky one because, you know, Sam has historically expressed reluctance about ads and, you know, you've got to maintain a lot of trust while delivering that. So I guess what changed?
31:40
I think we've talked about this several times in my history at OpenAI, and every time it came up we said if we were to do ads, we'd have to be really thoughtful about the way we do it. So the first thing we did, starting at the end of last year, was to really engage the company on: if we put ads in ChatGPT, how should we approach it? What should the principles be? How do you preserve the things that are magical about ChatGPT while getting the benefits of ads, which is our ability to bring our most advanced tech to anyone, regardless of their ability to pay? And I really love where we ended up on the principle side. On the experience side, we're very, very early. But on the principle side, I feel really proud, because it's very important that the answers of ChatGPT be independent. As an example, respecting user privacy is very important, and there's a lot to learn from the way that tech has evolved over the last few years, or really the last decade. I like that the principles are out there before we've even really gotten started. We're very early with our pilots. It's kind of interesting. Obviously, we're very anxiously and eagerly looking at our support inbounds and data, and the most common inquiry about ads is not how do I disable ads or turn off ads; it's how do I run an ad? Because the entire ecosystem is really excited to be part of the story and to figure out a way to talk to ChatGPT users. So there's a lot more to come, but I'm very eager to get this right.
31:59
Yeah, I'm sure you guys will. Switching gears, Nick: something you and I have spoken about a little bit is distribution and partnerships. There were a couple of big partnerships last year: Apple, Reliance with Gemini. Those are two big user bases, right? A lot of India, a lot of the iOS users. Tell us a little bit about how you think about partnerships for ChatGPT to meet the user base, and maybe specifically on those two as well.
33:41
Look, I think partnerships are a great way to bring two products together and to expose something like ChatGPT to people who might not otherwise have encountered it. The thing that I care about most when considering something like a partnership is what is the user experience and can we make it amazing? Because at the end of the day, when you look at what's going on in the market, you can get users to click on things, you can get them to tap any sort of product, especially if it looks like a product they recognize, et cetera. But if the experience isn't truly awesome, people will churn, or they will at least not retain in the way that we've been lucky to retain them on ChatGPT. So for that reason I'm super interested in partnerships like that, but it needs to be great, and it needs to be accretive to the user. We are very lucky to have a great brand and a recognizable product for many folks. And I want to make sure that anything we do is accretive to all that.
34:50
Nick, you are a master of trade-offs. You must be making a lot of trade-offs right now. Tell us about some of the trade-offs you're making. Tell us about a trade-off that you might be making that people don't appreciate from the outside.
35:25
There are a lot of trade-offs indeed, for different reasons. What I encounter a lot is trading off delivering for people on the use cases that exist in the product today and making them better, versus productizing step-change technology that's going to generate a whole other set of use cases. Because when you think about how ChatGPT came to be, it was a totally open-ended product. It was basically a user experience around a technical breakthrough, and we couldn't have told you all the ways that people would find it valuable. But putting it out there was really important, because it allowed us, and the world, to discover what you can do. And then post-ChatGPT, we can obviously very systematically go and improve on the things that people actually want to use it for. And when you're at a company in this moment, where you both have such amazing traction with what exists today and the most mind-bending breakthroughs on the research side, the balance you have to strike is making the core product you have better today, with all the things that matter, latency, reliability, making the use cases really great that people come with, versus providing access to the step change. And we try to get the balance right, but we're a small team and we don't always get it right. And for that reason it's one of the most difficult trade-offs that I have to deal with.
35:40
Nick, I imagine one of the hardest trade-offs you guys make here is those GPUs that are melting between ChatGPT, Codex, and research. How do you guys allocate the GPUs?
37:14
That is a very good question, and I'll let you know when I figure it out. Just kidding. We've gotten a lot better at this. I really hope, by the way, to be at a point one day, and I've yet to reach that point, where we don't have to face this trade-off, because it's really painful to have real user demand for products that you can't serve. If you've only ever worked in software, that's an entirely unusual dynamic, where you're just limited by this zero-sum resource out there. Marketplaces have it, but I think pure software doesn't really have that dynamic, right? So one thing we try to do: obviously, we prioritize our existing users first. We want to provide a fast, reliable product, and that is critical and table stakes. Then when you look at new capabilities, the sort of naive business-school thing to do would probably be to look at revenue, incremental revenue per GPU or something like that. But this is where it's more an art than a science, because we often have new breakthrough capabilities that are entirely zero to one. Deep research was one of those. We couldn't have told you, is there going to be consumer demand for a research product? But if you don't productize it to find out, you will never know. So this is where we have to be a little bit thoughtful about how we balance things that are no-brainers, that people are really going to love, with things that are brand-new ideas. Then obviously on the research side, there's a reason that Mark has the job he has, because a big part of his job is figuring out what research to fund, and obviously GPUs are a big part of that. So it's a very nuanced topic that we're continuously getting better at. But for me the priority is always on our users.
37:27
Yeah, the other takeaway that I had is you don't have line of sight to a time when you won't have that problem.
39:13
It's been so fascinating because we obviously have been incredibly lucky to encounter more and more users who want to use our technology. But then the value that we're able to provide for each user is going up as well. And GPU consumption correlates pretty well with that value. And when you just look at token consumption per user, especially in the enterprise too, which is a massive opportunity, you see a lot of very GPU hungry workflows. And yes, demand keeps going up even as prices go down.
39:22
This is a fascinating insight. People used to think that humans were the finite resource, that you can't really make more humans. Well, it takes nine months and then 19 years. But you're saying that's actually a less finite resource than GPUs.
40:01
Yeah, I mean, on the human side you can hire more humans. And obviously we've been busy doing that and bringing the best talent in the world, across functions, to OpenAI. With agents, you can also get more leverage per human. You can make your humans very effective at their jobs so they can do more. But GPUs are zero sum, and if you don't have more GPUs, you really have to figure out how to make very, very hard trades. And I hate making hard trades for our users, hence the desire to have more GPUs. But it's useful to start with the most zero-sum trade-off when you do your planning. So I think starting by working backwards from GPUs is usually a pretty good idea.
40:17
Yeah, we have all these external data sources for charts of users and usage and activity and retention and all those things. What we don't have is tokens per user over time. And I bet that chart is like a steep line going this way.
40:57
I think internal usage is pretty good. Our internal employees are a pretty good indicator for what's about to happen. And yes, the charts are mind-boggling.
41:13
Yeah, yeah, yeah. Fascinating. Okay, a couple of quick ones on the present before we go into the landscape. Shopping: we just moved into a new house, we took some photos, and we were hoping that all our furniture would magically appear in the rooms that ChatGPT helped us paint. But, you know, a lot of recent updates on ChatGPT shopping. Tell us about it. What are you thinking on ChatGPT as a shopping assistant?
41:22
Shopping is one of those use cases that exists organically in ChatGPT today, and it works. You can ask ChatGPT about any purchase you might be planning and get pretty excellent advice. But it's also one of those cases where the experience that exists in chat today is not the perfect experience that you would want, because shopping is very visual, for example. You're going to want to actually see products and images and be able to compare and contrast, not just read walls of text. People care about the sources: where can I learn more about a given product, et cetera. And so there's a lot of work to do to make this discovery really, really good and allow people to use ChatGPT as an assistant to find the right product to buy. And that's where our focus lies: making that really great, and making that really great in a way that works for our retail partners as well. Because as I mentioned earlier, there's huge appetite from the ecosystem to be part of the ChatGPT journey. And nailing the discovery piece has been the most promising focus area to date.
42:27
Nick, on ChatGPT you must see a breadth of information, a breadth of use cases that people have with ChatGPT. Tell us something the world underestimates about ChatGPT, something you've maybe been surprised by or a listener might be surprised by.
43:03
There's been a real change in the way that people think of ChatGPT over the last year or so, where it's increasingly a true thought partner to people. It's not just a thing that answers your question; it's a sparring partner that you can actually think things through with. And that shows up in all kinds of domains, ranging from life advice, where if you've got a relationship problem, you can actually get a lot of value from ChatGPT helping you think through how to handle it and how to talk to your partner about it, all the way to a work setting, where you're working on an analysis, or you're trying to figure out how to frame something, or you're trying to build something, and ChatGPT really shows up as a second brain of sorts. I think that's qualitatively different in terms of the mental model it occupies with people, and you see it in the usage patterns and the use cases that exist. And I think the more we nail things like proactivity, which we talked about earlier, and tasks, et cetera, the more it's going to feel like a teammate in the workplace and like a super assistant at home. And I think that's going to meaningfully change the use cases that people come for.
43:22
Yeah. The most high-stakes thing I do with ChatGPT is, we have a new baby, and the baby's crying at three in the morning: ChatGPT, you know, what's going on?
44:37
First of all, congrats. Second of all, I've heard this from all the parents in my life: ChatGPT has become indispensable as a thought partner. And it makes sense, right? If you have a really specific scenario, or you think it's a scenario specific to you, ChatGPT really comes through and can help you build confidence. And I think that's such an empowering thing, right? And I imagine new parents aren't always the most confident about what is the right thing to do. And if ChatGPT can make you feel like you have agency and control, I think it's really valuable.
44:51
Yeah, it's huge. Well, thank you for ChatGPT literally getting me an extra hour of sleep every day.
45:28
It took a village. But that is a great metric. That should be the North Star metric: incremental hours of sleep. That's a great one.
45:33
Incremental hours of sleep. Incremental hours of joy.
45:40
There you go. I mean, you joke, but we talk about this a lot, because spiritually that is pretty close to what we hope we can do, right? Help you reach whatever you consider self-actualization.
45:42
Yeah.
45:55
Whether or not that's sleep or joy or any other goal you might have.
45:55
Yeah, yeah, yeah. Well, thank you, and thank you to the village. We're going to switch gears and talk about the landscape. Sure. There's a lot going on in the field, you know. How would you frame ChatGPT's differentiation to people out there? There are a lot of different products out there.
45:59
Look, it's the best time in history to be a consumer of technology. It is indeed, because you've got options and the competition is intense, and I think that's beautiful. And it's actually good for us too, because if you were to pre-mortem why a company like OpenAI does not achieve its mission, it's probably focus, because of the sheer number of opportunities that become possible when you approach AGI. And having competition and options out there, I think, forces us to focus on our customers too, and on things that really matter, which aren't always the most flashy things, right? It's latency, reliability, the quality of the user experience. So I think it's a really good thing. I think the biggest differentiation of ChatGPT is the team behind it, because we're not static. Anything we build will get copied, sometimes in ways that are high craft, sometimes in ways that are checkboxes. And it's really important to us that we evolve the category and build the super assistant that we've always imagined. And the reason I have confidence that that's possible, at a speed that outpaces the dynamic of being copied, is that we have an amazing team across research and engineering and design and all the different functions that it takes to make something amazing. And I think our unique ability has been to bring those functions together to build something that is at the intersection of useful and possible, right in that moment. So my best answer for you is: we keep pushing forward, and we hope to keep expanding what people think of this product.
46:20
As you know, last winter you obviously had what was called Code Red. Google had a great model, there was a lot of talk about it, Marc Benioff switching very vocally to Gemini, and you delaying ads and health agents and shopping, basically hitting pause on everything to make ChatGPT better. Talk to us about that moment, both what led to it and what was happening in that moment.
48:02
Yeah. So, first off, Code Reds are a tool we use to create focus. And as you can imagine, when you're in a place like OpenAI, and this is what makes it special to work here, there are so many different things going on. It's a research lab. We are pursuing many different ideas, right? And there have been these moments where we've wanted the company to come together to solve a problem across boundaries, no matter what your project might have been. At the end of last year we had one of those moments, where we felt like we needed to show up for our users. We needed to focus on the basics: reliability, performance, the way that talking to the model feels, making personalization really great, all these elements that our users care about. And I loved it, because it was really an opportunity to work with a bunch of folks who I don't normally get to work with on making the product great. We just exited the Code Red, which we knew we would, with the launch of 5.3, which is a great model for the everyday user. It's great to talk to. And 5.4, which is a workhorse if you're trying to do real knowledge work. And undoubtedly we're going to continue to use the tool of a Code Red whenever we want to create focus. But I'm excited, because I think ChatGPT is in a great spot.
48:31
Yeah. So Code Red is over now.
49:55
That's correct.
49:57
It's not the new normal.
49:57
It's not the new normal. We want it to be a special thing, but it is a tool I suspect we will continue to use.
49:59
That's great. That's great. And maybe, tangibly, could you point out how Code Red changed ChatGPT, or maybe the ops, how the team operates?
50:05
The thing I try to foster with the team is focus. So we are certainly more focused than we were six months ago on the things we really want to nail. And some of those things are very behind the scenes, like latency and reliability, those kinds of things. And some of those things are very considered efforts, like evolving ChatGPT into the super assistant. So focus is the main lasting artifact. And as you can imagine, it's hard to stay focused sometimes when there's so much going on in the space, but that's the hard job. And you asked me about trade-offs earlier; getting the team to focus on the things that really matter to users is certainly one of them. That's always worth it.
50:15
Yeah. You know, in the back of my mind as I ask you that question are all the other founders that are in the arena right now, and just a reminder that, hey, Code Red is a tool for you. Wartime, as we used to call it at Palantir, is a tool.
50:57
Yeah. I think every company does it differently in terms of how you get stuff done, but I think it's really valuable to have terminology that means something, that signals to people it's okay to drop your other stuff and it's okay to focus on this thing together, even if that wasn't your original job. So I think it works really well at a place like OpenAI, but I imagine startups would have an equivalent.
51:12
Yeah. You know, one of the things that got everybody's imagination on our team was what Peter was doing with OpenClaw. Incredibly potent to put all the tools together. Obviously Peter is a great builder; congrats on bringing Peter onto the team. Tell us a little about what Peter is working on, and when might the billions on ChatGPT have something to see there?
51:37
Well, first of all, I'm very excited for Peter to be here. I was excited to have another German speaker in the house. He's Austrian, I'm German, so we were exchanging Guten Morgens. But OpenClaw is so inspiring, because it brought to life in many ways a vision that we'd had in different forms, admittedly, around this kind of AI that is fully embodied, that exists across different UIs, that can do stuff for you, that has state, that has an interaction pattern that feels a little bit more like talking to a human. Because OpenClaw allows you to interact in a very, very natural way, where you can send many texts back and forth and it's very curt. And so there are a lot of elements of OpenClaw that I think were very clarifying to folks across the industry. But, you know, I'm super excited to just learn from Peter, bring that into the company, and figure out what we can do together. So there's a lot more to come.
52:00
All right, so now onto the most fun section, Rapid Fire. All right, you ready?
53:11
Sure.
53:15
Well, we'll start with my favorite game, which is long/short. Pick an idea, a startup, a business, a product that you love, that you're very bullish on.
53:16
If I were starting a company today, I'm really excited about these companies that are going into companies and getting extremely hands-on and doing effectively professional services with AI. Because we've saturated all the evals, and you need to get proximate to the problems. So it's those companies that I'm paying attention to.
53:28
Fascinating. So this is an example. This would be like, hey, you're going and either acquiring or going inside an operating firm that has scale and a humming engine.
53:50
Exactly.
54:00
And making that a more efficient engine.
54:00
Yeah. Or just like, you know, you're doing contracts for customers that have really hard problems, and you're actually going in and committing to solving the problem.
54:02
Outcomes.
54:12
Yeah, because there's a reason, I think, that we've made so much progress on math and coding but not on many other domains: those are domains we are proximate to, we as people who work in labs. There are all kinds of other domains that we are not as proximate to. And if you get proximate, I think you can build something transformative. And I think this is more important now precisely because the easy problems have been solved; the obvious problems have been solved by the models. Credit where credit is due: I think NotebookLM is awesome and differentiated and helps me learn new stuff. I think it's great.
54:12
It's so good. Yeah, it's so good.
54:47
I think this is the example of you can innovate and you can build something totally different. It's awesome.
54:49
Yeah, yeah, yeah. It's so good. Particularly for some more technical learning, I've found it to be a very approachable way to learn.
54:54
It's really cool. I feel like an underrated capability of AI is to just transform things into a different medium, and I think that's so important for learning. We just launched these dynamic math blocks, which allow you to visually understand math inside ChatGPT; learning is obviously a big use case for us too. And I think just being able to transform things from text to visual, and soon from visual to video, and all these different media, is amazing, because people have such different ways of processing information. Some people are auditory learners, some people are visual, some people like reading. So I think that's really magical and a great angle to take.
55:02
Yeah. Amazing, amazing, amazing. You know, one of the things I think about a lot is education, and education for kids now in school. The world's changing so fast; I'm not sure our education system is changing that fast. What advice would you have for students who are in school now, who might have to adapt faster than the system around them?
55:44
It's a really good question, and something that I've thought a lot about myself. I think the most important perma skill in this era is curiosity, because if the machine can answer all your questions, you better have good questions. And the only way to have good questions, I think, is to pursue the things you're actually excited about, from an early age and throughout your entire life. And I reflect on this because the only reason I'm here working on this stuff is that I thought it was neat when I got nerd-sniped in the interview process, right? It was like, this is so cool. And so no matter what you're doing, I think that's an important skill: to be curious and to stay curious. And I'm confident that if you foster that skill, you will know how to adapt to an evolving landscape of tools and AIs and jobs. So that would be my advice.
56:11
Yeah, curiosity has always been the perma skill. Our friend Bill Gurley wrote about it in his book Running Down a Dream.
57:17
I'm going to have to check that out.
57:24
Yeah. What is a job that gets more valuable, not less, as AI gets better as AGI arrives?
57:26
Well, I think maybe the easy answer is being an entrepreneur, because it's the best time to build ever, in terms of being able to self-actualize your idea. Maybe one that is non-obvious: I think writing actually is very important, and it's not because the AI can't write. AI will become amazing at writing, just like any other domain. But the skill of writing forces you to be very clear on what you have to say. And even though prompt engineering is obviously going to go away, and has gone away to a large extent, the idea of expressing what you want to a machine requires you to be a pretty good writer and a very precise writer. So I would say that any profession that involves very clear writing, and therefore clear thinking, is well set up.
57:37
Yeah, 100%, honestly. I mean, this is the whole thing about slop, right? There's just so much of it.
58:36
That's the other thing. I think there's going to be a permanent need for high quality, trusted, authoritative content and tools like ChatGPT can help you discover that content. But I think the need for amazing content is also here to stay.
58:43
And final question: what has been your "feel the AGI" moment? When did you feel it?
58:59
I've had so many, honestly, and it's definitely not stopped. A few weeks or so after I joined OpenAI, GPT4 had finished training, and I remember trying it out, and it didn't impress me at all, nor anyone else that week, because it kind of didn't work. And that's because we hadn't figured out how to post-train it. Seeing it go from "wait, is this really a thing, or was GPT3 kind of it?" to "wow, actually this is an entire step change," through what felt to me at the time, someone who didn't understand much about AI at all, like just some tweaks or a little bit of final-stretch work, was profoundly humbling. Because you realize that it might not look like we are close to really powerful, useful AI, but we probably are. There were two things GPT4 did that felt like AGI to me. One is it could do poetry, and I didn't think poetry was possible for an AI model. Just kind of fundamentally, philosophically, it didn't feel like it was in scope. And the other one was it could produce code that actually worked and compiled. And then my next moment, where I stared at the ceiling just in awe, was when I realized GPT4 could just simulate an entire computer terminal, like a full computer, with commands, et cetera. And I'm like, wait, how would this be imbued in a language model? There have been so many moments since then. Honestly, reasoning was a moment. One of the moments was when Mark and I were giving a demo of reasoning in front of the whole company. And this was a moment where we were still trying to find use cases that were hard enough for the reasoning to make a difference. We're way past that point now, but at the time, I think we had to do a puzzle in front of everyone. And one of the moments that made me totally feel the AGI was when we were in the middle of the demo and everyone started laughing, and I was like, wait, what is funny?
And then I stared at the screen, because we were showing this chain of thought as it was streaming out of the model. And the model swore and said, oh, damn, it may have to adjust, because it realized it had made a mistake in the puzzle. And the fact that it did that, but in particular the fact that it did that in a way that was entirely emergent from the RL process, completely blew my mind and made me feel quite humble about what else these models might be able to do. So that was one of those moments. And then most recently, watching people use Codex: watching people walk around with their computer open because they don't want the task to end, watching people who have never coded in their life make stuff and bring ideas to life. That feels like AGI. So honestly, it's just accelerating for me, and it doesn't wear off at all. And everyone has a different thing, obviously, but those were some of mine.
59:03
Yeah. You know, 10 years ago there was a product called Kite, I don't know if you remember it. It was for software engineers, like an AI coding product. That's when I felt the hunger for personal AI, and nothing happened for 10 years. And then everything happened in the last 10 months.
1:02:16
The timing thing is really hard, because it's actually quite possible to predict where things will end up, I think, in terms of the kinds of products and form factors you're going to have, but not to know when it happens. It's really hard for me to make statements on anything between sort of "eventually" and "in three months," because of all the ambiguity around it.
1:02:33
Well, that's a tight enough window. Between now and three months is a tight enough window.
1:02:53
Three months is pretty okay. I try to stick to the three-month plan, more or less, though my team would probably tell me we don't. But I try. But yeah, anything in between three months and "eventually" is difficult.
1:02:56
Yeah, yeah, yeah. Well, thanks for doing it. You've got a lot going on; this was a total treat. We're so excited to see all the great products you release for us. If we can do anything to be of help, let us know.
1:03:10
Awesome. Thanks very much. Thanks for having me.
1:03:23
Of course man. This was fun.
1:03:24
As a reminder to everybody, just our opinions, not investment advice.
1:03:38