FULL INTERVIEW: Sam Altman Responds to Anthropic’s Attack Ads, Live on TBPN
Sam Altman, CEO of OpenAI, discusses the launch of Codex 5.3, their best coding model that allows mid-turn interaction and faster performance. He addresses Anthropic's attack ads about OpenAI's advertising plans, defends their approach to ads, and discusses the future of AI agents, enterprise deployment, and the broader transformation of software development.
- There's a significant capability overhang in current AI models - the intelligence exists but better tooling is needed to harness it effectively
- The future of work will involve managing teams of AI agents at increasingly higher levels of abstraction
- Traditional SaaS companies face disruption but those with strong systems of record and willingness to transform may survive
- AI-generated content consumption remains challenging - people love creating but don't necessarily enjoy consuming others' AI content
- The compute shortage continues to be a major bottleneck, alternating with energy constraints
"We are not stupid. We respect our users. We understand that if we did something like what those ads depict, people would rightfully stop using the product."
"Every company is an API company now, whether they want to be or not."
"A joke that some people at OpenAI make is that soon the chart that matters is just going to be GDP impact."
"There's such a capability overhang that building better tools to let people do that will be very, very important."
"Using a deceptive ad to criticize deceptive ads feels. I don't know, something doesn't sit right with me about that."
Well, welcome our first guest of the
0:00
Show, Sam Altman, the CEO of OpenAI. He's in the restream waiting room. Welcome to the show, Sam.
0:02
How are you doing?
0:06
Welcome back.
0:07
Good.
0:08
Thank you guys for having me back.
0:08
Thanks so much.
0:09
Big day.
0:10
Big day. Kick us off with where should we start? Should we start with the model or Frontier?
0:10
Can we start with the model just because I'm having some fun with it?
0:16
Yeah, absolutely.
0:19
Break it down.
0:20
What'd you launch?
0:20
So we launched Codex 5.3. It is, I think, the best coding model in the world. We took a lot of the feedback that people had about 5.2 and 5.2 Codex and got it into one model. It is much smarter at programming, but it's also way faster. You can interact with it mid-turn. I think it's got a much better personality. It's really good at computer use. So it feels like a very big step forward. It was funny: as we were deploying it this morning, a couple of extremely expert users of these models noticed and said, man, something's really different with Codex, and they caught it mid-deploy. So I think you can really feel it quickly.
0:21
That's great.
0:56
Oh, you're saying people outside of OpenAI, just everyday users?
0:56
Yeah, within about an hour of us putting it out.
0:59
We, you know... talk about interacting with it mid-turn. How does that work? Why is that important? What does that unlock?
1:01
So people are starting to use these tools for very long pieces of work at one time, you know, multi-hour tasks. And sometimes you don't specify it correctly, right? Sometimes something's not set up right; something just screws up. They can do amazing things with no steering, but they can do much more amazing things if you steer them along the way. So this is one of the things that felt most new about this model.
1:08
So talk about orchestration, how this fits into Frontier, because I want one second.
1:31
It's notable. Like, if you see a coworker making a mistake and you don't interrupt them, that's rude, right? It's deeply inefficient.
1:35
It is incredible what these models can do without any feedback. But think about a new coworker: you train them and you give them a lot of feedback early on, and they learn the job, and you correct them, and they kind of get practice. The models will soon do that, but right now they don't. So we rely on either them getting it right in one shot, or us correcting them along the way.
1:44
Yeah. So I think there are a lot of people running multiple agents in multiple tabs, and they're starting to think about orchestration. Feels like Frontier is a piece of that. But if you're interacting with a model that's running mid-turn, how does the user experience change for developers with 5.3, and then what will it look like in the Frontier world?
2:05
I think we will be heading towards a workflow where a lot of people just feel like they're managing a team of agents and they'll keep. As the agents get better, they'll keep operating at a higher and higher level of abstraction, which at least watching what's happening so far is a jump that people are going to make pretty well. The models are so good now. There's such a capability overhang that building better tools to let people do that, which the Codex app that we launched on Monday was a great step forward for, will be very, very important. But you will be managing very complex workflows. The agents will keep getting better, so you'll keep working at the maximum of your management bandwidth or cognitive ability to keep track of all the stuff. And the tools to make that easy to do will matter, I think, more than intelligence for a little while because there's such an intelligence overhang already.
2:27
What's the role of a forward deployed engineer at OpenAI today versus toward the end of the year? That capability overhang feels like raw meat for a forward deployed engineer. It's like, they solve that problem.
3:20
Yeah, I mean, look: eventually the models will get so good that they'll help companies deploy themselves, and the forward deployed engineers will again get to work at a higher level of abstraction. But for now, you go into a company that is not AI native. They say they want to deploy AI, but they really are not sure what to do. How do I hook this up to my systems? Do I need to fine-tune a model on my code base? How do I think about orchestrating agents and using things from different companies? Most of all, or at least what we hear most frequently: how do I think about the security of my data, and how do I know that these AI coworking agents are not going to go access a bunch of information and share it in ways they shouldn't, or get, you know, a context exploit or something like that. So the forward deployed engineers take this incredible new technology and a platform like Frontier and say: we will connect your company to an AI platform so that you can use all these agents and workflows and everything else you want.
3:35
How important are these metaphors, and how temporary are they? I was very interested in reading about Gastown, and you have these polecats and it's this whole Mad Max world. That feels like maybe just a temporary aberration where you're setting up agents for specific tasks, but it also could be incredibly valuable in explaining to a large corporation how they're going to integrate AI across the whole organization.
4:31
Yeah, I suspect, like everything else that's happening when an industry is moving this fast, all of this is somewhat temporary on a long enough timescale. As these models become more capable, and these agents are operating on very long time horizons with the ability to just kind of figure it out, and our trust in their robustness keeps going up, then maybe you don't need a lot of the abstractions we need today. You know, maybe you just have a single AI bot that runs at your company, and you can say, hey, I want to launch this new product, and it does everything an ambitious person would do. But that's not where we are today. So today we have to put in a little more to get the pieces put together.
4:55
Yeah. How are you thinking about the METR benchmark for long task horizons? You're at the top of the charts. At the same time, it feels like we might need a new chart if we're talking about agent swarms, because they'll be able to do things that go for weeks, but they will subdivide the work. There's some subdivision that happens within a reasoning model, but it doesn't truly parallelize, at least that I'm aware of. So what does it look like in a world where you go to a model, but now it's spinning up a whole bunch of different models underneath?
5:42
I think two of the key insights of the whole field are in this question. Number one, the implication that no chart in AI lasts more than a few years is right. And this one, kind of, you know, we'll see how much longer it's really useful.
6:13
Yeah.
6:26
The second is, a lot of people thought, okay, we're going to need super long tasks and super long task horizons, so we need super long context. And what people have already seen with coding agents is that by agents breaking up work, orchestrating it well, and farming it out to sub-agents, even with the current limitations of the technology, we can do something which should not be surprising, because it's similar to how people do things and get amazing amounts of work done. So that's been cool to watch. I think that will keep going. A joke that some people at OpenAI make is that soon the chart that matters is just going to be GDP impact. And then the question is, what's the one that comes after that? As for everything else, a lot of these proxy metrics: there's now so much economic value in what the models are doing.
6:27
What do you think could come after?
7:15
I have no idea. Do you have an opinion?
7:16
Happiness.
7:18
I don't know. When you look back at some of your blog posts from 10 years ago, your predictions were usually pretty on point. Yeah, maybe it's harder to predict.
7:18
Thank you. Thank you.
7:28
Specifically the merge, basically.
7:31
I mean, there's a bunch that are good.
7:33
Like right now we're getting, you know, thousands of messages in the chat about 4o. And you predicted in 2016 that people could become, you know, very attached to a chatbot.
7:34
Yeah, I'm working on like a big prediction blog post. The next 10 years seems too far, so maybe the next five. I'm sure a lot of it will be wrong, but it's still fun to try. The sort of relationships with chatbots: clearly that's something we've now got to worry about more, and it's no longer an abstract concept. Even the question of what comes after GDP, one reason I think that's interesting is the way we measure GDP now could start going down even though quality of life goes way up, and we don't have a lot of practice with things like that. But it's massively deflationary.
7:46
Not just Europe.
8:31
Not just Europe. We want quality of life going up too, even as GDP keeps going down.
8:32
Switching gears: what do you think about the new-lab boom, the research efforts that are happening all over Silicon Valley? It feels like there's an acknowledgement that there are research breakthroughs that need to happen, and everyone's taking different shots at them. Do you think those companies will just find a breakthrough and join a lab, or launch their own products?
8:39
I think it's fantastic. First of all, one of the meta things that we wanted to do when we started OpenAI: there had been a period where the technology industry, and Silicon Valley in particular, was amazing at starting new research labs, at doing new research in industry in general. And then it kind of fell apart, and there hadn't been a good one in a while. Part of what we hoped to show, and this was not only us, a lot of people were excited about new research labs, is that industry could do research again. So seeing that become fashionable now, all of these new labs, I think it's totally awesome. Some will succeed, some will fail, some will kind of fold into some other effort. But having industry support research in, you know, startup style, I think it's wonderful.
9:02
Over the next two years, would you expect to acquire more individual product companies or more research labs?
9:44
Good question. I don't have a strong opinion there. I would bet... well, I'd say the very best ones will often look like a mixture of both. The one that I have in mind right now is something that very much looks like a mixture of both. So maybe that's the shape of things to come: the really, truly extraordinary product work will have more and more of a research component, and it'll be more of a hybrid thing.
9:54
Is data the new oil? Is there value there?
10:22
Yeah, we were joking a couple of
10:24
days ago about companies sitting on a bunch of data. But they don't understand AI. They don't know how to monetize it well.
10:26
And that data was effectively wasted a decade ago.
10:30
Yeah.
10:32
And so to say it now sounds really silly.
10:33
Yeah.
10:35
But it feels like it could be more true now than ever.
10:35
Yeah.
10:38
You know, certainly. Yeah, man, they really did waste it a decade ago. I was just thinking of the kind of people that used to say that.
10:40
And yeah, TED talks.
10:45
You're not supposed to call them out. They're nice people over ted.
10:47
You know, definitely the sort of magic relationship of these last eight years, or whatever you want to call it, has been that we can put in more and more resources, compute, data, new ideas, whatever, into creating an artifact, and the log of it gets better. That's why we have this huge exponential increase in resources, but we keep getting better and better models. And for all of the concern people have about it topping out or slowing down or whatever, no one's been right about that. Sometimes it looked like they were for a couple of months as we digested a new model or came to a new form factor, but it has been an incredibly smooth last six or eight years. As for what those resources are, there can be some trade-offs: sometimes it's better to spend your money on better data, sometimes on more compute, sometimes on something else. On the whole, "compute is the new oil" is the statement that feels closest to true to me. But there will be other parts too.
10:53
Is software dead?
12:04
It's different. It's definitely not dead. But what software is, how you create it, how you're going to use it, how much you'll have written for you each time you need it versus how much you'll want a consistent UX: that's all going to change. There have been a number of these big sell-offs of SaaS stocks over the last few years as these models have rolled out. I expect there will continue to be more. I expect there will be big booms in software. I think it's just going to be volatile for a while as people figure out what this looks like. The statement that someone said to me that has stuck in my mind most these last couple of weeks is that every company is an API company now, whether they want to be or not. Oh yeah, because agents are just going to be able to write against anything.
12:08
Yeah, we had Dara from Uber on yesterday, and he had a pretty refreshing kind of approach. We were asking about integrating agents with Uber, and he recognized that, yeah, the ad business could potentially be threatened if you can order an Uber in ChatGPT. But he basically said, you have to think of the consumer. If the consumer wants to order an Uber via their preferred agent, you should let them; otherwise you're going to have other problems.
12:56
Yeah, that is the right take for sure. Or I think so, at least. And we've been through platform shifts like this before. I mean, Uber wouldn't have existed without one: it wasn't until the iPhone that it made sense to order an Uber right to where you were, out in the world. So I think there will be totally new things that happen, and other things you'll use in new ways. But definitely, as I've started using Codex, my excitement about having agents go off and do things for me, and still use other services, pay other services, has grown. I'm sure we'll need to figure out new business models and how revenue gets shared around, but that will happen.
13:23
Yeah. Talk more about Codex Desktop.
14:00
One more question on SaaS. Have any public-market SaaS companies tried to get a soft landing with OpenAI? And do you think there's any value there? Just, let's say...
14:01
No. No public SaaS companies that I'm aware of have tried that with OpenAI. But look, some of them, some of...
14:15
Some of them will certainly be durable and are on sale right now, and potentially just need new energy, kind of an entirely new approach, and maybe OpenAI could provide that.
14:21
Yeah, yeah.
14:35
I think some will be incredibly valuable. Some do feel like a thinner layer now. But I don't know. I was talking recently to a bunch of SaaS companies, and they do not feel unexcited. They're like, we're going to go through a big transformation here, and, you know, sure, other people can instantly write software now, but so can we, and we've got a great system of record. That seems reasonable. Some will make it, of course.
14:35
Yeah, yeah, yeah. Talk more about the Codex Desktop rollout. It feels like, you know, a successful amount of downloads, but also a key shift for people who are maybe lightly technical but don't have time to set up an IDE and configure an environment to actually start writing software. I want to know about plans to integrate it with the phone. That was a big moment, I think, for a lot of people with the Clawdbot/Moltbot/OpenClaw thing: like, oh, I can text something and it will go and write code, and that's valuable, and that unlocks a new agentic experience. Where do you see the Codex Desktop ecosystem going?
15:01
So, Codex Desktop has been somewhat of a surprise to me in terms of how much people love it, including how much I love it myself. I think it's a great example of how 10% more polish on the experience of using these models, especially when there's so much capability overhang, goes an extremely long way toward what you can build and how you interact with this stuff. Of course we should have the ability to kick off new tasks from mobile, and we'll do that. Really, what you want is your single AI that's working for you on a unified backend, with access to all of your data and your ideas and your memory, all the context, and the ability to work across a lot of surfaces. Often you'll be at your desktop, often you'll be on your phone, and you just want to add something in. But it is a pretty profound shift in my own workflow, not just for coding tasks but for more general-purpose tasks. It's still kind of hard to use if you're not at least reasonably technical, but obviously we'll find a version of this product that can do other knowledge work tasks and control your computer and things like that, where you don't have to be. And it'll bring the magic of building stuff to a lot of people, because even if you never look at code, you'll be able to build something reasonably sophisticated. One of the things I built when I was playing around with the new Codex app is something I had always wanted: a magic auto-completing to-do list. I really work with to-do lists, and the idea is that I could just put tasks in and it would try to go do them. If it could complete them, it would. If it needed to ask me questions, it would ask. If I had to do something myself, I could still do it the old-fashioned way.
But an interface like that, where you just explain all the stuff you want to do to a computer, or your AI, and it tries to go off and do it, and sure, if you're on your phone you add a task on your phone, or if you want to easily import something from email you do that, feels really good. So I'm excited about all of the ways that this will just become a general knowledge work agent.
15:43
Yeah.
17:48
Were you
17:49
unsurprised to see a product like OpenClaw come from open source? I would imagine this is something that you knew would be a thing, and yet I think part of the magic of OpenClaw is that it would be very, very difficult for a large tech company to build.
17:51
Peter didn't make many phone calls to hyperscalers to say hey, I'm going to be integrating your API.
18:11
It just went out. And, you know, when I think back on the Sky acquisition, this kind of experience was probably very top of mind and among the things you're working towards internally.
18:17
I love the spirit of everything about OpenClaw, and you are totally right that it's much easier to imagine a one-person open source project doing something like that than a company that is going to be afraid of lawsuits, data privacy, and everything else. I think this is kind of how innovation works: something like that starts, it's clearly amazing, and there will be a way to make a mass-market version of that product. But letting the builders build, letting the equivalent of the Homebrew Computer Club spirit play out here, is so important.
18:28
Yeah, totally. Can we switch to social? I feel like if I Google "Sam Altman social" I get pure AI in Sora, and then also demand for, or predictions about, a human-only social network. Where do you see social going broadly? How do you want to integrate with it and power it in the future?
19:00
The Moltbook thing was a very interesting social experiment to watch, and I think it points to how agents interacting in some sort of social space, hopefully on behalf of people at least to some degree, could be quite interesting. I don't think we know what to do there yet, but it feels like social is going to change a lot. And I am interested in the space of what a social experience can look like when your agent is talking to my agent and coming up with new stuff. Clearly, putting a lot of AI bots on the existing social platforms is just making everyone crazy and not that fun. So that's not the right answer. But I think we can design something new for what this technology is capable of that will feel good and useful.
19:21
Yeah. Is there a solution to the bot problem that's just all the labs integrating with all the other platforms? Even if you can't detect that it's AI-generated, you can literally say: we generated those tokens, those exact tokens are in our database.
20:10
They can't do that, because open source models are good enough at writing at this point.
20:26
Yeah, yeah.
20:30
I am excited about assertion of humanity, rather than detection of AI, as the thing here. I don't know if it's in the social platforms' interest to solve this, because, at least in the short term, it creates a lot of engagement and increased usage. So I believe they could solve it if they wanted to; I'm just not sure it's in their interest. I don't like it, but some people do seem to enjoy it.
20:33
Yeah. Can you talk a little bit more about where Sora as a video generation model is going? It feels like tool use is maybe under-discussed, you know, adding reasoning. It's not just the diffusion model; it's giving these models the ability to make linear cuts and overlay motion graphics. When I scroll the Instagram Reels that I see, they're like vibe reels with cutouts, and it flips negative, and it's all color graded. It's stuff that, yeah, you could probably diffuse it all, but it's pretty cool just to teach a model to also use After Effects or whatever video motion graphics suite you want to use. Is that an interesting unlock? What do you see going on?
20:59
So all of that stuff will happen and I agree with you, the models will get really great at doing that. People love generating videos.
21:44
Sure.
21:50
I would say we have not yet found a way that people really love watching other people's videos. This is true for a lot of other AI. Like, people love talking to ChatGPT or whatever; it's not that compelling for most people to read other people's ChatGPT generations. So I think there is something there.
21:51
But isn't that true for all writing and all videos?
22:07
Yeah, it seems stronger to me in this case than the general case. But maybe you're right. Maybe this is...
22:11
Yeah, but if somebody says, hey, I generated a 15 minute video, I'm really excited for you to watch it. And you watch the first 10 seconds and you're not that captured by it. I don't care that it was human made.
22:15
Yeah, maybe you're right. And it's not a special case.
22:27
Do you see that in the data with Sora downloads? Because I've noticed that I'll generate stuff on Sora, download it, and share it to a group chat, and then it's this little in-joke that me and five other people get. And we see these family group chats of, oh, it's our dog and our kids. But there's not really the, okay, yeah, this is a business, you know, everyone likes this.
22:31
Absolutely. I would say the most common use case is something like that. You know, memes in group chats is a real killer use case.
22:51
How is the Disney rollout going? I was super excited about it. Jordi was extremely bullish on it from a business strategy perspective.
22:59
Look at how image models have grown various LLMs historically, and now you're going to have an image and video model that can do something that no other model can do, at least legally.
23:06
Yeah. And is Bob Iger joining OpenAI?
23:17
Bob Iger. I love it. No, thanks for that.
23:21
He's going to be looking for a job in here. He's a free agent.
23:24
Pick him up, hit him up for us, you know, do some recruiting. That'd be great.
23:27
23:32
I think that generating characters in images and videos is going to be very important to people, and they really like that. Like we were saying, though, otherwise I don't think many people want to watch me and some Star Wars character doing something together, but I might think it's cool. You know, there's a real trend going on right now with ChatGPT of "make a caricature of me and my job based off of everything you know about me," and those kinds of things, people actually do like looking at other people's media.
23:33
Yeah. It's almost like a face filter or something. It's the Studio Ghibli moment: there's enough of the human still in there that yours is not the same as mine, so it's still personalized.
24:05
It's personalized and it says something about you. And, you know, a lot of what's gone viral before with image gen, I think it's: if you can make people look a little bit more attractive or cooler than they are in real life...
24:17
Yeah.
24:32
Without sort of having to ask for that.
24:32
Yeah. How are you thinking about the actual rollout? We were debating between, like, opening the floodgates, you can generate any Disney property, versus, like, it's Spider-Man week and everyone's posting Spider-Man, and then it's Mickey Mouse week and there's another viral moment.
24:34
I'm not sure what the team is planning there. I know Disney has had some different opinions about what they want to do, and we try to be a good partner there, but I'd be excited to open the floodgates personally.
24:50
Oh, that'd be fun. Cool. Speaking of video, talk about your first Super Bowl ad. It felt like it was not generated: lots of motion graphics, the black dots coming together. What was the goal with that ad? Who were you trying to speak to? It didn't feel like a direct-response, QR-code, download-the-app kind of ad. What was the mission?
24:58
I love that ad. I think that was such a cool one. It was clearly not meant to be a mass-market or direct-response ad, but to speak to the people who are at the center of this revolution, and to celebrate everything that has come before and everything that will come after. We didn't hear a lot about it from average users of ChatGPT, but we heard a lot about it from researchers in the field; it resonated a lot there. It was definitely not generated. It was done the old-fashioned way. A lot of people loved it, a lot of people hated it, and many people in the middle didn't get it, and I felt okay about that. It's a great encapsulation. I like our ad for Sunday too. It's about Codex. No surprise.
25:21
But yeah, talk about the evolution of the advertising to be more clear about the actual use case, the value. What are you trying to say with your advertising strategy now, as it relates to video?
26:08
Well, the thing I would most like us to say, and I think this is a new challenge given where the models are, is to teach people what they can go do with AI. AI is now unbelievably capable, and most of the world is still asking it basic questions on ChatGPT. Everyone can go build amazing things now. Everyone can go do all kinds of work. Scientists are going to make new discoveries. To the degree that the advertising we do can teach people how to use this, I think that'd be awesome.
26:27
The KPI is like reduce the capability overhang broadly.
26:57
I think that should be a general KPI for us, not just for our ads: the products that we build, how we teach people to use those products. That feels very important.
27:02
Yeah. Anthropic also has a bunch of ads in the Super Bowl. Seems like they ran a ton.
27:13
So I've heard.
27:18
What do you think they're getting wrong in their characterization of how ads will roll out in chat apps?
27:18
Well, it's just wrong. The main thing is: we are not stupid. We respect our users. We understand that if we did something like what those ads depict, people would rightfully stop using the product. Our first principle with ads is that we're not going to put stuff into the LLM stream that would feel crazy, dystopic, like a bad sci-fi movie. So the main thing that's wrong with those ads is that using a deceptive ad to criticize deceptive ads feels… I don't know, something doesn't sit right with me about that.
27:30
I asked Claude what the definition of playing dirty is. And it said, what did it say? Misleading others about your intentions, hiding information, or creating false impressions. Yeah, it thought it was a little dirty. I thought it was well played. But it was, it was...
28:06
It was well played, for sure. And it was a funny ad. And, you know, the stuff about the ChatGPT personality that most annoys me, which we'll fix very soon, I thought they nailed in the ad, so that part was funny. But I don't know. I also think it's great for them not to do ads; we have a different-shaped business. I did notice that they said in their post something like, we may later revise this decision, and we'll explain why if so.
28:27
Yeah, the blog post kind of did a good job of disarming the pro-ad people, and they gave themselves an out in the future. Do you think they care?
28:55
I think it doesn't matter. I think it's a sideshow. You know, people are excited for a food fight between companies, but the amazing capabilities of these models, the product, the kind of groundswell of excitement around Codex: that feels way more important to me.
29:07
How do you stop the pausing that happens in voice mode? Do you need new hardware for that or is it a model capability thing?
29:26
We need a new model. We may need some new hardware too, but mostly we just need a new model. I think we will have a great voice mode by the end of this year.
29:33
What's the bigger bottleneck? Energy or chips?
29:42
It goes back and forth. Right now, again, it's chips. But, you know, it'll be different at different times.
29:45
Is there anything we, like, society, America, should be doing more aggressively to increase the supply of fabs?
29:51
Yeah, I think there is. Well, it may get solved on its own. Normal capitalism may solve it. But I think somehow deciding as a society that we are going to increase the wafer capacity of the world, and we're going to fund that, and we're going to get the whole supply chain and the talented people we need to make that happen, would be a very good thing to do.
30:00
Do you think there's an upper bound on model IQ? The race right now is: you're smart, but you're not smart for days, you're smart for hours. Can you go much farther and get much smarter?
30:23
A certain upper bound? I don't know. I don't know how to think about that question yet.
30:37
I can't even reason about what 2000 IQ looks like. I don't know what that means.
30:44
It's funny you say that. I can't even reason about what it means to think about a problem for 10,000 human years.
30:50
That's another good one.
30:56
Yeah, yeah.
30:57
That's crazy.
30:58
But maybe IQ is going to feel even weirder. Yeah, I don't know. I somehow feel like this isn't going to feel as strange as it sounds, for a bunch of reasons. We're so focused on other people, we're so focused on our own lives. We have such a human-centric nature that, like, okay, this thing is really smart, it's inventing new science for us, it's running companies for us, it's doing all this stuff, and that sounds like it should be impossibly weird. And I think it'll just be very weird.
30:58
Do you think space data centers will provide a meaningful amount of compute for OpenAI in the next two to three years? Five years? No? 10 years?
31:31
You just keep going. 10,000 years?
31:44
I wish Elon luck.
31:48
Okay. The funny thing about the whole back and forth about ads is that in our world, the criticism is that you didn't launch ads early enough. Is there a world where you wish you had launched earlier? How is the actual rollout going? Are advertisers happy? Do you have a really long roadmap, or do you think you'll be fast at catching up to what's frontier in ads?
31:50
We haven't started the test yet. We start the test soon, but, you know, it's going to take us some number of iterations to figure out the right ad unit, the right way this all works. Do I wish we had started earlier? We have gone from, like, not a company three years and three months ago or something like that. We were like a research lab, and now we are a pretty big company with a lot of products. So there are many things I wish we had done faster. I think we were correct on the trade-off here of how we balanced things we needed to do. You know, we launched this very cool enterprise platform this morning. I wish we had done that earlier too, but, like, we had to deal with the monstrous growth of ChatGPT and Codex and all this other stuff.
32:13
Good problems to have. Last question for me. What happened to that internal writing model that you used to write the essay? That feels like something that was really cool but never really saw the light of day.
32:54
We're going to get a lot of that spirit into a future model. Again, there's so much stuff happening. We have to make these hard prioritization decisions.
33:09
Sure.
33:19
I would love a cool writing model, though not as much as I would love a cool coding model. And what is possible now for coding, for science, that's the thing I'm most excited about: accelerating all kinds of research, AI and otherwise, really accelerating the economy. I think that's the right thing for us to most prioritize in terms of new capabilities. But yeah, you want a model that can write beautifully, because a model can only write beautifully if it can also think very clearly and express that very clearly. That's just useful in normal work.
33:19
Yeah, that makes sense.
33:56
Last question for me. How have conversations been with the broader OpenAI leadership team? You guys are in a position where any single word or sentence you say in any situation can be spun into a headline immediately, and then you have to go on damage control, kind of correcting the narrative. But, of course, the original news is often seen more broadly than the correction. It seems like an interesting challenge.
33:57
It is a strange way to live. And I don't know of any private company that has ever been so in the news and so under a microscope. At some level, it's frustrating. We're so squarely in the sights of everybody's anxieties, every competitor trying to take us down, and everybody wondering what is going to happen with AI to their part of the business or their own lives, that there's, like, a lot of plasma looking for an instability to collapse on. In some other sense, though, the subjective experience of it is that we are so busy on so much exciting stuff that it often feels like there's this crazy hurricane turning around us, and where we sit, it's fairly calm. You know, the media or Twitter goes insane about something one day. They're talking about a crazy meltdown. We're like, that is insane. Okay. And people talk about it all day and then later find out it's wrong, and it sort of seems like a lot of wasted energy. But we're just like, we have this great new model coming. People are building incredible stuff. Companies are transforming. We're trying to figure out how to get more compute and deal with this compute crunch. And we just kind of keep going, and we're busy. And then if we open Twitter, pop up our heads and look at the news, it's like, wow, that is an insane, crazy thing happening, completely divorced from reality, or 99% divorced from reality. And, like, okay, someone will correct it. But then we get back to work, and people flip out again. And it is weird to watch when we look outside, but it is less chaotic internally than I think you would imagine from reading the media reports.
34:26
Well, thank you so much for taking the time to come chat with us. Congrats on the launch.
36:11
Thank you. I'm excited to see the Codex ad.
36:15
Me too.
36:18
Please try it. The app and 5.3 have been, I think, the coolest thing we've done in a while.
36:18
Yes. With one prompt, I rebuilt the tbpn.com homepage to look exactly like Berkshire Hathaway's. And it was just immediate. It was very fun.
36:23
Interesting choice.
36:31
Plain text. It was very easy. Immediate, one shot. That didn't really push it to its limits, but I'm having fun. So thank you so much for coming on the show. We'll talk to you soon.
36:33
Thank you.
36:40
Great to catch up.
36:40
Goodbye.
36:41
Cheers.
36:41