“Engineers are becoming sorcerers” | The future of software development with OpenAI’s Sherwin Wu
80 min
Feb 12, 2026

Summary
Sherwin Wu, head of engineering for OpenAI's API and developer platform, discusses how AI is fundamentally transforming software engineering roles, enabling one-person billion-dollar startups, and creating unprecedented opportunities in business process automation. He shares insights on why listening to customers can be misleading in rapidly evolving AI markets, the importance of building for where models are heading rather than where they are today, and how organizations should approach AI adoption through bottom-up evangelism rather than top-down mandates.
Insights
- 95% of OpenAI engineers use Codex daily with 100% of PRs reviewed by AI, representing a fundamental shift from code writing to code orchestration and agent management
- The 'sorcerer's apprentice' problem is real: engineers managing 10-20 parallel AI agent threads must maintain oversight to prevent systems from going rogue, requiring new skill sets
- Customer feedback can be misleading in AI because models disrupt their own scaffolding—vector stores, agent frameworks, and other tools become obsolete as models improve, making forward-looking product strategy more valuable than reactive customer requests
- AI adoption fails when driven top-down by executives; success requires bottom-up evangelism from technical power users who discover use cases and spread best practices organically
- The real opportunity isn't one-person billion-dollar startups but a golden age of B2B SaaS with thousands of specialized $10-50M companies building vertical-specific tools for other AI-native businesses
Trends
- AI agents transitioning from minutes-long tasks to multi-hour coherent task execution within 12-18 months, requiring fundamentally different product design
- Business process automation emerging as a larger opportunity than software engineering, yet severely underrated due to Silicon Valley's tech-centric bubble
- Multimodal audio models becoming enterprise-critical as speech and audio represent the majority of business operations, currently underexplored compared to text/coding
- Manager span of control expanding from 6-8 direct reports to potentially 20+ as AI tools enable higher leverage in people management and organizational context understanding
- Startup ecosystem bifurcation: fewer venture-scale returns but an explosion of sustainable $10-50M micro-companies built by high-agency individuals using AI leverage
- Code review bottleneck elimination through AI review reducing 10-15 minute reviews to 2-3 minutes, enabling 70% more PR throughput among power users
- Documentation and tribal knowledge becoming critical infrastructure as AI agents require explicit context encoding to function effectively
- Platform neutrality and ecosystem investment becoming a competitive advantage as OpenAI releases all models to the API to foster third-party innovation
- Stress and anxiety around AI tool proliferation creating a need for curated learning paths rather than comprehensive knowledge of every new release
- Floor plan and curb appeal as underestimated variables in real estate pricing, highlighting the importance of qualitative factors in quantitative models
Topics
- AI-Assisted Code Generation and Review
- Software Engineer Role Evolution
- AI Agent Management and Orchestration
- Business Process Automation
- API Platform Strategy and Ecosystem
- Model Capability Forecasting
- Bottom-Up vs Top-Down AI Adoption
- Engineering Manager Leverage in AI Era
- Multimodal Audio Models
- Startup Economics in AI Era
- Customer Feedback Paradox in Rapid Innovation
- Scaffolding and Model Evolution
- Context Management for AI Agents
- Real Estate Valuation Models
- ChatGPT App Store and Platform Expansion
Companies
OpenAI
Sherwin Wu is head of engineering for OpenAI's API and developer platform; company uses Codex for 95% of engineer cod...
Cursor
Mentioned as hugely successful AI coding tool competing in space with OpenAI's Codex, demonstrating market opportunit...
Quora
Sherwin's first job out of college where he worked on News Feed and managed code reviews, formative experience in his...
OpenDoor
Previous company where Sherwin led work on real estate valuation models, discovering importance of power lines and fl...
Dropbox
Mentioned as company using DX platform to measure AI impact on developer productivity and engineering value
Booking.com
Mentioned as company using DX platform to measure AI impact on developer productivity and engineering value
Adyen
Mentioned as company using DX platform to measure AI impact on developer productivity and engineering value
Intercom
Mentioned as company using DX platform to measure AI impact on developer productivity and engineering value
Sentry
Sponsor providing error tracking and debugging platform with AI-powered Seer agent for PR review and root cause analysis
Datadog
Sponsor offering product analytics, experimentation, and feature flagging platform for product teams and engineering
Fintool
AI agent startup for financial services whose founder Nicholas shared insight that 'models will eat your scaffolding ...
Ubiquiti
Sherwin's favorite recent product discovery for home networking and security cameras, described as 'Apple of home net...
Eero
Home networking product Sherwin previously used before switching to Ubiquiti for superior software and mobile app exp...
Apple
Subject of Patrick McGee book on Apple's relationship with China; Sherwin describes himself as 'huge Apple fanboy'
Google
Mentioned as major AI lab that could theoretically compete with OpenAI startups, but market is large enough for all
People
Sherwin Wu
Head of engineering for OpenAI's API and developer platform; primary guest discussing AI's impact on software enginee...
Sam Altman
OpenAI CEO credited with coining phrase 'one-person billion-dollar startup' that Sherwin discusses as second/third-or...
Kevin Weil
OpenAI VP of Science quoted as saying 'this is the worst the models will ever be,' reinforcing importance of building...
Peter
Clawdbot/Moltbot/OpenClaw creator mentioned as power user of Codex who trusts AI output enough to commit directly to...
Greg
OpenAI leader who reinforces company's commitment to ecosystem platform strategy and not squashing competitor startups
Marc Andreessen
Venture capitalist quoted on podcast saying 'AI makes good people better and great people exceptional,' aligning with...
Lenny Rachitsky
Podcast host conducting interview with Sherwin Wu about AI's impact on software engineering and business
Nicholas
Fintool founder who shared insight that 'models will eat your scaffolding for breakfast' about AI model evolution
Dan Wang
Author of 'Breakneck' book on US-China relations that Sherwin recommends for understanding different societal approaches
Patrick McGee
Author of book on Apple's relationship with China that Sherwin found fascinating and timely for understanding tech co...
QNTM
Online fiction author of 'There Is No Antimemetics Division' that Sherwin recently finished and highly recommends
Quotes
"Engineers are becoming tech leads. They're managing fleets and fleets of agents. It literally feels like we're wizards casting all these spells."
Sherwin Wu•Early in episode
"The models will eat your scaffolding for breakfast."
Nicholas (Fintool founder), quoted by Sherwin Wu•Mid-episode
"This is the worst the models will ever be."
Kevin Weil (OpenAI VP of Science), quoted by Sherwin Wu•Early in episode
"Make sure you're building for where the models are going and not where they are today."
Sherwin Wu•Mid-episode
"I think we might actually enter into a golden age of B2B SaaS. There might be a hundred other small startups building bespoke software to support one-person billion-dollar startups."
Sherwin Wu•Mid-episode
"Never feel sorry for yourself. You always have a sense of agency to pull yourself up."
Sherwin Wu•Lightning round
Full Transcript
95% of engineers use Codex. 100% of our PRs are reviewed by Codex. For engineers, I don't know what job has changed more in the past couple years. Engineers are becoming tech leads. They're managing fleets and fleets of agents. It literally feels like we're wizards casting all these spells. And these spells are kind of going out and doing things for you. What do you think people aren't pricing in yet? The second- or third-order effects of the one-person billion-dollar startup. To enable a one-person billion-dollar startup, there might be a hundred other small startups building bespoke software. So I think we might actually enter into a golden age of B2B SaaS. I've been hearing more and more about this stress people feel when their agents aren't working. There's a team that's actually doing an experiment right now at OpenAI where they are maintaining a 100% Codex-written codebase. They run into the exact problems that you're describing. And so usually you're like, all right, I'll roll up my sleeves and figure it out. This team doesn't have that escape hatch. You've shared that listening to customers is not always the right strategy in AI. The field and the models themselves are just changing so, so quickly. They tend to disrupt themselves. The models will eat your scaffolding for breakfast. What's your advice to folks who are like, okay, I don't want to miss the boat? Make sure you're building for where the models are going and not where they are today. There's a quote from Kevin Weil, our VP of science here. He likes saying this is the worst the models will ever be. Today, my guest is Sherwin Wu, head of engineering for OpenAI's API and developer platform. Considering that essentially every AI startup integrates with OpenAI's APIs, Sherwin has an incredibly unique and broad view into what is going on and where things are heading. Let's get into it after a short word from our wonderful sponsors.
Today's episode is brought to you by DX, the developer intelligence platform designed by leading researchers. To thrive in the AI era, organizations need to adapt quickly. But many organization leaders struggle to answer pressing questions like: which tools are working? How are they being used? What's actually driving value? DX provides the data and insights that leaders need to navigate this shift. With DX, companies like Dropbox, Booking.com, Adyen, and Intercom get a deep understanding of how AI is providing value to their developers and what impact AI is having on engineering productivity. To learn more, visit DX's website at getdx.com slash Lenny. That's getdx.com slash Lenny. Applications break in all kinds of ways. Crashes, slowdowns, regressions, and the stuff that you only see once real users show up. Sentry catches it all. See what happened, where, and why, down to the commit that introduced the error, the developer who shipped it, and the exact line of code, all in one connected view. I've definitely tried the five-tabs-and-a-Slack-thread approach to debugging. This is better. Sentry shows you how the request moved, what ran, what slowed down, and what users saw. Seer, Sentry's AI debugging agent, takes it from there. It uses all of that Sentry context to tell you the root cause, suggest a fix, and even opens a PR for you. It also reviews your PRs and flags any breaking changes with fixes ready to go. Try Sentry and Seer for free at sentry.io slash Lenny and use code Lenny for $100 in Sentry credits. That's S-E-N-T-R-Y dot I-O slash Lenny. Sherwin, thank you so much for being here and welcome to the podcast. Thank you. Thank you for having me. I want to start with what's feeling like a barometer of progress in AI, especially in engineering. What percentage of your code, if you even write code anymore, and your team's code is written by AI at this point? I do write code occasionally now still.
I'd actually say for managers like myself, it's way easier to use these AI tools than to manually code at this point. And so I know for myself and some of the other EMs, engineering managers at OpenAI, all of our code is written by Codex at this point. But more broadly, there's just so much energy. There's a tangible energy internally around just how far these tools have gotten, how good Codex as a tool has gotten for us. And it's a little hard for us to exactly measure how much of the code is written by AI, because the vast majority of it, I'd say close to 100%, is usually generated by AI first. What we do track, though, is that at this point the vast majority of engineers use Codex on a daily basis. So 95% of engineers use Codex. 100% of our PRs are reviewed by Codex daily as well. So basically any code that goes into production, that's merged in, Codex has its eyes on, and it suggests improvements and changes in the PRs. And so that's what we're seeing internally. But by and large, the most exciting thing is just the energy that there is. Another observation we've had is that engineers who tend to use Codex more open way more PRs. They're actually opening 70% more PRs than the engineers who aren't using Codex as much. And the gap is widening. So I feel like the people who are opening more PRs are learning how to use the tool more and more, getting more efficient, and that 70% gap keeps growing over time. It might have actually increased since I last looked at the number. Okay, so just to make sure we hear what you're saying: you're saying all of the code of these 95% of engineers at OpenAI is written by AI. It's written, and then they review it. Yep, yep. It's like crazy that that's almost not crazy anymore, that we're just getting used to this. I think there's still some getting used to it, to be clear. There are also, I think, some engineers who trust Codex a little bit less.
But basically every day I talk to someone who is blown away by something that it can do, and their bar of trust, how much they trust the model to do on its own, goes up over time. And there's a quote from Kevin Weil, our VP of science here. He likes saying this is the worst the models will ever be. And so this is the worst the models will ever be for software engineering as well. Over time we just see people trusting it more and more, and then we'll see the models get better and better as well. Yeah, Kevin Weil, former podcast guest. He said exactly that line on this podcast. Yeah, a few times. Peter, the Clawdbot slash Moltbot slash OpenClaw (as it's called now) developer, recently shared that he uses Codex for his work, and he feels like anytime it does things, he just trusts that it has done the right job. He's almost certain he could just commit it to master and it'll be great. Yeah, yeah. He's a great user of Codex. I know he's in close touch with the team, gives us great feedback. I'm not surprised that he uses it. I mean, sorry, it's called OpenClaw. OpenClaw, yeah. OpenClaw is a great product. And then I saw this morning, I mean, this is very recent, but this morning, I think Moltbook was going around as well, and seeing all the AI agents talk to each other is pretty surreal. It's basically Her happening in real life, is what I'm hearing. Yeah. So just coming back to this crazy moment we are living through for engineers in particular: we've gone from you write every line of code to now AI is writing all of your code. I don't know what job has changed more in the past couple of years, a job that we didn't expect to change this much. The job of an engineer is so different within the career span of a single engineer. In the past couple of years, it's shifted to: I don't write any more code.
How do you imagine the role of an engineer and the job of a software engineer looks in the next couple of years? Just, what is that job? Yeah, it's honestly been really cool to see. And it's part of where the excitement is, because the job is likely going to change pretty significantly over the next one to two years. It kind of feels like we're still figuring things out, though. And so there's this excitement, especially from some of the software engineers, of: we're in this rare moment, maybe over the next 12 to 24 months, where we'll get to figure things out ourselves and set our standards for ourselves. In terms of where I see this moving, I think there's a common thing that everyone's saying, which is that IC engineers are becoming tech leads. They're basically like managers now. They're managing fleets and fleets of agents. I know many of the engineers on my team basically have 10 to 20 threads being pulled on at the same time. Obviously not all actively running Codex jobs, but just a lot of parallel threads. They're checking in on what they're doing. They're steering the agents in Codex and giving them feedback. And so their job has really changed from writing the code itself into being almost like a manager. In terms of where I think this will go one to two years from now, one metaphor that I always come back to here is actually from this programming textbook that I read back in college called SICP. I don't know if you've heard of it.
Structure and Interpretation of Computer Programs, so SICP. At MIT it was really popular, and it was actually the textbook for the intro programming course for a very long time. It has this cult following. It teaches you programming through a dialect of Lisp called Scheme, so it introduces you to functional programming; it's very mind-opening that way. But the thing that was memorable for me about that book, which I read in college, is that the very beginning of it describes programming as a discipline and draws a metaphor to sorcery. It says software engineers are like wizards, and programming languages are like incantations. You're issuing these spells, and these spells go out and do things for you. And the challenge is: what incantation do you have to say to make the program do what you want? And this book was written in the 1980s. So this is a while ago. And I think that metaphor has actually persisted over time, and it's playing out as we move into this new era of vibe coding, or just what software engineering will look like. Because programming languages were basically these incantations. They've changed over time. And the trend has been that it's gotten easier and easier to get the computer to do what you want via programming. I think the current wave of AI is probably the next stage of that evolution.
It is now literally incantations, because you can tell Codex, or you can tell Cursor, exactly what you want to do, and then it'll go do it for you. And I particularly like the wizard and sorcery analogy, because I think our current state is starting to move towards the sorcerer's apprentice, from Fantasia, where Mickey Mouse finds the sorcerer's hat and tries to do all these things. I think it's a really apt analogy because, one, it's really powerful now. These incantations you can do are extremely high leverage, but you have to know what you're doing, right? In The Sorcerer's Apprentice, the whole plot is that Mickey goes wild, the brooms go crazy, and everything's flooding. I think he literally sets the brooms off on a task and then goes to sleep. So it's vibe coding at its greatest. And then eventually the old sorcerer comes back and cleans everything up. And when I see engineers running these 20 different Codex threads at a time, there is some skill, some seniority, and a lot of thought that needs to go into this, because you want to make sure that the models aren't going off the rails. You definitely don't want to just completely go away and ignore the thing. But it's also extremely high leverage. A very senior engineer who's really proficient with these tools can now just do way more. And I think this is also what makes it fun. It literally feels like we're wizards. It feels like we're closer to making it feel like this magical experience where we're casting all these spells and having software do all these things for you. I was thinking of the sorcerer's apprentice exactly as the metaphor as you were describing that, so I'm glad you went there.
A previous podcast guest described it as: you have a genie that grants you wishes, and it's a useful frame because you have to be very clear about the wish you want. Like if you wish to be big. Yes, yeah. Or it might be like the monkey's paw type thing, where you got what you wanted, but what are the side effects? Yeah, yeah. I think that analogy is great. And the crazy thing for me is just the staying power of that book, SICP. It's called the wizard book; people call it the wizard book because that is the metaphor they weave throughout it. And we've basically reached that point now, which is really cool. There are two threads I want to follow here. One is, I've been hearing more and more about this stress that people feel when their agents aren't working. You fire off all these Codex agents, and then you have to keep standing on top of them. Oh shit, one's not working, I'm wasting time. Do you feel that? Do you feel that across your team at all? Yeah, yeah. I mean, it happens all the time. And I actually think this is where the interesting part of all of this lies right now, because these models aren't perfect. These tools aren't perfect. And we're still trying to figure out how to best interact with Codex, or with these AI agents, to get work done. We see this come up all the time. There's a particularly interesting team that we have internally. There's a team that's actually doing an experiment right now at OpenAI where they are basically maintaining a 100% Codex-written codebase. Normally you'll have the AI write code, but you'll obviously end up rewriting a lot of it, and you might need to double-check and change things. But this team is just fully Codex-pilled and leaning in entirely.
And they run into the exact problems that you're describing. Their challenge is: I want to get this feature built, but I can't get the agent to do it. Usually there's an escape hatch where you're like, all right, I'll roll up my sleeves and figure it out, and instead of using Codex I might use tab-complete in Cursor and things like that. But for the experiment, this team doesn't have that escape hatch. So then the challenge is: how do I get the agent to do this? And I actually think we're going to be publishing a blog post from some of our learnings here, because a lot of fascinating paradigms and best practices are falling out of this. One interesting thing we've noticed, and I don't know if this is what you feel, but we definitely feel it here, is that a lot of the time when the coding agent is not doing what you want, it's usually a problem with context, with the information that you've given it. The task is either underspecified or there's just not enough information available to the agent, to Codex, about how to do something. And so the challenge is to add documentation to work around this limitation, and basically encode more of the tribal knowledge that's in your head into the codebase: via code comments or code structure itself, or via text files like .md files, skills, or any other resources within the repository, so that the model can better do its task. There's a whole bunch of other learnings from this group, which I think are fascinating to explore. But yeah, removing that escape hatch of no longer using the AI has allowed them to start piecing together a lot of the problems that we'll have to solve if we really want to lean into agents.
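To make the "encode tribal knowledge into the repository" idea concrete, here is a hypothetical sketch of the kind of agent-facing context file described above. The file name, paths, commands, and rules are all invented for illustration; this is not taken from OpenAI's actual repositories:

```markdown
# AGENTS.md — context for coding agents (hypothetical example)

## Build and test
- Run `make test` before proposing a change; CI runs the same command.

## Conventions the agent cannot infer from the code alone
- `legacy/billing/` is frozen: patch bugs only, never refactor.
- Every new public endpoint in `api/handlers/` needs a matching entry
  in `api/openapi.yaml`, or the contract tests will fail.

## Known pitfalls (tribal knowledge)
- The rate limiter config is shared across services; do not change its
  defaults without flagging it in the PR description.
```

The point is the same one made in the conversation: anything a senior engineer would tell a new teammate verbally has to be written down somewhere the agent can read it.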
Another issue people run into: you talked about how people are shipping a lot more PRs if they're working with AI. Obviously, code review is becoming a bigger challenge. Is there anything you've figured out on your team to help speed that up, to make it scale, and not just create this terrible job for people where they're sitting there reviewing PRs all day? Yeah, one thing is that Codex reviews 100% of all of our PRs at this point. One really interesting thing that's happened is that the things we tend to hand to the models immediately tend to be the things that annoy us, the most boring parts of software engineering. It's also why it's more fun now, because we get to do more of the fun things. For me, speaking for myself, I really hated code reviews. It was one of the worst things for me. I remember in my first job out of college, at Quora, I was working on the news feed, so I owned the code for the news feed and was a reviewer for it. It was the central piece of code that everyone would touch. Every morning I'd log in and there would be 20 to 30 code reviews. I was just like, oh my goodness, I've got to get through all of these. I would procrastinate, and then it would grow to 50. So there were just a lot of code reviews. Codex is really good at reviewing code. One thing we've noticed is that 5.2 in particular has gotten extremely, extremely adept at reviewing code, especially when you steer it in the right direction. So for code reviews, yeah, we create a lot of PRs, but Codex reviews all of them. And it makes code reviews go from a 10-to-15-minute task to sometimes just a two-to-three-minute task, because you have a bunch of suggestions already baked in.
A lot of the time, especially for small PRs, you actually don't even need people to review. We trust Codex in this way; the original author just looks at Codex's review. The benefit of code review is having a second pair of eyes to make sure you're not doing anything dumb, and Codex is a pretty smart second pair of eyes at this point. So that's something we've heavily leaned into. The general CI process, and the post-push and deployment process, has also been heavily automated via Codex internally at this point. If you talk to a lot of engineers, the thing that annoys them the most is, after you've written your beautiful code, how do you get it into production? You've got to run through all these tests, fix lint errors, get through code review. There's a lot of automation you can do with Codex. So we've actually built some tools internally that help automate that process. If there's a lint error, it's a very easy Codex fix: it can just patch it and restart the CI process. We're trying to collapse that down to as little work for an engineer as possible, and the byproduct is that they can now merge and push out a lot more PRs. Codex writing the code, Codex reviewing its own code. I'm curious if you're open to using other models to review your model's work. Is that a path, or is it just: it's good enough, we don't need anything else? I will say there's definitely a circular thing here. And going back to The Sorcerer's Apprentice, you want to make sure you're not letting the brooms go crazy. So we're very thoughtful, I'd say, around which PRs are completely Codex-reviewed. Most people still obviously take a look at their PRs. It's not like it's going to zero; it's more like going from 100% attention to 30% attention, which helps things push through. In terms of multiple models, we obviously test a lot of models internally.
And so we have a lot of those. We use external models less. We think it's important to dogfood our own models and get feedback there. But there are also a lot of internal variants of models that you can use to get different perspectives, and we've found that to work quite well. Okay, so just to make sure we get a barometer of today's world at OpenAI in terms of AI and code, just so I understand, and then I want to move on to a different topic: 100% of code across OpenAI is written by Codex at this point? Is that the way to frame it? I wouldn't make the statement that 100% of code running in production today is written by AI, and it's kind of hard to do attribution there. But almost every engineer heavily uses Codex in all of their tasks at this point. So if I were to guesstimate, the vast majority of code at this point was probably authored by AI. Incredible. Okay, so there's a lot of talk, and we've been talking about the IC role, the work of an IC engineer. There's less talk about the changing role of a manager, especially an engineering manager. How has your life as a manager changed with the rise of AI? And where do you think managers... what's the role of a manager in the future? It's definitely changed less than for an engineer. There's no Codex for managers just yet. However, I use Codex quite a bit for some of the more manager-type tasks that I do. I'd say a couple of things are changing. There are some trends. I don't think it's changed that much yet, but I see trends, and if you play them out, you can see where a lot of this is going. One thing that's becoming increasingly clear is that Codex really empowers top performers to be a lot more productive.
And I think this may be true for AI more broadly, across society: the people who really lean in, who have high agency, who will really get good at these tools, will supercharge themselves. I'm noticing this now as well: the top performers end up being a lot more productive, and so you see a broader spread in team productivity. One thing I've always done as a management philosophy is to spend the majority of my time with top performers: make sure they're unblocked, make sure they're happy, make sure they feel productive and heard. I think this is even more true in an AI world, where your top performers are going to really be shooting ahead using these tools. One example is the team that's maintaining the 100% Codex-generated codebase; just letting them rip and seeing what happens has been something that's paid dividends. So that's one trend I'm seeing: for managers, spending even more time with top performers is likely going to continue. The other thing is more of an observation. My sense is that with a lot of these AI tools available to managers (less about writing code, more about things like ChatGPT with organizational knowledge), you're able to do research and understand organizational context a lot better. Another good example: we're doing performance reviews right now, and it's actually really easy to use ChatGPT with internal knowledge, hooked up to GitHub and our Notion docs and Google Docs, to get a really good sense of what a person has done over the last 12 months and write a little deep research report on it.
My sense is that managers will be able to manage much larger teams in this world. Kind of like how software engineers are managing 20 to 30 Codexes, my sense is these tools will allow people managers to be higher leverage and will allow them to manage teams of way more than the current best practice, which I think is 6 to 8 for software engineering. You see this applied to non-engineering domains like support or operations, where previously the size of a support team might be limited, but as you can pass off more things to agents, you can actually do more work and also manage more people this way. I think the same thing might happen for people management as well, especially in tech companies. And we're already seeing this. There are some teams where EMs are managing quite a few people, and they're doing it pretty adeptly because of some of these tools, where they can get higher leverage, understand what their team's doing, understand organizational context a little bit better, and operate in that way. I love this advice. The way you described it is you've always leaned into top performers and spent more time with them: unblocked them, made sure they're happy. The way Marc Andreessen, who was just on the podcast, phrased it is AI makes good people better and it makes great people exceptional. Yeah, yeah. And what you're saying here is just doing this more and more is probably the right move: spending more time with the best people on your team to unblock them and make sure they have everything they need. Yeah, a very good example right now is there's a group of engineers internally who are really Codex-pilled and are thinking through what the best practices are for interacting with this model. And that is just an extremely high-leverage thing for them to do. And so as a manager, I'm just like, yeah, go explore this.
Whatever best practices come out of this, we have to share with the org. We do all these knowledge-sharing sessions. We'll share documents and best practices everywhere. So things like that just elevate everyone. And so I view that as another example of this trend we're seeing, where the top performers really get exceptional. People just have a sense that this is big: AI is changing so much, the world is changing, it's going to be a huge deal. What do you think people aren't pricing in yet, in terms of what will change and where things are heading? What's an example of something where you think, okay, we're not realizing this yet? So one of my favorite phrases or ideas that has come out of this whole AI wave is the idea of the one-person billion-dollar startup. I actually think Sam may have been the first one to say it, but it's fascinating to think about, right? If people are so high leverage, at some point there will likely be a one-person billion-dollar startup. And while I think that's really, really cool, I think people aren't really pricing in the second- or third-order effects of this. Because what the one-person billion-dollar startup implies is that one person can have so much more agency and so much more leverage using these tools that it's just super easy for them to get everything done that they need to for their business to ultimately create something that's worth a billion dollars.
But I think there are a couple of other implications of this. One of them is, if it's possible for a person to create a one-person billion-dollar startup, it also means it's way easier for people to just create startups in general. One second-order effect of this, I think, is that there's going to be a huge startup boom, and a small, SMB-style boom, where anyone can build software for anything. You're starting to see this play out in the AI startup scene, where software has become a lot more vertical-oriented. Creating an AI tool for a specific vertical tends to work quite well, because you really lean into that particular domain and you really understand the use case for it. And so if you play out AI, there's no reason why you can't have 100x more of these startups. And so one world we might end up seeing is that in order to enable a one-person billion-dollar startup, there might be 100 other small startups building bespoke software that works extremely well to support other types of small, one-person billion-dollar startups. And so I think we might actually enter a golden age of B2B SaaS, and of software and startups in general. That's a really interesting trend, because as it gets easier and easier to build software, and easier and easier to run a company, you might just end up seeing way more of these startups. And so the way I've been thinking about it is: yeah, there might be a one-person billion-dollar startup, or there might be a hundred $100 million startups. There might be tens of thousands of $10 million startups. And as an individual, it's actually pretty great to have a $10 million business. That's enough for yourself for life at that point. And so we might really see an explosion in that way.
And I feel like people aren't really pricing that in. There's another kind of third-order effect to this. And again, with all of these, as you get to further- and further-out predictions, there's a lot of uncertainty. If we end up moving to a world where you have these kind of micro-companies building software that works for the one or two people who own the company and are working there, I think the startup ecosystem will change. I think the VC ecosystem will change. We might end up in a world where there's just a handful of big players offering platforms and supporting all of these startups. But the types of venture-scale-return startups that can really hundred- or thousand-X your investment might actually end up shrinking if you have a bunch of these smaller $10 to $50 million companies, which are not great for venture-scale returns but are great for the individuals, the high-agency individuals, who are now really leaning into AI to build these businesses for themselves. I love how many order effects we've been through. I want to hear the fourth-order effect now, Sherwin. I'm just joking. Fourth order is too gigabrain for me. I can't think that far ahead. It's like Inception, where everything gets slower every time you go a layer deeper. Okay, so the billion-dollar startup. I think about this a lot, because I'm not going to be a billion-dollar startup, because what I'm doing is not venture-scale in any way and not super high leverage. But just seeing how many support tickets I get for the most ridiculous things, it's hard for me to imagine one person doing it. I'm bearish on this billion-dollar startup, and I just want to share this thought, simply because of the support costs.
Even if AI is helping you, at a billion dollars, unless your ACVs are very high and you have very few customers, it's just dealing with support. And people could solve their own problems, but they're like, I'll email support, I'll ask about this thing. Just dealing with that is hard to scale, in my experience. So unless you have a bunch of contractors, and I don't know, does that count as a single-person company? I feel like it's very difficult to scale a billion-dollar startup and not have someone helping you with at least the support work. And AI, I think, will only take you so far. So I think that's true. And actually, my view on it is slightly different, which is I think that, you know, Lenny's podcast might end up becoming a billion-dollar startup. But what I think might happen is, instead of you being the one person who has to dispatch an AI to solve and fix those support tickets, there might be a whole smattering of other startups building software that's super tailored towards what you might need. And so there might be 10 or 20 startups that build support software for podcasts and newsletters. And each might be a one-person startup. It doesn't need to be a big one. And they might be able to code up this product very, very easily. They're able to build their own thing. And because it's so tailored and unique and hopefully useful for you, it might be something that you purchase as the one-person billion-dollar startup. I would buy that. I would buy that. Yeah, there's a question of what you in-house and what you outsource. And what I think might happen is, because the cost of writing software and building products is collapsing so much, you might end up outsourcing a lot of this, and in doing so, reducing the size of your company.
And so that's the world that I think might end up happening. Again, there's high uncertainty in what might play out here. But the end result still might be one person driving this high-leverage, massive company that might actually reach a billion dollars. I could see that. I also think about Peter of Clawdbot slash Moltbot slash OpenClaw, and just how barraged he is right now by all these asks and emails and pings and DMs and PRs. I'm curious, and he's not even making any money off this thing. Yeah, I can't imagine what it's like to be him right now. It must be absolutely insane. It's probably like the months after we launched ChatGPT, all that craziness, but as one man. He's coming on the pod, by the way, in a week. Oh, that's exciting. Yeah. Maybe the fourth-order effect is that distribution becomes increasingly important, because there are so many freaking things trying to get your attention. So people with an audience and platform, I think, become more and more valuable, which is good, good stuff. Okay, I wanted to come back actually to your management stuff. I really loved your insight about spending more time with top performers having been really successful for you. Just thinking about you as a manager of a team that is building the platform that powers basically the entire AI economy, like every AI startup is building on your API, clearly you're doing a great job. What other kind of core management lessons have you learned? What do you find is really important and key to your success as a manager of engineers and just people? Yeah, with a lot of the lessons that I've learned here, I don't know how specific they are to the OpenAI API or some of our enterprise products in particular. My management philosophy has obviously changed over time, but I think it's probably stayed the same more than it's changed.
One of these principles is what I talked about before, which is spending a lot of time with top performers. To be very concrete, it's more than 50% of your time with your top performers, maybe your top 10% of performers, and really, really trying your best to empower them. The way that I think about it comes back to this analogy of the software engineer as a surgeon, which comes from The Mythical Man-Month. It's actually funny. I pull it from the book, but in the book, they describe this world, and I think they were predicting the future, because the book was written in the 70s. They said that software engineering might end up moving into a world where software engineers are like surgeons. In a surgery room, there's one person doing the work. There's the one person cutting and doing all the surgery, and everyone else in the room is there to just support them, right? It's the nurse and the assistant, the resident and the fellow. And the surgeon's like, I need a scalpel, and they hand them a scalpel. And then they're like, I need this tool and this machine, and they'll bring it over. Everyone's there to support the one surgeon. And so The Mythical Man-Month actually predicted that that is the direction software was going to go. I don't think that's exactly played out; it's much more collaborative, and it's not only one person doing the work. But I've always really liked that analogy.
And that analogy is actually what I strive to emulate in my own management philosophy. Software engineering isn't really like surgery, where it's just one person doing work. But the way I like treating the people on my team, and the way that I act as a manager, is I want to empower them and make them feel like they're a surgeon, insofar as making sure that I'm supporting them and making sure they have everything that they need to do their work. It should feel like they have an army of people supporting them, looking around corners, and giving them everything that they need, when it's really just me as the manager. And so the example that I give is that looking around corners and unblocking people, especially from an organizational perspective, is extremely, extremely useful. And going back to the AI conversation, it's even more important nowadays, right? If people are just cranking out PR after PR, the main thing bottlenecking progress and shipping something tends to be organizational or process-oriented. And if you as a manager can look around corners and unblock the team, if the surgeon needs a scalpel but the manager already has a scalpel ready for them, that's the best-case scenario. That's the way that I approach management, and especially engineering management. And so that's something that's really, really stuck with me over time. And even though software engineers aren't exactly surgeons, that metaphor has stayed in my mind through the rest of my career. I love that. And I wonder if that's something AI can help with: looking around corners and predicting, hey, this engineer is going to be blocked by this decision, we need to figure this out. Yeah, that's actually a really good point.
I haven't tried this yet, but I wonder what would happen if I asked ChatGPT hooked up to company knowledge: what are the active blockers? Look through all the Notion docs, maybe Slack messages, it's probably in Slack somewhere. What are the active blockers on my team, and is there something I can do to help? Now, that's very interesting. I have not thought about that, but you're right. We just had an insight right here. Yeah, yeah, yeah. And I think even more interestingly: what do you anticipate will be a blocker for this engineer or this team in the coming months? Yeah, you ask the model, you ask the AI, to do the second- and third-order things. Anticipate that, and anticipate what the blockers will be next month too. I think we've got a good idea right here. Yeah, yeah. This episode is brought to you by Datadog, now home to Eppo, the leading experimentation and feature flagging platform. Product managers at the world's best companies use Datadog, the same platform their engineers rely on every day, to connect product insights to product issues like bugs, UX friction, and business impact. It starts with product analytics, where PMs can watch replays, review funnels, dive into retention, and explore their growth metrics. Where other tools stop, Datadog goes even further. It helps you actually diagnose the impact of funnel drop-offs and bugs and UX friction. Once you know where to focus, experiments prove what works. I saw this firsthand when I was at Airbnb, where our experimentation platform was critical for analyzing what worked and where things went wrong. And the same team that built experimentation at Airbnb built Eppo. Datadog then lets you go beyond the numbers with session replay. Watch exactly how users interact, with heat maps and scroll maps, to truly understand their behavior.
And all of this is powered by feature flags that are tied to real-time data so that you can roll out safely, target precisely, and learn continuously. Datadog is more than engineering metrics. It's where great product teams learn faster, pick smarter, and ship with confidence. Request a demo at datadoghq.com slash Lenny. That's datadoghq.com slash Lenny. Okay, I'm going to shift to talking about the API and the platform that you all build. So you work with a lot of companies implementing your API, your platform, building on your tools. You told me that you find that a lot of companies actually have negative ROI on their AI deployments, which I think is what a lot of people read about and feel and think. And it's interesting that you're actually seeing that. What's going on there? What are they doing wrong? What's happening in the world of AI deployments and ROI? Yeah, so to be clear, I don't explicitly see quantitative numbers around this. It's actually really hard to measure these things. But especially from observing some companies trying to do AI, I would not be surprised if a lot of AI deployments are actually negative ROI. Part of this, too, is I think there's a general sentiment from folks around the country, basically outside of tech, that AI is being forced onto them. And I think part of that is probably a symptom of some negative-ROI AI deployments. A couple of things I've observed around this. One thing, and I come back to this again and again, is I think we in Silicon Valley just forget that we live in a bubble.
Twitter, or X, is a bubble. Silicon Valley is a bubble. Software engineering is a bubble. Most people in the world, most people in the U.S., are not software engineers, are not very AI-pilled, are not following every single model release. And so they're just highly out of the loop on how to use this technology. We always talk about all these best practices for Codex, all these Codex-pilled people within OpenAI. I'm sure everyone on X who posts is a crazy power user of these AI tools. You know, they lean into skills, they lean into agents.md. MCPs. Yes, yeah, all of that. And when I talk to some of these companies, and I talk to the actual employees using these tools, it's the most basic things that they're trying to do. And they have very little understanding of exactly how this technology works. So that's one big observation for me: they're asking very simple questions of these things. They're really not pushing it just yet. And that ties into what I think more companies should do, or what a more ideal AI deployment setup looks like. And this is how we've run things within OpenAI, too. The companies where I think it's started to work really well have a combination of both top-down buy-in, where the C-suite says, we want to become an AI-first company, so there's buy-in, they buy the tools, they have exec support, and also bottoms-up adoption and buy-in. And what I mean by that is it has actual employees doing the work who are really excited about this technology and are willing to learn, evangelize, build best practices, and knowledge-share within the organization. We've seen this a lot internally.
Obviously OpenAI has always wanted to be a very AI-centric company, but when it really started taking off was with the introduction of Codex and these tools, where actual employees themselves could start applying it to their work. And I think you really need this, because at the end of the day, everyone's work is very different. It's very unique. Software engineering is different than finance, is different than operations, is different than go-to-market and sales. And so there are a lot of these last-mile intricacies of work that really need to be handled in a bottoms-up fashion. My sense is a lot of these AI deployments don't have bottoms-up adoption. It was an exec mandate, it's extremely top-down, and it's very divorced from what the actual work looks like. And as a result, you end up with a giant workforce that doesn't really understand the technology and is like, I know I'm supposed to use this, and maybe it's on my performance review too, but I'm not sure what to do. And they look around, and no one else is doing it. There's no one else to learn from. And so my recommendation for companies pushing this is to find, or maybe even staff, a full-time team internally, a kind of tiger team, that can explore the full extent of the capabilities, apply it to specific workflows, do the knowledge sharing, and create excitement among folks who might want to use this technology. Because in the absence of that, it's actually very difficult to pick up. And who would you put on this tiger team? Is it engineer-led, do you find in your experience? Is it a cross-functional sort of team? Yeah, it's interesting. A lot of companies don't have software engineers. And so the pattern I've seen is it tends to be these software-engineering-adjacent people, basically technical people who are not software engineers.
I think those are the ones who tend to get most excited around this. It's maybe the support team operations lead who doesn't code but loves using these tools and is an Excel wizard or something. So it's technical-adjacent or coding-adjacent, and pretty technical. Those are the kinds of people I've seen in these companies who really light up and get excited around this. And you can usually build a team around that. But yeah, it's oftentimes not software engineers. Software engineers, I think, will understand this, but not every company has software engineers. It's actually kind of a rarity. They're hard to find. They're expensive. And so it's these other types of folks. What I'm hearing is the anti-pattern is top-down: the CEO and exec team are just like, we are going to go AI-first, we're going to lean into AI, everyone's going to be judged on their performance using AI tools, on how much your productivity is increasing thanks to AI. And when it's just top-down like that, without creating a team that is bottom-up, spreading the gospel, you find that doesn't work. Yeah, yeah, exactly. And the advice is to find the people that are most excited. And instead of having them spread out through the organization, what you find works is to create a little AI evangelist team that finds ways to use it and spreads it across the work. Yeah. I mean, another way to think about it, hearing you play it back to me, and tying back to my own management philosophies, is to find the high performers in AI adoption and empower them. Let them build hackathons, let them hold seminars, do knowledge sharing, and create the seeds of excitement internally. Okay, amazing. There are a couple of hot takes I want to hear from you, things that I've seen you talk about and share.
One is you've shared that talking to customers and listening to customers is not always the right strategy in AI, and it might often lead you astray. I don't know if it's that hot of a take. The main thing here is, obviously you should talk to your customers. You still talk to customers. It's just that in the AI field, especially over the last three years I've spent working on the API and seeing all of that evolve, the field and the models themselves are changing so, so quickly. They tend to disrupt themselves, especially around the tooling and scaffolding space. There's this quote that I read earlier this week in a post on X by this guy named Nicholas, the founder of a startup called Fintool, where he was sharing a lot of the best practices he has learned building AI agents for financial services. And he had this phrase that I thought was really good, which is: the models will eat your scaffolding for breakfast. If you rewind back to 2022, right when ChatGPT launched, these models were pretty raw, and there was all this product scaffolding, especially in the developer space, to basically try to steer the model, to build scaffolding around it to get it to do what you want. Agent frameworks, vector stores, which I think were really popular back then, and just a whole smattering of tools here. And as you've seen the field play out, the models have changed so much and gotten so much better that they ended up, yeah, literally eating some of the scaffolding. And I think this is even true today. As the article from Nicholas points out, the current scaffolding that's fashionable is skills and file-based context management.
I could see a world where at some point that's no longer useful, where the model can actually manage all of that itself, or, it's hard to predict, but we might move on to some new paradigm where you no longer need this file-based skills-type thing. You have literally seen this play out, right? The agent frameworks, I think, are a little less useful now. There was a period of time, around 2023, where we thought vector stores were going to be the main way for you to bring organizational context into the models. And you needed to vectorize and embed every bit of your corpus, and then you needed to do all this work to figure out the vector search, to optimize it, to pull out the right information at the right time. All of that is scaffolding because the model was not good enough. And it turns out, as the models get better, a better approach is actually to take out a lot of that logic, trust the model, and give it a set of tools for search. It doesn't need to be a vector store. You could hook it up to any type of search. It could literally be files on a file system, like skills and agents.md, to steer it as well. Obviously, there's still a place for vector stores. I know a lot of companies are still using them. But the entire scaffolding around that, building an entire ecosystem around that, and assuming that's the only scaffolding that you need, has really changed. And so, tying this back to why you don't always have to listen to your customers: because the field is changing so much, at any point in time a lot of people are stuck in a local maximum. And if you just blindly listen to your customers, they'll say, yeah, I want a better vector store, I want a better agent framework for this.
And if you had only chased down that path, it would have led you to build something that, again, is the local maximum. Whereas as the models get better, we've had to reinvent and rethink the right abstractions and the right tools and frameworks to build around these models. And the cool slash exciting slash crazy annoying part is that it's a moving target. So, yeah, the current smattering of tools and frameworks will likely need to evolve and change pretty significantly over time as the models get smarter and better. But that is just the nature of building in this space. I think that's what makes it exciting. But it also means that when you talk to customers, you need to balance the exact feedback that they want with where you think the models are going and where you think things will trend over the next one or two years. It's interesting how this echoes the bitter lesson, this big lesson that AI and ML folks learned, which is the less you overcomplicate things, the less logic you add on top of the machine learning, the more it'll be able to scale and grow. Take it all away and let it just compute, basically. Give it more power to get smarter on its own. Yeah, there's literally a version of the bitter lesson applied to building with AI, where we were trying to architect all this stuff around the models, and it turns out the models will just eat it all away. And honestly, the OpenAI API team has been guilty of this, where we took some left and right turns when we shouldn't have. But yeah, the models keep getting better, and we're all learning the bitter lesson day in and day out. So what would be the key takeaway for folks building on, say, the API, or just building agents and having to build a little bit of this scaffolding for now? What would be the advice?
My general advice, and I've been giving this to people for a while and I think it's still true today, is make sure you're building for where the models are going and not where they are today. It's clearly a moving target. And a lot of the companies and startups that I've seen really, really do well build a product for an ideal capability that is maybe 80% of the way there today. They end up with a product that kind of works, but it's just almost there. Then as the models get better, suddenly it might click, and their product is now incredible: maybe it barely works with o3, then suddenly it works with 5.1 or 5.2, and that unlocks it. They're building these products with the model capability improvements in mind. And with that, you end up creating an experience that's way better than if you had assumed the models were static in the first place. So that'd be my general advice: build for where the models are going and not where they are today. You end up building a better product. You may need to wait a little bit, but the models are getting so much better so quickly that you often don't need to wait that long. So let's follow that thread. In the next six to 12 months, where is the API heading? Where's the platform heading? Where are the models heading? As much as you can share, I know there are a lot of secrets here. What are you most excited about, or what do you think people should start to prepare for? I mean, the obvious one is how long a task these models can do coherently. There's the METR benchmark, which tracks software engineering tasks and how long a task these models can do 50% of the time, 80% of the time. I think we're at something like multi-hour software engineering tasks being doable by these frontier models 50% of the time.
And then I think 80% is something like just under an hour. But the sobering thing about that chart is they plot all the previous models on it as well, so you can really see the trend. That's something that I'm really excited about. I actually think products today really optimize for tasks that the model can do in minutes at a time. Even Codex and the coding tools, I'd say, live in the CLI, you're seeing them be interactive, and they're really quite well optimized for at most 10-minute-type tasks. I have seen people push Codex to the limit with multi-hour-long tasks, but I think that's more the exception. But if you follow this trend, I think in the next 12 to 18 months we could see models that can do multi-hour-long tasks very, very coherently. At some point it might reach six-hour or day-long tasks, where you dispatch it and have it do things on its own for a while. The types of products you build around that will look very different. You want to give the model feedback. You obviously don't want it to completely run wild for a day. Maybe you do, but you probably don't. And then the universe of things you can have the model do really expands. So that's something that I'm really, really excited about seeing. Another thing over the next 12 to 18 months where I think it'd be really cool is improvements in the multimodal models. And by multimodality, I'm mostly thinking about audio here. The models are pretty good at audio, and I think they're going to get a lot better at it over the next six to 12 months, especially the native multimodal models, the speech-to-speech ones. I think there's also interesting work being done around new types of models and architectures on the multimodal audio side as well.
But audio, especially in the enterprise and in a business setting, is still a hugely underrated domain. Everyone talks about coding; it's all text. But we're talking in audio. A lot of the world's business is done via audio. A lot of services and operations are done by talking. So I think that area is going to look very exciting in the next 12 to 18 months, and there will be even more unlocked in what we can do with audio models. Amazing. So, quick summary: expect agents and AI tools to run longer, with that trajectory continuing, and audio and speech becoming a bigger deal, more first-party, native, and core to the experience. Yeah. Extremely cool. Okay, I want to go back to another hot take I've seen you discuss. You're very bullish on business process automation as an opportunity in the world of AI. Talk about that. Yeah, this goes back to what I said previously, which is that we live in a bubble in Silicon Valley. A lot of the work we're used to, software engineering, product management, building products, is very differently shaped from the work that runs our entire economy. I see this constantly when I talk to customers. If you talk to any company that's not a tech company, there are a lot of business processes. What I mean by this is that I generally delineate it like so: software engineering is open-ended knowledge work. This is why tools like Codex tend to be quite good, because they're exploring and you're giving them open-ended things. But software engineering is fundamentally pretty open-ended and not very repeatable, right? You build a feature; you're not trying to build the exact same feature over and over again. A lot of tech jobs are in that space. Data science is kind of in that space as well.
Even some of the strategic finance stuff. But as you move further and further away from software engineering and what is core in tech, a lot of jobs are just business processes. They're repeatable operations that some manager at a company has iterated on. There's usually a standard operating procedure, and you don't want to deviate from it that much. In software engineering, the ingenuity is in deviating, but a lot of the work being done in the world is actually just running through these procedures and operations. If I call a support line, they're running through one of these. If I call my utility company, there's a set of processes and things they can and cannot do for me. So I'm just extremely bullish on this general category, and I think it's underrated because it's so different from what we think about in Silicon Valley that people tend not to think about it. How can we apply AI and the tools and frameworks we have toward business process automation, toward automating and easing repeatable business processes, with high determinism, fully integrated with business data, business decisions, and the different systems within an enterprise? How can we actually make that process better? I think there's a lot of opportunity and a lot of work to be done in that area, and we just don't talk about it because it's a little less in our wheelhouse. So your take here, just to make sure I fully understand it, is that there's a much bigger opportunity outside of engineering for AI to impact the productivity of companies, and also the jobs of the folks doing these repetitive, easily automated tasks. Impact jobs, and also just impact how work is done. So much of work is done in this way.
Basically, I talk to customers all the time, big enterprises, and they ask: how will AI transform my company? How will it run in a world with AI in 20 years? Software engineering is part of the story, but there's so much more on the business process side, and I actually think it might look even more different there. The work there is pretty substantial. It's actually interesting: on an absolute basis, I don't know if it's bigger or smaller than software engineering. Software is pretty huge and pretty expensive as well. But it is massive, and it's definitely bigger than you would think based on how people talk about it, or don't talk about it, on X. Okay, going in a slightly different direction. Having built the platform and the API, with people building on the API, the biggest question on people's minds is always: how do I not have OpenAI squash my idea, build their own thing, and destroy this market I created? What's the general philosophy for how startups should think about where OpenAI is unlikely to go? My general answer is that the market is so big and so massive that startups should just not overthink where OpenAI or the other labs are going. I've talked to a lot of startups that haven't worked out and startups that are doing really well. Every startup I've seen fizzle out did so not because OpenAI or a big lab or Google came to squash them, but because they built something that didn't resonate with customers. Whereas the ones that take off, even in very competitive spaces like coding, Cursor is huge at this point, it's because they built something that people really love.
So my general advice is: don't overly stress about this. Just build something that people love, and you will have a space in this. I can't overstate how big the opportunity is right now. The opportunity space in building with AI is so big that the Overton window of what is acceptable for VCs to do has completely changed. VCs are investing in competing companies left and right, because the opportunity is unlike anything we've seen before. And while that affects how VCs operate, from a startup perspective it's the most empowering thing in the world, because if you build something that some people really, really love, you will end up with a massively valuable business. That's why I tell people not to overthink it. The other thing that's important to remember, at least from an OpenAI perspective: one thing we've always held near and dear, which both Sam and Greg reinforce from the top, is that we fundamentally view ourselves as an ecosystem platform company. The API was our first product. We think it's really important for us to foster this ecosystem and continue to support it, not squash it. And if you look at the decisions we make, you can see this throughout. Every single model we release in one of our products gets released in the API. Even the Codex models, which are a little more optimized for the Codex harness, always find their way into the API, and all of our customers end up using them. We don't hold back on any of that. We think it's really important to keep our platform neutral. We don't block competitors; we allow people to have access to our models.
We've also recently been testing more of the Sign in with ChatGPT product as well. We want to foster this ecosystem; I think it's really important that we do. The general thinking here is that a rising tide lifts all boats. We might be an aircraft carrier, we're pretty big at this point, but we think it's important to raise the tide because everyone benefits, and I think we will benefit as well. Our API itself has grown pretty significantly because we act this way. So I'd really encourage people not to view OpenAI as this thing that'll shove people out of the way, and instead focus on building something valuable. We remain committed to providing an open ecosystem. Why is that important to OpenAI, this focus on building a platform and creating a way for people to build businesses? Has that been the vision from the beginning, that you want this to be a platform? It's been the vision from the beginning. It goes back to our charter, actually, our mission. The OpenAI mission has always been, one, to build AGI, and we're obviously doing that. The second part is to spread its benefits to all of humanity, and the key phrase there is "all of humanity." Obviously ChatGPT is trying to do this; we're trying to reach the whole world. But very early on, and this is why we launched the API back in 2020, really early, we realized we won't be able to reach all of humanity as one company. Reaching every corner of the world is a pretty deep problem.
So we actually feel that in order to fulfill our mission, we need some platform-style offering where we can empower other people to build, say, the customer support bot for podcasters and newsletter hosts, because we're not going to be able to do it ourselves. We've largely seen this play out with the API. This is why we talk to so many of our customers and really love seeing the diversity of things built on it. But yeah, it's been there since day one, because we view it as an expression of our mission. And you haven't even mentioned the app store you're launching, the ChatGPT App Store. Yeah. Is that under your umbrella, by the way, or is that a different org and team? It's a different team, under ChatGPT. We obviously collaborate very closely with them, and they built an Apps SDK in close collaboration with our team. But that's more within the ChatGPT umbrella. It's also another example of this, right? ChatGPT has these 800 million weekly active users who keep coming back. It's a great asset to have as a business, but it would be even better if we could somehow allow other companies to come in, take advantage of that, and build for this audience as well. Ultimately we think it will help us expand that group too. So it all comes back to the mission, and we find that being a platform, being open, tends to help here. Just that number, 800 million. And is that monthly? Weekly. Weekly! Yeah, it's crazy, nearly a billion people using it weekly. These are numbers we're just used to now, but that's insane. Unprecedented. Yeah, it's mind-boggling for me to think about from a scale perspective, honestly. The way I think about it is that 10% of the world, and it's growing, by the way, it's shooting up, comes to ChatGPT and uses it every day. Or sorry, every week.
At this point, I just want to double down on this point you're making. OpenAI's mission was to make AI available to all of humanity, and some people diss that: oh, it costs money. But there's a free version of ChatGPT that anybody can use that is not so different from the most powerful AI model that exists in the world. For free, not gated, anyone can use it. If you're a billionaire, there's only so much more you can get out of AI than what someone in a village in Africa can get. And I know that's always been really important to OpenAI. Yeah. That's why we've leaned into the health work, and education is going to be very interesting here too. The other insane trend is that the free model has gotten so smart over time. The free model back in 2022 was good for its time, but it's nothing compared to what you get today, because today you get 5.2. So raising the floor across the world is something we're really trying to do, and we view it as part of our mission. The flip side of this, by the way, talking about the billionaires: people love saying you're using the same iPhone that Steve, or sorry, Mark Zuckerberg is probably using, that the billionaires are using. For $20 a month, you're basically using the same AI that the billionaires are using. For $200 a month, you get the same Pro model that all the billionaires are using, though they're probably not using Pro for everything; they're probably using the Plus-tier models for their day-in, day-out.
So yeah, this democratization, this spreading of the benefit across the whole world, is something that's really meaningful to us and drives a lot of what we do. One last question, for folks thinking about building on the API, or realizing, oh wait, I could do cool stuff with OpenAI's models and APIs. What does your API and platform allow people to do? I know you can build agents on top of the platform; just talk about what you enable. Fundamentally, the API offers a bunch of developer endpoints, and these endpoints basically let you sample from our models. The most popular one right now is the Responses API. It's an endpoint optimized for building long-running agents, agents that will work for a while. At a very low level, you're basically giving the model text, the model will work for a while, you can poll it to see what it's doing, and then you get the model's response back at some point. That's the lowest-level primitive we have, and it's actually what a lot of people use; it's the most popular way of building on top of the API. It's super unopinionated, and you can do basically whatever you want with it. We've also started building more layers of abstraction on top to help people build some of these things. One layer up, we have the Agents SDK, which has also gotten extremely popular. It lets you use the Responses API, or other endpoints we have, to build what you might more traditionally think of as an agent: an AI working in a loop, possibly with sub-agents it delegates to. It builds up all this framework, all this scaffolding, actually. We'll see where this all goes.
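To make the two layers he describes concrete, here is a hedged sketch, not official documentation: the low-level Responses pattern (hand the model text, let it work, poll, read the response back) using the openai Python SDK's background mode, plus a toy version of the agent-in-a-loop shape the Agents SDK abstracts. The model name, the stubbed "model", and all function names below are illustrative assumptions, not the actual Agents SDK API.

```python
import time

def run_background_task(prompt: str, model: str = "gpt-4.1") -> str:
    """Low-level Responses API pattern: start a background response, poll it."""
    from openai import OpenAI  # lazy import so the sketch loads without the SDK

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.responses.create(model=model, input=prompt, background=True)
    while resp.status in ("queued", "in_progress"):
        time.sleep(2)  # poll the long-running task rather than blocking
        resp = client.responses.retrieve(resp.id)
    return resp.output_text

# Toy agent loop: a stand-in "model" that may delegate a subtask to a
# sub-agent before answering, bounded by a step budget as a guardrail.
def stub_model(task, history):
    if not history:
        return {"action": "delegate", "subtask": f"research: {task}"}
    return {"action": "finish", "answer": f"done: {task}"}

def sub_agent(subtask):
    # A sub-agent handles one delegated subtask and reports back.
    return f"result of ({subtask})"

def run_agent(task, max_steps=10):
    history = []
    for _ in range(max_steps):  # guardrail: the loop cannot run forever
        step = stub_model(task, history)
        if step["action"] == "finish":
            return step["answer"], history
        history.append(sub_agent(step["subtask"]))
    raise RuntimeError("agent exceeded its step budget")
```

The real Agents SDK replaces the stub with live model calls and adds handoffs, guardrail hooks, and tracing on top; the point here is only the control flow: a bounded loop that alternates between delegating and finishing.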
But it makes it a lot easier for you to build these kinds of agents: giving the agent guardrails, letting it farm out subtasks to other agents, orchestrating a swarm of agents. The Agents SDK allows you to do that. Above that, we've started building tools to help with the meta level of deploying an agent. We have a product called AgentKit, with Widgets, which are a bunch of UI components you can use to very easily build a beautiful UI on top of either our API or the Agents SDK, because a lot of these agents look very similar from a UI perspective. We also have a smattering of evals products, like the Evals API, where if you want to test whether your agent or workflow is working, you can test it in a quantitative way. So I view it as these layers, all helping you build what you want with our models, with increasing levels of abstraction and opinionation. You can use the whole stack and very quickly build an agent, or you can go as far down the stack as you want, to the Responses API, and build whatever you want because of how low-level it is. Sherwin, is there anything else you want to share or leave listeners with? Anything we haven't touched on that might be helpful before we get to our very exciting lightning round? The only thing I'd leave folks with is that the next two to three years are going to be some of the most fun in tech and in the startup world that we'll have in a very long time, and I would encourage people not to take it for granted. I entered the workforce in 2014. It was great for a couple of years, but then I felt like there was a period of five to six years where tech wasn't very exciting.
And then the last three years have been the most insanely exciting, energizing period of my career, and I think the next two to three years will be a continuation of that. So I'd encourage people not to take it for granted. I'm trying not to take it for granted. At some point this wave is going to play out and things will get a lot more incremental, but in the meantime we're going to get to explore a lot of really cool things, invent a lot of new things, and change the world and how we work. That's the main thing I'd leave folks with. I love this message, and I want to spend a little more time on it. When you say don't miss it, what do you recommend people do? Is it just build, lean in, learn, join a company building really interesting things? What's your advice to folks who don't want to miss the boat? I would just say engage with it. It's basically what you said: lean in. Building tools on top of this is part of the story, but so is just using the tools. You don't need to be a software engineer to lean into this; a lot of jobs are going to change here. So use the tools and understand the limitations of what they can and cannot do, so you can watch the trend of what becomes possible as the models improve. It's about getting used to this technology and getting familiar with it, instead of laying back and letting it pass you by. On the flip side of that, there's a lot of stress and anxiety around it: there's so much happening, how do I keep up? I've got to learn about the new Claude thing this week. Oh, God. What have you learned about this, given that you're at the center of it? How do you not get overly stressed and worried about missing things, and how do you stay on top of the news? Yeah.
So I'm personally a bad example of this, because I'm basically chronically online on X and our company Slack, so I end up absorbing a lot of it. What I will say, though, from observing other folks who are less addicted to this stuff than I am: a lot of it is noise. You don't need to have 110% of this pass through your mind. Honestly, just leaning into one or two tools, starting small, is already more than you need. The combination of the frenetic pace of the industry and X as a product creates this insane pace of news, which is honestly very overwhelming. The main thing is that you don't need to know all of it to really engage with what's happening right now. Even something as simple as: install the Codex client and play around with it. Install ChatGPT, connect it to a couple of your internal data sources, Notion, Slack, GitHub, and see what it can and cannot do. All of that is part of it. Amazing. Sherwin, with that, we've reached our very exciting lightning round. I've got five questions for you. Are you ready? Yeah, absolutely. First question: what are two or three books you find yourself recommending most to other people? I'll mention one fiction book and one nonfiction book. The fiction book, which I just finished reading and really recommend, is There Is No Antimemetics Division by qntm. I think he's an online author; I saw it being shared on X. It's a science-fiction book, and I basically devoured it in two days. It's super, super well written and fascinating: it's about a government agency that's fighting things that make you forget them.
It's a very smart, creative book, and fresh, honestly, in terms of source material, so I'd recommend it. It's also unintentionally hilarious. It's meant to be this sci-fi, almost horror-style book, but it made me laugh a couple of times. So that's the fiction book. For nonfiction, I'm going to cheat and recommend two. In the last year I've been reading a lot more about China and US-China relations, and two books that came out in the last year have been really eye-opening for me in that regard. The first is Dan Wang's book, Breakneck. That one was really, really good. I really liked his analogy that the US is the lawyerly society and China is the engineering society, with pros and cons to each. I read it and thought, hmm, it does seem like we're run by lawyers in the US. The other is Patrick McGee's book on Apple in China. It was super interesting. I'm a huge Apple fanboy; if you could see my desk right now, it's all Apple stuff.
One, it was fascinating learning about Apple's relationship to China, and two, it had a lot of inside information about Apple as a company that I found fascinating. It was quite a page-turner, and a very timely book as well. The antimemetics book sounds amazing; I'm buying it right now as you're talking. Yeah, I think it's only a couple hundred pages. I literally just finished it; it was so, so good. Okay, great tip. Favorite recent movie or TV show you've really enjoyed? That one's tough, because I have two kids and a busy job, so I really haven't had much time to watch TV shows. I will say, in the last couple of weeks I watched a few episodes. I'm actually a big anime guy, and there's a new season of this anime called Jujutsu Kaisen out; season three of JJK was really good. In general I'm a huge fan of Japanese anime. I think they create the most novel and unique plots and universes, ones that Western media has shied away from. But yeah, I haven't watched much beyond a couple of episodes of JJK recently. Extremely understandable in your role. Yeah. Favorite product you recently discovered that you really love? Okay, so I recently had to set up Wi-Fi and home networking, and I went all in on Ubiquiti routers and security cameras. I'd never heard of it before I had to do this; I'd always had a very simple setup. It's just such a well-built product. I don't know if you've used it, but it's basically the Apple of home networking: beautiful products. The thing that actually makes it extremely good, though, is that the software is good. They have a really great mobile app to help manage all of the home networking. So with Ubiquiti you can buy wireless routers, and you'll want Ethernet wiring throughout your house to use it.
But I actually think what makes it really good is the security cameras. If you have security cameras plugged into the Ubiquiti ecosystem, they have an incredible mobile app, an Apple TV app, and an iPad app for watching the live feed of your cameras. They're a little pricey, but not that pricey, and it's been an incredible product experience. All right, I went Eero, so I made a mistake. Eeros are pretty good too, but I'm fully converted to Ubiquiti at this point. Okay, good tip. Two more questions. Do you have a favorite life motto that you find yourself coming back to in work or in life? Yeah, the one I always repeat to myself is: never feel sorry for yourself. A lot of things are going to happen at work and in life, and reminding yourself to never feel sorry, and that you always have the agency to pull yourself up, is something I've had to tell myself a lot and something I repeat to a lot of other folks as well. Last question. In your previous life, you worked at Opendoor, where you led work on figuring out how much to pay for houses; you basically built the model that told the company, here's how much we'll pay for this house. What's a variable in the price of a house that you didn't expect to be really important? There were a bunch that were surprising; I'll list a couple of the most interesting ones. High-voltage power lines actually impact your price quite a lot. I didn't fully internalize this until I went to Dallas and observed that when your house sits next to one of these giant high-voltage lines, it's buzzing. Most buyers have families, and you don't want your kids near that. So that one really, really surprised me. That makes sense.
Yeah, and the other one, which was always really difficult for us to quantify, was floor plans. Yes, of course it's important, but quantifying what a good floor plan is, or what a really bad one is, was hard. We were doing all these things: how wide is the kitchen, what style of kitchen is it, where's the master bedroom. It was just really hard to quantify, but I remember floor plan was a big one, because we'd have a home that wouldn't sell, and our ops team would go in and say, yeah, that's a floor plan issue. How could you tell? You go inside and you just feel it; the floor plan feels off. So those were surprising. And the last one that was more impactful than I expected is general curb appeal, even the front door. I think there's a Zillow book on this, where front door replacement tends to be the highest-ROI improvement for homes. The feel as you walk up to the home as a buyer, what you're interacting with in the first moments at the house, I had underrated its importance. That is extremely interesting, and I love that you had to figure out how to do all this in code. Yeah, and I have a bunch of stories about floor plans. They're not digitized, so there's a handful of people who have paper floor plans of all these homes in Phoenix and Dallas. A lot of fun stories from the Opendoor days. Okay, Sherwin, thank you so much for doing this. This was incredible. Where can folks find you online, and how can listeners be useful to you? So I'm online on Twitter, on X; I'm just at Sherwin Wu. I mostly tweet about OpenAI, the API, and some
of the products we're launching. As for how folks can be useful to me: I love hearing about things people are building, so if you're working on a startup or hacking on an idea, reach out to me on X. I'd love to hear about what you're building and learn how OpenAI can help support you. Amazing. Sherwin, thank you so much for being here. Yeah, thank you, Lenny. Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at lennyspodcast.com. See you in the next episode. Thank you.