Lenny's Podcast: Product | Career | Growth

How Anthropic’s product team moves faster than anyone else | Cat Wu (Head of Product, Claude Code)

86 min
Apr 23, 2026
Summary

Cat Wu, Head of Product for Claude Code at Anthropic, discusses how the company ships products at unprecedented speed—moving from 6-month to 1-day timelines—by removing barriers to shipping, using research previews, and maintaining extreme focus on their AGI safety mission. She reveals the emerging PM skills needed in the AI era: product taste, understanding model capabilities, building evals, and the ability to iterate rapidly while maintaining quality.

Insights
  • The PM role is fundamentally shifting from long-term roadmap coordination to enabling rapid weekly feature launches; success now depends on removing shipping blockers rather than planning multi-quarter initiatives
  • Product taste—deciding what to build rather than how to build it—becomes the most valuable PM skill as code becomes cheaper; engineering background helps but isn't required if you can evaluate model capabilities
  • Anthropic's unifying mission around safe AGI enables faster decision-making across the org because teams willingly sacrifice individual product goals for company-level objectives, creating organizational alignment at scale
  • The character and personality of Claude (low ego, positive, competent) is not a cosmetic feature but core to product success; it makes Claude a better coworker and directly impacts user outcomes
  • Building automations to 100% reliability is critical; 95% automation creates more friction than value and requires significant iteration with model feedback to achieve production-ready quality
Trends
  • PM role convergence: engineers doing PM work, designers shipping code, PMs writing evals—role boundaries are dissolving in favor of outcome-based ownership
  • Research Preview as standard launch strategy: shipping unpolished features early to gather feedback reduces commitment and enables weekly launches vs. quarterly releases
  • Model-driven feature deprecation: as models improve, previously necessary product harnesses (like to-do lists) become optional, requiring regular pruning of the product surface
  • Token spend approaching salary costs: as models improve, knowledge workers delegate more tasks, increasing per-person token consumption each model generation
  • Agentic product complexity: as features ship daily, users struggle to keep up; companies need better onboarding and education (e.g., /powerup) to prevent feature fatigue
  • Internal tool democratization: AI makes custom app building accessible; teams build personalized software for specific workflows instead of forcing fit into generic tools
  • Eval-driven product development: quantifying success through evals (even 10 well-designed ones) is becoming standard PM practice for AI-native products
  • Multi-task orchestration as next frontier: moving from single-task success to managing dozens/hundreds of parallel agent tasks, requiring new UX and verification patterns
  • Character/constitution as competitive moat: Claude's personality drives adoption and user satisfaction more than raw capability; this is a defensible, hard-to-replicate advantage
  • Mission-driven hiring and decision-making: companies with clear, non-commercial missions (safety, alignment) can make faster trade-offs and attract mission-aligned talent
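The eval-driven trend above can be sketched as a tiny harness: a handful of well-designed checks scored as a pass rate. Everything here (the prompts, the checks, the stub model) is illustrative, not from the episode; in practice `stub_model` would be replaced by a real model API call.

```python
# Minimal sketch of eval-driven product development: each eval pairs an
# input with a predicate over the model's output, and success is the
# fraction of checks that pass. All names and cases here are hypothetical.

from typing import Callable

EVALS: list[tuple[str, Callable[[str], bool]]] = [
    ("Summarize: the cat sat on the mat", lambda out: len(out) > 0),
    ("Return the word 'ok'", lambda out: "ok" in out.lower()),
]

def stub_model(prompt: str) -> str:
    """Placeholder model; swap in a real API call when wiring this up."""
    return "OK" if "ok" in prompt.lower() else "a short summary"

def run_evals(model_fn: Callable[[str], str]) -> float:
    """Run every eval against model_fn and return the fraction that passed."""
    passed = sum(check(model_fn(prompt)) for prompt, check in EVALS)
    return passed / len(EVALS)

print(run_evals(stub_model))  # 1.0 when the stub satisfies both checks
```

Even a list of ten such checks gives a team a shared, quantified definition of "works out of the box" to track from week to week.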
Companies
Anthropic
Cat's employer; company shipping Claude Code and Cowork at unprecedented pace with focus on safe AGI mission
OpenAI
Competitor mentioned as having first-mover advantage but now being outpaced by Anthropic in product velocity
Cursor
Mentioned as company using WorkOS for enterprise features; also building AI-native coding products
Vercel
Mentioned as company using WorkOS for enterprise features and building in AI-native space
Replit
Mentioned as company using WorkOS and building AI-native developer tools
Slack
Described as core OS of Anthropic; essential communication infrastructure for rapid shipping and coordination
Waymo
Mentioned by Cat as favorite recent product discovery; autonomous vehicle service improving productivity
Figma
Mentioned as source for design system integration via MCP for Claude-generated design assets
Salesforce
Mentioned as data source integrated into internal sales tools for customer context
Gong
Mentioned as data source for sales team's custom Claude-built app for customer context
Bedrock
AWS service mentioned as alternative model platform some customers use instead of Claude
Vertex AI
Google's AI platform mentioned as alternative some customers use instead of Claude
People
Cat Wu
Guest discussing Anthropic's rapid shipping velocity, PM role evolution, and AI product strategy
Lenny Rachitsky
Podcast host conducting interview with Cat Wu about product management and AI
Boris
Co-creator of Claude Code; ships PRs at extreme velocity; sets product vision 3-6 months out
Amol
Previously appeared on podcast; shared insight that engineers moving fast creates need for more PMs
Ben Mann
Previously on podcast; discussed Claude's character and constitution as core to product success
Diane
Leads research PM team responsible for gathering customer feedback and shepherding model launches
Sarah
Leads docs; part of the tight launch process that turns around feature announcements within one day
Alex
Leads PMM team; coordinates with engineering and docs for rapid feature launches
Amanda
Molds Claude's character; excellent at evaluating model personality and defining success criteria
Sid
Designed to-do list feature for Claude Code to help model track multi-step refactoring tasks
Karpathy
Tweeted about divide between AI skeptics and power users; referenced in discussion of AI adoption
Quotes
"I think it is very hard to be the right amount of AGI-pilled. It's very easy to build the product for the super AGI-strong model. The hard thing is figuring out, for the current model, how do you elicit the maximum capability?"
Cat Wu, early in episode
"We want to remove every single barrier to shipping things. The timelines for a lot of our product features have gone down from six months to one month and sometimes to even one day."
Cat Wu, opening segment
"As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write."
Cat Wu, mid-episode
"I think mission means that teams are willing to make sacrifices that hurt their own goals and their own KRs in service of Anthropic's goals. If Claude Code failed, but Anthropic succeeded, I would be extremely happy."
Cat Wu, discussion of mission alignment
"There's a lot of value in first principles thinking. If you know what you're optimizing for and you have strong first principles, then you can normally deduce what the right course of action is and be able to clearly articulate that to all stakeholders. And then you should just do it."
Cat Wu, lightning round
Full Transcript
I think it is very hard to be the right amount of AGI-pilled. It's very easy to build the product for the super AGI-strong model. The hard thing is figuring out, for the current model, how do you elicit the maximum capability? I've never seen anything like the pace folks at Anthropic are shipping at. We want to remove every single barrier to shipping things. The timelines for a lot of our product features have gone down from six months to one month and sometimes to even one day. You're interviewing hundreds of PMs and you just keep feeling like they're approaching it very incorrectly. The PM role is changing a lot. It's changing really quickly. The thing that is extremely important for building AI native products is iterating so quickly, figuring out a way for you to actually launch features every single week. What do you think are the emerging skills PMs need to develop? It comes back to product taste. As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write. Today, my guest is Cat Wu, head of product for Claude Code and Cowork at Anthropic. Cat is at the center of everything that is changing in AI and product and building, and she and her team are building the product that is most changing the way that we all build our products. She is so full of insights and wisdom and lessons. This is an episode you cannot miss. Before we get into it, don't forget to check out LennysProductPass.com for an insane set of deals available exclusively to Lenny's newsletter subscribers. With that, I bring you Cat Wu. Cat, welcome to the podcast. Thanks for having me. I have so many questions. I'm so excited to have you on this podcast. I want to start with giving people an understanding of your role alongside Boris. Everybody knows Boris. His episode is the number one most popular episode on this podcast, no pressure. He created Claude Code.
He leads the Eng team, ships a bazillion PRs a day from his phone, just like I don't even know what the number is anymore. I think people don't give you enough credit for the success that Claude Code has had and Cowork and all the things you all are building. Help us understand your role on the team, how you work with Boris, how you split responsibilities. Just like what does the PM role look like on the Claude Code team? I feel very lucky to work with Boris. He's been an amazing thought partner. He's our tech lead. He's very much the product visionary. And he is great at setting like, this is what the product needs to be in like three months, six months from now. This is like what the AGI-pilled version of the product is. And a lot of my role is figuring out, okay, what is the path from where we are today to like that vision three to six months from now? And I spend more of my time on the cross-functional. So making sure that our marketing team, sales team, finance, capacity, etc. are like bought in on the plan and that we're all rowing the same direction. And that once the feature is ready, that there aren't any blockers to shipping it. I think in many ways it works well because we kind of like mind meld, but it is actually like remarkably blurry of a line. Like I think we're like 80% mind meld. And then there's like this 20% of things that like, maybe I care a lot more about them for us. So like I'll drive those and like 20% where he cares a lot more than me. And he just like drives those. This episode is brought to you by our season's presenting sponsor, WorkOS. What do OpenAI, Anthropic, Cursor, Vercel, Replit, Sierra, Clay, and hundreds of other winning companies all have in common? They are all powered by WorkOS. If you're building a product for the enterprise, you've felt the pain of integrating single sign-on, SCIM, RBAC, audit logs, and other features required by large companies.
WorkOS turns those deal blockers into drop-in APIs with a modern developer platform built specifically for B2B SaaS. Literally every startup that I'm an investor in that starts to expand upmarket ends up working with WorkOS. And that's because they are the best. Whether you are a seed-stage startup trying to land your first enterprise customer or a unicorn expanding globally, WorkOS is the fastest path to becoming enterprise ready and unblocking growth. It's essentially Stripe for enterprise features. Visit WorkOS.com to get started or just hit up their Slack where they have actual engineers waiting to answer your questions. WorkOS allows you to build faster with delightful APIs, comprehensive docs, and a smooth developer experience. Go to WorkOS.com to make your app enterprise ready today. Something that you shared actually before we started recording is the fact that you're interviewing hundreds of PMs all the time. Like if I had a nickel every time someone asked me for an intro to someone at Anthropic to go work at Anthropic as a PM, I'd have 30 billion in ARR. It's just like the number one place people want to go work at. So I can only imagine how many PMs you're interviewing. You told me that you're just seeing people doing it wrong, the way they're approaching what they think it takes to be a successful AI PM. Talk about what you're seeing and what people need to understand about what it takes to be successful these days? I think before AI, technology shifts were a lot slower. So you could plan on the six to 12 month time horizons. And because you were shipping features at a bit of a slower rate, there was a lot more emphasis on coordinating with all the other partner teams to make sure that they're shipping features that unblock your features, because code at that time was very expensive to make.
I think now with AI and with how much that has accelerated engineering and with how quickly the model capabilities are improving, the timelines for a lot of our product features have gone down from six months to one month and sometimes to one week or even one day. And with that, we actually need to make sure that products ship quite quickly. And what that means is as a PM, there should be less emphasis on making sure that you're aligning your like multi-quarter roadmaps with your partner teams and more emphasis on, okay, how can we figure out the fastest way to get something out the door? How can we figure out how to make like a concept corner of our product suite where we can just, an engineer has an idea or a PM has an idea. And like by the end of the week, we're able to get it into our users' hands. I think the PMs who do the best on AI native products are the ones who can figure out how can I like shorten the time from having this idea to actually getting the product in the hands of users and help define what are the most important tasks that need to work out of the box for my product. So what I love about this is what you're saying is just like people haven't grasped how fast they need to move and how much of the job now is just moving, is helping the team move fast. What helps do that? What do you do, what does your PM team do to help them move this fast other than have access to the most advanced models? I think the first thing is to set clear goals. Because LLMs are so general, that actually creates a lot of ambiguity in who we're building for, what problems we're trying to solve, what the top use cases are. And so I think a great PM is able to say, okay, our key user is professional developers. The main problem that we want to solve for this feature is maybe there's like too many permission problems and people are feeling fatigue. And the use case is we want professional developers at enterprises to safely get to zero permission prompts.
And that actually sets a pretty clear goal because it rules out a lot of potential approaches for reducing permission prompts so that people can get a lot more done with one prompt. And then I think the second thing that's very important is figuring out some repeatable process for getting these features shipped. So for Claude Code, what we do is we actually ship almost all of our features in Research Preview. We clearly brand this when we ship something so that users know that this is an early product. This is just an idea. This is just something that we're trying to get feedback on and iterating on, and that this might not be supported forever. And what this does is it reduces our commitment for shipping something. We can just get something out in a week or two. And the third thing that a PM should do is help create the framework for the team so that they know when to pull in cross-functional partners and what those cross-functional partners' expectations are. So, for example, we have a really tight process between engineering, marketing, and docs. So when engineers have a feature that they feel is ready and that we've dogfooded internally, they post it in our evergreen launch room. And then Sarah, who leads our docs, and Alex, who leads PMM, and Tarek, and Lydia, and DevRel, just jump in and can turn around the marketing announcement for it the very next day. And because we have this really tight process, it lowers the friction for any engineer to ship something. PM is the role that should be setting this up. How do PRDs fit into this? The fact that you said that goals are a really important part of it, just being aligned on what does success look like? Who is this for? Who is this not for? Are you writing PRDs? Is it just a couple of bullet points? How has that evolved in the world of AI? So there's two things that we do. One is we have very rigorous metrics and we do metrics readouts with the entire team every week.
The goal of this is to make sure that everyone deeply understands all the facets of our business, what our key goals are, how they're trending and what drives them. The second thing that we do is we have this list of team principles and this includes who our key users are, why those are our key users. And the reason that we articulate all of this is so that everybody on the team feels like they understand how our business works. They understand what's important to us and what we're willing to trade off. And it lets people make decisions by themselves without feeling like they're blocked on PM or any other stakeholder. I love how so much of this is like, OK, we still need PMs in the future. And there's so much talk of like, why do we need PMs? We're just going to ship and build with engineers leading. Oh, we actually do PRDs sometimes. So I think for features that are like particularly ambiguous, it does help to write out just a one pager on what the goals are, what the delightful use cases are, what the failure modes currently are that we need to fix. And there are occasionally some projects, especially things that require heavy infrastructure, that do take many months. And for those situations, we do write PRDs still. I want to drill a little bit further into just how you're able to move so fast. I've never seen anything like the pace folks at Anthropic are shipping at. Like someone made this calendar of launches across Anthropic and it was literally every day there was like a major feature or product. So one question people had online is you guys just launched this, not launched, but built this incredible model, Mythos, that is still in preview because it's so powerful. People are a little afraid of what it can do. Have you guys been using this? Is this part of the reason you've been able to move so fast? We've been moving pretty fast for several quarters now. So I think it's not fully Mythos. Mythos is an incredibly powerful model.
We do use the models internally. And I think this has increased our rate of shipping a little bit, but I don't think it explains the bulk of the increase. I think a lot of it is the process and the expectation on the team. So we're very low on process. We want to remove every single barrier to shipping things. We want to make sure every single person on the team feels empowered to take their idea from just an idea to like out in the world in less than a week, sometimes even in a day. Cool. Oh, man. What an advantage to have the best model and also be building product at this speed. We are very lucky to be able to work with the frontier models. Oh, my God. What an awesome advantage. Just like build a thing and then use it and then accelerate faster. It's so interesting. There's a couple of these other side things I want to just kind of go on these side quests on this conversation. There's so much happening with Anthropic, and I'm so curious to get your insight. One is, a week ago or so, the whole source code of Claude Code leaked. Somebody got it out there. I think it was a mistake someone made. Is there anything you can comment there, just like what happened, what went wrong, what should people know? So we immediately looked into this when we saw it. We realized that this was the result of human error. There was a human working with Claude to write a PR. This was just an update to how we release our packages. And it actually went through two layers of human review. And so this was a result of human error. And we've hardened our processes to make sure that it doesn't happen in the future. Is this person still at Anthropic? Are they doing all right? Yes, yes. It's a process failure. And the most important thing is to just learn from it and to add more safeguards so that doesn't happen again. And so that's what we've been focused on. And most of those have shipped. Okay. Okay. Another question I had is OpenClaw.
So recently there's been this move to keep people from using Claude's subscription with OpenClaw. People got really upset. They're confused why this is happening. It feels like there's like, you know, harm caused to the open source community. What do people need to understand about kind of what went into this decision? So we've been seeing a lot of demand for Claude, and we've been working very hard to both scale our infrastructure and also to make our harness more token efficient so that you can get more usage out of it. It wasn't designed for third-party products, which have different usage patterns than our first-party ones. We spent a bunch of time trying to figure out what is the most seamless transition that we can offer. And so I was very happy to be able to say that everyone gets some credits alongside their subscription. But yeah, we did have to make the hard decision that we needed to prioritize our first-party products and our API. And so this is the decision that resulted from that. Yeah, to me, it makes so much sense. You guys are subsidizing this usage at like 200 bucks a month. It's basically unlimited use of this. And I think people don't understand. This is, they're trying to make money. We're trying to be profitable here. We can't just give away compute when it's so in demand. So I get it. Coming back to the PM team, what does just like the PM team look like at Anthropic? How many PMs are there? How are they kind of organized? Yeah, so we have a few PM teams. I think we're maybe around 30 or 40 PMs right now. So we have the research PM team who Diane leads. And this team is responsible for understanding all of the feedback from our customers for our models and then feeding that to the research team to act on it. And they also shepherd the model launches. There's the Claude Developer Platform team that maintains the APIs that Claude Code is built on top of.
And they also release things like Managed Agents, which is a way for you to build your agents and we can host them on your behalf. And then there's Claude Code that works on both the Claude Code and Cowork core products. There's Enterprise that helps make Claude Code and Cowork easier to adopt for all of our Enterprise customers. And so this is everything from cost controls, RBAC, security controls, and just making sure that these enterprises feel very confident and comfortable using our tools. And then we also have our growth team that is responsible for growth across our entire product suite. So we work very closely with them on Claude Code and Cowork growth. And I know they also work with our other teams on CDP growth, so growth of people who use the Claude API. So speaking of growth, so Amol was just on the podcast. He had this really interesting insight that most people haven't been sharing. There's always the sense that we need fewer PMs in the future. Why do we need PMs? Engineers can just ship. His take is that because engineers are moving so fast, PMs and designers are squeezed. There's less time to stay on top of everything that is happening. There's a feature shipping every day. So his take is he needs more PMs because it's hard to keep up. What's your take there? Do you feel like there will be an increase in hiring of PMs? What do you think is going on with the PM profession long term? I think all of the roles are merging. PMs are doing some engineering work. Engineers are doing PM work. Designers are PMing and also landing code. You can either hire a lot more engineers who have great product taste, or you can keep your engineering hiring the same and hire a lot more PMs to help guide some of their work.
On our team, we're pretty focused on hiring engineers with great product taste. This way we can reduce the amount of overhead for shipping any product. There are many engineers on our team who are fully able to go end to end from seeing user feedback on Twitter through to shipping a product at the end of the week with almost no product involvement, and this I think is actually the most efficient way to ship something. So I think like engineer and PM are kind of overlapping and you will get a lot of benefit from having more of either. I think product taste is still a very rare skill to have and we'll pretty much hire anyone who we feel has demonstrated this strongly. And your background was in engineering, right? Yeah, I was an engineer for many years. I was then a VC very briefly before joining Anthropic. And actually almost all the PMs on our team have either been engineers or shipped code here on Claude Code. And so that's one of the things that I think helps build trust with the team and also just enables us to move a lot faster. And then actually our designers also have been front-end engineers before. Wow. Because that's the big question. Like there's definitely this merging that's happening. The Venn diagrams are combining. I think the big question for a lot of people is if you're coming from engineering or product or design, which of those core skills is going to be most valuable? I could see it at Anthropic and on Claude Code. Engineering is very valuable. I'm curious if at other companies, if you have a design background, becoming a PM is more valuable, or just a PM PM. I still think it comes back to product taste. As code becomes much cheaper to write, the thing that becomes more valuable is deciding what to write. What is the right UX for this feature? What is the most delightful way that a user can experience it? We get tens of thousands of GitHub issues asking for every single thing under the sun.
And it takes a lot of care and taste to figure out, okay, which of these is worth building and what is the right way to build it? And I think that that skill set can come from any background, but I think that's the most important thing. I think the reason why an engineering background is particularly useful, at least for the next few months, is if you have an engineering background, you have a better sense for how hard something should be. And that's often a factor in what you choose to build. So like if something is very easy to build, then maybe instead of debating it, you just spend an hour doing it. But if something is harder to build, and you know that upfront, then you know that, okay, this will just like cost a lot more for our team to get this out the door. So it helps a bit with the prioritization. You said for the next few months, is that just like because the models will get so good potentially in the next few months, you may not even need to know that as much? I think the valued skill sets do change quite frequently. And so it's really hard to predict more than a few months out. So it's less a commentary on what shift I think will happen and more of a commentary that I think large shifts will happen. So you're not saying that's when Mythos comes out and will change everything and we don't need to know anything about engineering. No, I'm just saying that every few months, it seems like there's a large increase in coding capability, which then changes what other roles are valuable. I think the most important thing is to be able to have this first principles thinking where you can figure out how the tech landscape is changing, what the team really needs from you, and to jump in and fix that hole.
Because I think the work is becoming more amorphous, which means that a great PM is able to understand what all the gaps are, to figure out what the highest priority ones are, and then to just like figure out, okay, how do I learn that skillset or what is like the skillset that I have that I can like apply to this challenge? So I think the current environment values people who are able to wear a lot of hats, are able to swap them, and are like very low ego about what work they do to help the team move faster. I love this answer. There's this question I've been asking people in your shoes, folks that are kind of at the bleeding edge of what AI is capable of and building with the latest tools, which is just like, where will human brains continue to be useful and necessary for a while until we get to superintelligence? What I'm hearing here is essentially picking the things to work on, knowing where the market is going, and figuring out what to prioritize essentially. And then it's knowing if the thing you've built is good and right and getting it out there in some early version at least. Does that sound right? Is there anything else of just like where human brains will continue to be useful for at least the next few months? I think humans still provide a level of common sense that the models don't. And there's like a thousand moving pieces to any product launch. Some of them are very small, but there's always a lot that could potentially go wrong. I think the model doesn't always have a great sense of who all the stakeholders are, how they relate to each other, what their preferences are, what are the right venues to communicate with them to keep them on board. I think a lot of this more tacit, common sense, EQ kind of knowledge is still very valuable. Of course, we want the models to get better at this, and I think they will be. But right now, I think there's still gaps.
How do you just kind of deal as a human going through so much constant change, just like just being on the inside of the tornado? Maybe it's calm there. But just like, how do you stay on top of what's going on? How do you stay sane through all this craziness that we're moving through? I think our team is full of people who lean into the chaos. So we try to face every challenge with a smile because there's always so much going on. And there's always so many risks and tricky situations that, you know, if you get too stressed about anything, you'll burn out. And so we really look for people who can kind of like look at a challenge, be like, whew, that's going to be hard, but I'm excited to tackle it. And I'm going to do the best that I possibly can. And I know I won't be perfect, but I'll be able to sleep at night knowing that I did my best. That's an interesting answer to just like what skills will be important in this future because it's, I forget who said this, maybe Ben Mann, that this is the most normal the world will ever be. Yeah, it definitely gets harder. Like, I feel like there are a lot of weeks where maybe Sunday night there's some like P0 and then by Monday there's like a P00 and by Monday afternoon there's a P000 and you're like, wow, I can't believe I was so worried about that P0 from Sunday. But I think you just have to acknowledge that there's only so much that you can do, that you need to sleep well so that you can make good decisions the next day. And just like brutally prioritize where you spend your time, what's the most important thing to get right, and be okay letting things go. Like there's products that we ship that aren't as polished as I wish they were. But, you know, our top goal is to help empower professional developers. And if a product isn't successful, as long as it's not blocking the core use case, it's okay because we'll hear the feedback and we'll fix it in the next release.
Launching a feature that is buggy is the kind of thing that would have kept me up at night, but it is something that I am now able to live with knowing that, okay, we're going to get that quick feedback and we're going to fix it in the next release. What I'm imagining is there's that gif, I think it's maybe from Pirates of the Caribbean where it's this guy walking down a pair of stairs on a ship and the whole ship is just being demolished around him and he's so chill, just strolling down the staircase as everything's falling apart. And that's interesting because everyone I've met from Anthropic is just so chill and just so optimistic. You have to be. Yeah, I think that's a really interesting insight is just like having this calmness and optimism versus just like, oh my God, everything's crazy and going nuts. Yeah. I think if you don't have it, you'll get pretty burnt out. But I think we also tend to hire people who have been in the industry for a while and have experienced lots of ups and downs and have a good sense for what gives them energy and how to maintain their energy over time. And I think that's helped us a lot. So interesting. Something that I wanted to ask about is so there's these roles blurring. Engineers are becoming PMs. Everyone's dogs are cats. Everyone's everyone. What do we lose in that world? Do we lose like career ladders and clear career paths? Do we lose design consistency, code quality? You know, there's probably some downsides. What are some things you find are just like, okay, that's something we're sacrificing for the greater good? We're sacrificing product consistency. Historically, when code was expensive to write, you would carefully plan out everything in your product suite, how every product relates to each other, what the use case for every single one is, how they integrate. and you would pretty much have one product for each use case. 
And now with AI moving so quickly and with so many ideas that we need to test out, we do sometimes have features that overlap with each other. A lot of the time it's because there are two form factors that we love internally and we want the external audience to tell us which one is better. What that means for someone who's a new user, though, is that a new user might not know, okay, what is the best path to accomplish X? There is more education we need to do to help people understand what the core features are and what the best practices are for using them. I think this is the cost of launching a lot of features.

I think users also feel like it's hard to keep up with the latest. Usually in traditional PM, you ship a feature every month or quarter. And so it's really easy for a user to understand, okay, I just need to check in on this once a month and I'll learn some new things. And if I ignore it for six months, it's fine. I don't feel like I'm missing out. I think with these agentic tools, not just Claude Code and Cowork, but across the whole ecosystem, people feel this need to check Twitter every single day to see what the absolute latest thing is. And I think there's more we can do to help people feel less like they're on this ever increasingly fast treadmill. I would love people to feel like they can just open these tools, the tools will educate them or teach them what they want to know, and that they can feel brought along.

Yeah, I saw you launch this really interesting feature the other day, I think it's /powerup, where it basically walks you through all the cool ways and all the best practices to use Claude Code. Is that kind of along these lines?

Yeah, exactly. So in the past, we didn't actually want to do something like powerup, because we felt like the product should be intuitive enough that you don't actually need to go through any tutorial.
And over time we've just realized that there are just so many features, and there's so much demand for a built-in onboarding experience, that we diverged a bit from our original principle of no onboarding flow and added this, because there are just so many users who wanted to know: there are a hundred features, what are the 10 that I absolutely need to use? And so we put that together.

Yeah, it's such a bizarre world. So Anthropic has been really successful with B2B enterprises, where traditionally you don't launch a bunch of stuff. You just kind of have a quarterly release, maybe. And it's like the opposite of every day we've got something new. So just maybe following that thread. The run Anthropic has been on is just otherworldly. Anthropic was way behind when it started. It was, Amol shared this, just one of the least funded companies, didn't have distribution, wasn't the first to go. OpenAI was way ahead. It was just like, no way Anthropic has any chance to compete significantly long-term. Now it's just killing it, just beating the biggest companies and teams. The growth is just staggering, $11 billion in ARR. By the time this comes out, it'll probably be even higher. Just being on the inside, what are some ingredients that have allowed Anthropic to be this successful and kind of come from behind and do this well?

The two most important things are, one, this unifying mission. It's hard to state how important this is. We hire people who care most about bringing safe AGI to all of humanity. And this is actually something that we reference frequently in our decisions about what our entire product org should focus on shipping. And because we put this mission above any individual product line, we're able to make very fast decisions that cut across the entire org and execute on them in a unified way. So I think this is something that I've never seen at a company of our scale.

And so just to make sure that's clear.
So essentially having the number one mission be safety, alignment, making sure AI is good for the world. And you're saying that just having that as a clear mission makes decisions a lot easier to make.

If there are two competing priorities, we'll talk about which one is more important for Anthropic's mission. And it makes it a lot easier to decide which of the two we prioritize. And then everyone will stand behind the one that we decide. And so sometimes that means that, hey, we want to ship something on Claude Code, but this other thing is more important. And so we deprioritize shipping this and we just wait until later.

What's really interesting about that is that explains, I think, versus another company, maybe rhymes with Vopen BI, that did a lot of different things. And what I'm hearing here essentially is like, okay, we're not going to launch a social network. We're not going to launch a feed of interesting information, because it's not aligned to this mission. And that has kept Anthropic focused, which seems to be a core ingredient to the success.

Well, when I think about mission, I think about putting Anthropic's goals ahead of any individual org or any individual product. And so for me, I think the second thing that we're very good at is focus. I think mission to me is slightly different. Mission means that teams are willing to make sacrifices that hurt their own goals and their own KRs in service of Anthropic's goals and Anthropic's KRs. And people are very happy to make those trade-offs. So an extreme example is, if Claude Code failed but Anthropic succeeded, I would be extremely happy. And the whole team is very willing to make decisions that follow that chain of thought.

I don't know if you can talk about this in depth, but do you feel like the Open Claude decision is a part of this? Just like, okay, this is not furthering the mission of Anthropic. We need to stop this because it's not working in the way we want it to work?
I think one of the most important things for Anthropic is to grow the number of users that we're able to reach. One of the ways that we're able to do this is with the Claude subscriptions, with our first-party products. And so we just very much want to double down on that, but that does come at the expense of third-party products sometimes.

So we've been talking about Claude Code, Cowork, all these things, and there's something that I want to make sure people get. And I'm curious how you use these tools. So there's Claude Code, there's Claude desktop slash web, there's Cowork. What's the best way to understand when to use which? When do you use each of these three?

So I tend to use Claude Code in the terminal when I'm just kicking off a one-off coding task and I want all of the latest features. The CLI is our initial product surface, and it's also the one where our features often land first. And so it's the most powerful of all the tools. So that's what I tend to use when I'm trying to kick off one, or maybe a handful, of tasks at a time. I think desktop really shines when you're doing something that requires front-end work. And so one thing that I love to do is to use our preview feature. So if I'm building a web app, I'll often use Claude Code on desktop. I'll have the preview pane open on the right-hand side so that I can actually see the web app that I'm making in real time as I'm chatting with Claude. It's also really great for people who want something a bit more graphical. A terminal can feel very unfamiliar to someone who is non-technical. You get a bunch of these scary pop-ups on your machine and you can't click around the way that you're used to in pretty much every other product that you use. So there are a lot of people who just don't feel comfortable in a terminal. And if that's you, I would highly recommend checking out Claude Code on desktop. Desktop is also great for getting an at-a-glance view of everything that's happening.
So you can see your CLI terminal sessions in desktop. You can see your other desktop sessions. You can see the sessions that you kicked off on web and mobile. So it's a one-stop control plane where you can see all of your tasks. I think the benefit of web and mobile is that it's really great for kicking things off on the go. So CLI and desktop both require you to be on your local laptop. And this is constraining, because sometimes you're out and about. You're touching grass. You're going on a walk. And you don't have your laptop open. I can't count the number of people who I've seen holding their laptop open, tethered to their phone, while they're outside. And this just means that we're missing a product that solves that need. And so for me, what mobile lets you do is kick off these tasks on the go, so that you don't need to bring your laptop everywhere and make sure that your laptop's open wherever you are.

I love that. I've seen people on planes, it's just such a meme now. Just, I need to let this agent finish. I can't shut this down.

And then I think for Cowork, the role that this fills is that there's a lot of work that everyone does where the output isn't code. So whether that's getting to Slack zero or inbox zero, or whether that's creating a slide deck for some customer meeting that's coming up, or whether that's writing a quick doc on what the goals of a feature are or what the launch plan for a feature is. All these tasks produce outputs that are non-code, and Cowork is best positioned for that. So the way that I split the products in my mind is, if I'm building something where the output is code, I'll use Claude Code in the terminal, on desktop, or on mobile. And if the output is anything that's not code, I'll use Cowork for it.

People are just sleeping on the success that Cowork is having. It's growing incredibly fast. And I think people still don't understand maybe what it's for.
And so what if you give us a couple of use cases, just in your work as a PM? What are some really interesting, maybe unexpected, ways to use Cowork to save you time and get more work done?

If you're getting started on Cowork, the first thing that you really need to do is connect all the data sources that are relevant to your role. Because Cowork can only do a great job if it has access to all the context that it needs to be able to curate the output for you. So what that means for me is I connected it to my Google Calendar, my Slack, my Gmail, my Google Drive, so that it has the flexibility to find relevant context, to ask questions, to pull in threads. And this substantially improves the quality of the result.

The kinds of things I use it for are, like last night, I was working on this Code with Claude conference that's coming up, where there are a few talks that I'm giving. And one of the talks covers the transition of Claude Code from an assistant to a full-on agent. One of the things that I wanted to do in this talk was to showcase all the products that we've been shipping that enable this transition, and also to figure out, okay, what are the success stories that people have had internally that we can use as demos? And so I have my Google Drive connected, I have Slack connected. Alex, who's our product marketer, put together a draft of the points that he thinks we should cover. And so I just fed this all into Cowork. I told Cowork the narrative that I want to tell. And it actually just worked for an hour. It looked through Twitter to see what we launched. It looked through our evergreen launch room. It looked in our Claude Code announce channel, which is where our team posts demos of how they've been getting the most value out of Claude Code.
And it synthesized all this together into this 20-page deck that I woke up to this morning, and I read through it, and it was pretty good. There were a few tweaks, so I did have to give it a round of feedback. I like my slides to have extremely minimal words, and it was a little too wordy. But it was far faster than what I would be able to produce. And because Cowork has access to our whole design system, it actually looks like an Anthropic designer put it together. Like, when you visually see it, you're like, oh, this is incredibly polished. So these are the kinds of things that are so much faster. Making this slide deck would have taken me hours, but instead it turns out a draft that is actually quite good, so I could focus on making sure that the demos we plug into it are amazing.

This sounds like a dream come true to PMs, because putting decks together is so annoying. It's so slow. And I love that people will see this deck whenever you present this. This will be out in the world. Obviously it's not the one-shotted version, but you've iterated on it. So just to help people try this for themselves. Step one is connect their, what did you say, Slack? What else do you suggest they connect?

Slack, Google Calendar, Gmail, Google Drive. You should connect your communications tools and where you store your source-of-truth data for what your team cares about, what you care about, and what you're working on.

Okay. And then what was the prompt, roughly, that you put in there to generate this deck?

So I just wrote: make me a slide deck for the Code with Claude conference. This is what our PMM suggested it should cover. This is the current draft, one that I made manually that I don't like, and I linked it. Can you start by creating a proposed outline with details? Also, make sure it doesn't overlap too much with the keynote talk, which is more important.
And then Claude read a bunch of the links that I sent to it and created a proposed outline. So then I read through its proposal and all the different ideas that it generated for what we could cover. And I just made a decision on what I wanted to actually be in the final deck. And I think this is an example of what the role of the PM still is today. Claude is a great brainstorming partner. It's able to synthesize a massive amount of information really quickly and present all of the possibilities to you. But the role of the PM is still to make the end decision of, okay, what should belong in the final product? So for this, what I ended up deciding was that I wanted the talk to cover the progression from making local tasks successful, to making every PR green, to helping engineers land more PRs. And for each of these, which demo would be the most compelling? And then after this decision about the outline, Cowork just went off for a few hours and built the whole slide deck.

This is so awesome. What an awesome part of the job to not have to do anymore. And it feels like you're talking to essentially a deck designer that also has actual knowledge about what you've worked on and can make the content what you want it to be, not just make it look really nice. How did you do the design system piece? How does that work? How does it know the design system of Anthropic?

So what I did for this is, we actually already have a standardized deck that we use across all of our external engagements. And so I just gave Claude access to that. And so it's able to see what colors we use, the fonts we use, the different kinds of, what's it called, slide formats that are possible. And so it has like 20 of these example slides.

So give it an example, got it. So you upload, here's our template, work from this.

Yeah. You can also connect your Figma MCP if you have your slide formats saved there, and it can pull that in.
Along those lines, something I'm always curious about is what's in your stack of tools as a PM at Anthropic. Obviously Claude Code and Cowork and all the Anthropic tools. What else are you using? Other than Slack, which you mentioned, is there anything else?

So my stack is pretty heavily Claude Code, Cowork, and Slack. Anthropic largely runs on Slack. I feel like it's the core OS of our company. And day-to-day, I would say maybe 30% of my time is pushing the boundaries of what Cowork and Claude Code can do, so that I have a very strong sense of what we're not good at. And I spend a lot of time talking with the model to understand why it makes the mistakes that it does. We actually have a lot of internal tools that we make. Like, I think one of the things that Claude Code has really unlocked for our entire company is that it really lowers the barrier to making any custom app that you want. And so we've seen this surge in personalized work software that people are building for custom use cases instead of using tools that don't perfectly fit the use case.

I gotta hear more. What are some examples? What are things you've built, or other people have built, that are really popular and useful?

One of the sales folks on Claude Code realized he was making these repetitive decks over and over and over again. And so he actually has this web app that he built with the examples of the core Claude Code decks that we know work well. So like a 101, a 201, and Mastering Claude Code. And then he has a way to input specific customer context that pulls from Salesforce, that pulls from Gong, that pulls from other notes, so that we can customize the decks for specific customers. And so it'll pull out things like, okay, this customer is using Bedrock or Claude for Enterprise or Console, which affects what features are available to them. It'll pull out things like, okay, this customer is concerned about the code review stage of the SDLC.
And so we'll add a slide about our code review features there. It'll pull out things like, this customer needs to be HIPAA compliant or needs XYZ security controls. And so we'll make sure to add a slide or two in their deck about that. And then, for example, if this is a customer that's on Vertex or Bedrock and doesn't want to use Claude for Enterprise, then we'll just take out the slides that cover Claude for Enterprise-only features. And so normally this is manual work that could take 20, 30 minutes, and so people either spend that time doing it or they'll just decide not to do it and use the general deck. With this, it takes a few seconds and you get a tailored deck.

What's interesting about it is, Slack is the tool that nobody's trying to recreate. Slack just continues to win, and the way you describe it, it's kind of the OS of so many companies. It's so interesting. People talk about Salesforce as just, we don't need SaaS software anymore, we're going to build our own. Slack is a durable tool that nobody wants to try to compete with and build a better version of.

I think it's pretty important communications infrastructure. And I think they do the core task of helping everyone get real-time updates incredibly well.

Yeah, people hate on Slack, but it's really great at what it's trying to do. And the most cutting-edge teams are hooked on it. So interesting.

Yeah. And I also love how easy they've made it to customize. And so we love making Slack bots. And this kind of hackability means that we're able to integrate with Slack the way that we want to. So really appreciate Slack's work on that.

Time to buy some CRM stock.

I am so excited to tell you about this season's supporting sponsor, Vanta. Vanta helps over 15,000 companies like Cursor, Ramp, Duolingo, Snowflake, and Atlassian earn and prove trust with their customers.
Teams are building and shipping products faster than ever thanks to AI. But as a result, the amount of risk being introduced into your product and your business is higher than it's ever been. Every security leader that I talk to is feeling the increasing weight of protecting their organization, their business, and not to mention their customer data. Because things are moving so fast, they are constantly reacting, having to guess at priorities, and having to make do with outdated solutions. Vanta automates compliance and risk management with over 35 security and privacy frameworks, including SOC 2, ISO 27001, and HIPAA. This helps companies get compliant fast and stay compliant. More than ever before, trust has the power to make or break your business. Learn more at vanta.com slash Lenny. And as a listener of this podcast, you get $1,000 off Vanta. That's vanta.com slash Lenny.

Okay, so you talked about all these different teams and how they use Claude Code and Cowork to operate. Which teams do you find, other than engineering? I imagine engineering is the biggest token spender, but if not, that'd be really interesting. What's the second-place function right now for tokens?

Oh, Applied AI is amazing at pushing the boundaries of what Claude Code and Cowork can do. A lot of our Applied AI team spends time with our customers, helping them adopt our API. And so sometimes our Applied AI team will, for example, make prototypes on behalf of these customers, which Claude Code makes so much faster than it used to be. They also have the dual goal of needing to manage a lot of customer comms, a lot of customer inbound and historical context, call notes. And so they're both extremely heavy on Cowork and on Claude Code.

And just to understand Applied AI, is that a forward-deployed engineering sort of role? How would most people describe what the Applied AI team is doing?
Yeah, it's helping our customers adopt the latest API and model features across their company, both for powering their company's products and also for internal acceleration.

Got it. So it's like customer success, go-to-market-y, kind of a forward-deployed engineering sort of thing.

Exactly. It's like a very technical go-to-market person.

Got it. Okay, awesome. So you're saying that might be the second org that uses the most tokens.

Yeah. And then we also see them pushing the boundaries of what Cowork can do. So for example, a lot of these folks cover multiple customers and, on a high day, can have five to 10 customer engagements. And so what they often use Cowork to do is, the night before, they'll ask it to summarize, okay, what are all my customer meetings that are coming up the next day? What are all the things that this customer has asked me for? What's top of mind for them? What are the action items from the past meetings? And Cowork will just put together this dossier, this brief of what they should be aware of going into the next meeting. And Cowork can also research answers. So if a customer asked, okay, when is feature X going to launch? Cowork can help the Applied AI person research through Slack to get the latest ETA and add that to the notes, so that during the customer call the Applied AI person has the absolute latest. And these are just workflows that people are building for themselves and sharing with other people on their team.

So cool. Something that comes up a lot recently is this topic of token spend exceeding people's salary, where people just use AI and it costs more than how much they're making. Are there any numbers floating around Anthropic of how much tokens, say, engineers spend a month, a day, PMs, anything like that?
It is clear to us that as the models get better, people delegate far more tasks to them and they spend a lot more hours in tools like Claude Code and Cowork. And so we do see the token cost per engineer, or per any knowledge worker, increase every time there is a model jump or a substantial product improvement. I think it's still much lower than what the average engineer salary is, but we see the percentage increasing over time.

It's so interesting. We talked about how you have access to the most cutting-edge models, another advantage of working at Anthropic. I believe you guys have basically unlimited tokens. You can use as much as you want. Is that right?

We can use a lot of tokens. Some people do run into limits.

So there's a limit. Okay. Boris, shut it down. Okay. It's so interesting how many advantages come from having the most advanced model. It's such an interesting flywheel that starts to kick in.

I think we also believe a lot in empowering our internal teams to build as fast as possible. And we also trust that everyone understands how much capacity serving these models truly costs. And we trust our team to use the tokens responsibly. So it's very frowned upon to waste tokens, but we do trust individuals to make that judgment call.

Coming back to the PM role, we talked a little bit about this, but I think this will be really interesting for people to hear. What I want to understand is, what do you think are the emerging skills that PMs need to develop, slash, that you and AI companies most look for when hiring PMs these days?

I think the hardest skill is being able to define what the product should look like a month from now. I think there's a lot of ambiguity in what models are capable of in that timeline and how user behavior will change. But I think there are patterns that the best PMs can see based on how users are abusing the limits of the existing product.
And the best PMs can sense that, can set a direction, and can steadily execute towards it, and change the path if the model capabilities are much better or worse than what they'd originally expected. I think it is very hard to be the right amount of AGI-pilled. So I think everyone can see this future where the models are extremely smart and can do almost everything, in which case you actually don't need that complicated a product. You can actually just have a text box again where you tell the model what you want. And it's so smart that it can add any tool or any integration that it needs to get the job done. It knows when it's uncertain. It can ask clarifying questions. It's kind of very easy to build the product for the super AGI-strong model. I think the hard thing is figuring out, for the current model, how do you elicit the maximum capability? How do you help users get onto the golden path? How do you guide users to interact with the model's strengths and patch its weaknesses? This skill is pretty rare.

And how do you build that skill? Is it just basically understanding the limits of each model, having, like you talked about, taste for what the model is capable of, what it's great and not great at, where it's changed?

I think it's spending a ton of time talking to and using the model. One of the things I really like to do is to ask the model to introspect on its own behaviors. So sometimes I notice that the model does something unexpected. Like, for example, there are situations where the model will make a front-end change and run tests, but not actually use the UI. It's actually pretty useful to ask the model to reflect on why it did this.
And sometimes it'll say that, hey, there was something confusing in the system prompt, or, I didn't realize that the front-end verification was part of the task, or, hey, I delegated the verification to this subagent and the subagent didn't do the test and I didn't check its work. A lot of times, just being very curious about why the model made the decision that it did will show you what misled it, so that you can fix the harness in order to close this gap.

The other thing that helps is to figure out who are the users you trust the most to give you accurate feedback about the model. Usually there's a handful of people who are much better than others at articulating what makes a specific model, or model-harness combination, good. And there are a lot of people who will give you feedback, but not everyone's feedback is as qualified. And so finding a group of those five people you trust is really important for getting very fast feedback.

I think the third thing that is useful, but not everyone loves doing, is building evals. You don't need to build hundreds of evals for them to be useful. Just building 10 great evals is important for helping the team quantify what the goal is, what their progress towards it is, and what they're missing. And so I think evals are this underappreciated thing that more PMs, more engineers should be working on.

We've covered evals a bunch. There's this trend that writing evals is the future of product management, because essentially it's, what does success look like? Okay, cool. Let me actually concretely define it and then we'll know. How much of your time are you spending writing evals, would you say?

I think the importance of evals varies a bit based on the feature that you're working on, or what the problem you're trying to solve is. So there are a lot of folks on our team who do spend a lot of time working on evals.
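To make the "10 great evals" idea concrete, here's a minimal sketch of what a handful of prompt-plus-check evals might look like. This is not Anthropic's harness; `call_model`, the prompts, and the checks are all hypothetical stand-ins for a real model client and real product-specific tasks.

```python
# Toy eval harness: each eval is a prompt plus a programmatic pass/fail
# check, and the runner reports how many passed. In practice call_model
# would wrap a real model API; here it returns canned answers so the
# sketch is runnable on its own.

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    canned = {
        "What is 2 + 2?": "4",
        "Name the capital of France.": "Paris",
    }
    return canned.get(prompt, "")

# A handful of well-designed evals beats hundreds of sloppy ones.
EVALS = [
    {"name": "arithmetic", "prompt": "What is 2 + 2?",
     "check": lambda out: "4" in out},
    {"name": "geography", "prompt": "Name the capital of France.",
     "check": lambda out: "paris" in out.lower()},
]

def run_evals(evals):
    """Run every eval and return {name: passed} so regressions are visible."""
    results = {}
    for e in evals:
        output = call_model(e["prompt"])
        results[e["name"]] = e["check"](output)
    passed = sum(results.values())
    print(f"{passed}/{len(evals)} passed: {results}")
    return results

if __name__ == "__main__":
    run_evals(EVALS)
```

The point of keeping each eval to a prompt plus a mechanical check is that the same suite can be rerun against every new model or prompt tweak, turning "vibes" into a pass rate.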
We have a small pod of folks who collaborate very closely with research to more precisely understand Claude Code's behaviors and what the largest areas of improvement are, and to measure those pretty concretely. I personally jump into evals when there's a feature that I think needs a bit more product definition. And often the output of this is, okay, here are five evals that I made. This is how you run them. These are the ones that succeed and these are the ones that don't. And this is the prompt that I've used to increase the success rate. It varies a lot, though, based on the exact feature. Not every feature needs it, but I think features such as memory benefit a lot from it.

This point you made about people being very good at evaluating models is so interesting. It's almost like a human eval: they understand where it's spiking or where it's maybe lacking. Is there anyone specific that you want to shout out that's very good at this?

Two people who I think are incredible at this are, one, Amanda, who molds Claude's character. It's such a hard role because the task is so ambiguous. Even coding is easier, because you can verify the success, whereas crafting the character requires a very strong sense of conviction about who Claude should be. And I think she has an incredible ability to not only mold the character, but also to articulate what the goals are, what the character is, what's successful and what's not. The other group of people who I really trust is the Claude Code team. So we often have team lunches, and whenever there's a new model we're testing, one of the fastest ways for us to get feedback is to just go to every single person at these team lunches and be like, hey, what is your vibe on the model? And oftentimes we'll get feedback like, okay, this model is not fully explaining its thinking.
It's too abrupt. Or, hey, this model loves writing a ton of memories, but we're not sure if the memories are high quality or not. Or some people will notice that, okay, this model loves to test itself, which is great. Or, this model isn't testing itself enough. So that informs what data we look at to verify, okay, is this a larger pattern? We have a ton of data, but it is very hard to extract insights. And so the feedback from this group helps inform, okay, what are the hypotheses we want to test? And then we're able to extract data to test that.

This point you made about the character of Claude, I had Ben Mann on the podcast, co-founder, and he talked about this, how the character, the constitution of Claude, is such an important part of Claude. And I didn't realize it until afterwards. Like with OpenAI, actually, one of the reasons people are sad is the personality, because Claude's personality is so good and fun and interesting, unlike other models. And the way he put it is, the personality is what makes Claude so good at so many things. It feels like this trivial side thing, okay, it's going to be funny and interesting and talk in a fun way, but it's so core to the success of Claude. Is there anything you can share there about what people may not understand about why the character, as you described, and the personality are so key?

When you reflect on everyone you've worked with, there are just some people where you're like, I really like their energy. Like, I really like their vibe.
And when people think about Claude and Claude Code, this is one of the things that people bring up the most: they just really love that Claude is lighthearted and fun, but also extremely competent at your task. People really like that Claude's low ego, and so if you tell it, hey, you did this thing wrong, it's truly sorry. It's like, oh shoot, thanks for telling me, let me fix it, let's work together. It's also very positive. So if you're feeling like, oh, this is an insurmountable task, I don't know how to get started, Claude is like, okay, it's okay. These are the steps that I think we should take. Do you want me to get started on it for you? I think part of what makes a great coworker is this positivity, this bias towards action, this ability to give you earnest feedback, not just agreeing with every single thing that you say. And so we try to imbue this into Claude because we think it makes it a lot more enjoyable to work with. There's something I want to come back to. You talked about how when new models come out, you often have to kind of revisit things you've built. That's so interesting and maybe so frustrating, just like, oh, goddammit, we shipped this thing, now we have to rethink it. Talk about how often you have to come back with a new model and be like, okay, we have to redo this product that we launched a few months ago. A lot of the changes that we make with a new model are removing features that are no longer needed. So a lot of times we add features to the product as a crutch for the model, because it's not naturally doing something itself. The classic example of this is the to-do list. When we first launched Claude Code, people would ask it to do these large refactors, and Claude Code would say, okay, cool, I need to change these 20 call sites, and it would go and change five of them and then stop.
And then we were like, okay, how do we force it to remember to get every single one of these 20? And so Sid on our team was like, okay, what if we just think about what a human would do? A human would make a list of everything that they need to change, similar to how in VS Code you would look up all the call sites, and it'll be a list on the left side, and you would go through them one by one and replace all. How do we give this kind of tool to Claude? And so he added the to-do list, and we found that with that, Claude was actually able to fix all 20 of these call sites. But then with Opus 4 and later models, we realized that we didn't need to force it to use this to-do list. It would naturally use it itself. For the earlier models, we had to keep reminding it, hey, did you finish everything on the to-do list? You can't finish until you're done with everything on the to-do list. And for the later models, without prompting, it just naturally thinks to do everything on the to-do list. These days, the to-do list is still nice to have as a user because then you can more clearly see what Claude is working on. But honestly, it's such a de-emphasized part of the product right now that the model may use it, the model may not use it. It's really not necessary for it to make thorough changes anymore. I forget who said this on the podcast, that the model will eat your harness for breakfast. And what I'm hearing here, essentially you remove things over time that you've had to add on top of the model where it was not operating the way you wanted. And essentially, as the models get smarter, it becomes simpler and simpler for it to just do the thing you want it to do. Yeah, we can remove a lot of prompting interventions every time the model gets smarter. And we actually do this every time we launch a model.
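The "to-do list as a crutch" pattern above can be sketched as a tiny harness loop: the harness injects a checklist, drives the model over one item at a time, and, for earlier models, keeps re-prompting until the list is empty. Everything here (`TodoList`, `run_refactor`, the reminder counter) is an illustrative stand-in, not Claude Code's actual implementation.

```python
# Hedged sketch of a checklist harness that forces task completion.

class TodoList:
    """Checklist the harness injects so every subtask gets tracked."""
    def __init__(self, items):
        self.done = {item: False for item in items}

    def remaining(self):
        return [item for item, finished in self.done.items() if not finished]

def run_refactor(edit_call_site, call_sites, remind=True):
    """Drive the agent over every call site; weaker models get reminded
    after each step that the task isn't finished until the list is empty."""
    todo = TodoList(call_sites)
    reminders = 0
    while todo.remaining():
        item = todo.remaining()[0]
        edit_call_site(item)          # the model edits one call site
        todo.done[item] = True
        if remind and todo.remaining():
            reminders += 1            # would append "finish the to-do list" to the prompt

    return todo, reminders

# Simulate the 20-call-site refactor from the anecdote.
edited = []
todo, reminders = run_refactor(edited.append, [f"call_site_{i}" for i in range(20)])
```

With a stronger model, `remind=False` and eventually the whole `TodoList` scaffold becomes optional, which is exactly the pruning described: the harness code gets deleted once the model does this unprompted.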
We read through the entire system prompt and we reflect on, okay, for each of these sections, does the model really need this reminder anymore? And if not, we'll remove it. The most exciting thing that new models unlock, though, is entirely new features. So there are a lot of features that we've been testing out with prior models where the accuracy wasn't high enough for us to want to launch them. One example of this is code review. We tried to build a code review product a few times, and we've launched simpler versions of code review, like the slash code-review command, in the past. And it was only with the most recent models that we felt like, okay, this code review is so good that our engineering team relies on this code review to pass before we merge PRs. We've always dreamed of Claude being able to be a reliable code reviewer that we can confidently feel catches the majority of bugs. And it was only with Opus 4 and Sonnet 4 that we felt like, okay, we are now able to run multiple code review agents simultaneously to traverse the entirety of the code base and to synthesize a set of real issues that an engineer needs to address before merge. And so this is a new capability that the newest models have unlocked. This is another trend that is very common on this podcast: build something that will possibly be possible in the next six months, be kind of at the edge of what's working, sort of, and then it'll catch up, and then it'll be an amazing product and you'll be ahead of everyone. Yeah, exactly. It's pretty important to build products that don't necessarily work yet, so that you know, okay, what is missing for this product to work? And then with the newest model, you can just swap it into the prototype you've already made and see, okay, does this new model close that gap?
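The fan-out-and-synthesize shape of the code review feature described above (multiple review agents traversing slices of the codebase simultaneously, with their findings merged into one issue list) can be sketched in a few lines. This is a hedged illustration of the pattern, not Anthropic's implementation: `review_slice` is a stand-in for a real model call, and the round-robin partitioning is just one reasonable choice.

```python
# Sketch of parallel code-review agents with a synthesis step (illustrative).
from concurrent.futures import ThreadPoolExecutor

def review_slice(files):
    """Stand-in reviewer agent: flag any file whose name hints at a bug.
    A real version would send the slice to a model and parse its findings."""
    return [f"possible bug in {f}" for f in files if "buggy" in f]

def parallel_code_review(all_files, n_agents=4):
    # Partition files round-robin across agents.
    slices = [all_files[i::n_agents] for i in range(n_agents)]
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        findings = list(pool.map(review_slice, slices))
    # Synthesize: flatten and de-duplicate into one issue list for the engineer.
    return sorted({issue for batch in findings for issue in batch})

issues = parallel_code_review(["a.py", "buggy_io.py", "c.py", "buggy_net.py"])
```

The synthesis step matters as much as the fan-out: without de-duplication and filtering, several agents reviewing overlapping context would bury the engineer in repeated or low-value findings rather than "a set of real issues."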
How much are you able to speak to where things are going with Claude Code and Cowork, kind of the vision of it? I imagine you don't want to give away too much about the goal, but it feels like there are all these awesome features being added on top: dispatch, control from phone, the mobile app, all these things. What's a way to understand the vision for all these things long term? We think about this in terms of building blocks. So for both Claude Code and Cowork, the core building block is making individual tasks successful. So you want it to produce some output. You give it a clear prompt description. Is it able to consistently produce acceptable output that you're able to either merge or share with your colleagues or an external audience? So the task is the core building block. As the models get smarter, the task success rate gets a lot higher. And then we see people moving towards doing multiple tasks at the same time. So multi-Clauding was this big thing towards the end of 2025, and it's only increased since then. And so we see this as, okay, great, one task works, and now you can do six tasks at a time. As the models get even smarter, the way that we're extrapolating this is, okay, next, maybe you're going to run 50 Claudes at a time, or hundreds of Claudes at a time. And so what is the infrastructure we need to build to enable that? At that point, you're probably not going to run everything locally on your machine anymore. There's just not enough RAM to do it. And so we're thinking about how to make it easier for you to manage all of these. These will probably run remotely. How do we build the interface so that you as a human know which tasks you need to look into? How do we make sure that the agent is fully verifying its work, so that when you look at a task and it says it's done, you can very quickly verify and fully trust that it is done to your spec?
And how do you make sure that this process is self-improving, so that when you do see a task that isn't done to your liking, you can give it feedback and the model will know, for every future run, to incorporate that feedback so it never makes that mistake again? So this is the progression that we're bringing our users along for. There are a lot of people listening: a lot of product managers, maybe founders, a lot of other cross-functional folks. There's a lot of worry about their role, the future of their careers. What advice would you have for people to not just survive this transition to this very AI-driven world, but to be really successful, to essentially thrive in this future? What are things people need to hear, need to be doing? I think AI gives everybody a ton more leverage than they used to have. And so I would push you towards, anytime you realize that you're doing some manual task multiple times, think about how you can use Claude Code, Cowork, or other AI tools to automate that for you. Most people have creative parts of their job that they absolutely love, and then tedious parts of their job that they really hate doing. I think the beauty of AI is that it can do those tedious parts for you. It can learn from every time that you've done that manual task, generalize, and then run it automatically, so that you can focus on the creative parts. And that means you can do a lot more than you used to be able to do. So my immediate push for people is: figure out the repetitive parts that you can pass to Claude, iterate on those automations until the success rate is very high, and then focus on, okay, what more can you be doing for your team, for your product, for your company that people haven't had the bandwidth to pick up so far? Or what is that pet project that you always thought the company should do that you've never had bandwidth to do?
If AI can take care of the grunt work, then you have this extra 20% of time that you might not have had before. So my push is to lean into these tools, hand off the work that you're not excited to do, figure out how it can accelerate you, and then, as a result, you'll be able to do so much more. Something core to what you just shared, which I fully agree with, is: find problems to solve with AI. There's all this potential in what these tools can do, and for a lot of people, the hardest part is just, what should I actually do? And what you're saying here is, pay attention to things that you are doing constantly that you can automate, pay attention to ideas that have been floating around that you haven't had time to do. It's basically, solve a problem for yourself, is kind of the core advice there. Exactly. I would also push listeners towards focusing on bringing your automations from, okay, this is a cool concept, to, hey, this actually works 100% of the time. Sometimes I see users trying to automate something, getting it to 90, 95% accuracy, and then giving up on it. If an automation doesn't work 100% of the time, it's not really an automation. And that last 5 to 10% does take more time. Also, building the automation is often a lot slower than doing it yourself. I would encourage listeners to put in that time: scope some automation that you really want to get to 100%, put in the elbow grease to teach Claude your preferences, give it feedback so that it can improve its skill and get to that 100%. And then you'll really be able to rely on it. There's just not much value in a 95% automation. I am super guilty of that. This is really good advice for me. I am guilty of this too. I've been teaching Cowork to try to get me to inbox zero for Gmail. And it has been very time consuming.
And it is definitely not there, as you probably realize. Yeah, funny enough, that's exactly where my mind goes. I have this workflow I set up where, for every email I get, it looks for things that are spammy, which is just all these, hey, can I come on your podcast? Or, what about this one? All these things I just don't have time for. And I have it categorize them into a folder called spammy. And it's 95% great. But then there's, oh man, I missed an email because it went in there. So this is a good push for me. I'm going to work on this. I'm going to get it to perfect. Yeah. We also are working on making the flow for customizing these commands a lot easier, because right now I think you have to know too many concepts. You have to know to define a skill. You have to know to use this skill and give it feedback. And then you have to know to tell Cowork to update the skill based on all the feedback that you gave. And then you also have to know where to read the skill to make sure that the feedback was incorporated the way that you want. It's also our job to make this flow really seamless so that it doesn't feel painful to do. Amazing. Is there anything else, Kat, you wanted to share? Anything else you wanted to leave listeners with, anything you wanted to double down on, that we haven't already touched on before we get to our very exciting lightning round? I see a lot of people playing around with AI, building prototype apps and tinkering with building workflows. I would really push people towards building apps that you're actually using every single day, because I think only through that usage are you actually getting the value. If you build a prototype app that isn't helping you get more done, then the AI isn't really adding value to your day. And there's only so much you learn from that when it's like, okay, I just one-shotted something. Oh, that's cool.
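The skill-customization loop described a moment ago (define a skill, use it, feed corrections back, then read the skill to confirm the feedback landed) reduces to a small append-and-verify cycle. This is a toy sketch under stated assumptions: the file name, the `.skill.md` layout, and the feedback format are all hypothetical, not how Claude Code or Cowork actually store skills.

```python
# Toy sketch of a skill-feedback loop: a "skill" is just a stored instruction
# file that accumulates your corrections so every future run sees them.
import tempfile
from pathlib import Path

def add_feedback(skill_path: Path, feedback: str) -> None:
    """Append a correction to the skill so future runs incorporate it."""
    text = skill_path.read_text() if skill_path.exists() else ""
    skill_path.write_text(text + f"\n- {feedback}")

with tempfile.TemporaryDirectory() as workdir:
    skill = Path(workdir) / "triage_email.skill.md"   # hypothetical skill file
    skill.write_text("Label cold outreach as 'spammy'.")

    # After spotting a miss (like the newsletter filed as spam), feed the
    # correction back into the skill...
    add_feedback(skill, "Never file newsletters I subscribed to as spammy.")

    # ...and read the skill back to confirm the feedback was incorporated.
    updated = skill.read_text()
```

The friction Cat points out is visible even here: the user has to know that each of these steps exists and where the skill lives, which is exactly what a more seamless flow would hide.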
And then you never come back to it. You're not learning a lot, and you're not getting much leverage from it. Actual leverage. Yeah, that's such a good point. I also think there are a lot of people who spend a lot of time customizing their workflow. So there are two ends of the spectrum. One is people who never customize or never build automations, but there's this polar opposite end of people who obsess over customizing their tool, adding a ton of skills and MCPs and these workflow improvements. And I think sometimes that can even distract from your core goal of launching some product or building some feature. I think there's a lot of fun in customizing, and we definitely want to make our products very hackable so that you can make them work really well for you. But there is a limit to how much it's useful. And I think there's a camp of people who maybe spend so much time customizing that they're not sleeping and not doing the core task that they originally set out to do. I see a lot of that on Twitter. Just, look at my setup. It's out of control. It's so optimized. Then what are you actually building? No, but my setup is so awesome. I could get so much done. I think the simple setups actually work better. /powerup, level up a little bit. Yeah, yeah. There's this Karpathy tweet that just came out yesterday where he talked about this interesting divide between people that tried ChatGPT or Claude back in the day, and they're like, nah, this is terrible. And they kind of gave up on what AI could do for them, and they're just so cynical, like, no way, it's not actually that big of a deal. And then there are people that are using it to code, essentially, who see the full, intense power of it and how good it is. And people on both sides don't understand the other side and how they see the world. And so your advice is really good here.
Just actually use it for real things and see how good it actually has gotten. Yeah. I think the big shift is that the 2024 generation of products were chat-based, and the Claude Code generation of products is action-based. And the big aha moment people have is when Claude can just do things on your behalf. It is an amazing feeling to know that the agent is capable of doing so much more than telling you what to do. The agent can actually just do it itself. And when people feel that, I think that's the eye-opening moment. Shout out to the Claude Chrome extension, where you can just watch it doing stuff. You'd be like, fill out this form for me. And it's like, all right, here I go. Exactly. Okay. Anything else before we get to our very exciting lightning round? No, let's do it. Let's do it. Kat, I've got five questions for you. Welcome to the lightning round. There's this animation in that place. I have to make sure to say it. Are you ready? I'm ready. First question: what are two or three books that you find yourself recommending most to other people? I really like How Asia Works. It's a story about economic development and what policies and governments make long-lasting, successful economies. The other book that I'm really into is The Technology Trap. This is actually about the past few technology revolutions, the industrial revolution and the computer revolution, and how they affected workers. The reason that I really like this is because I think there's a lot we can learn from history to make sure that this transition goes well. And maybe on a fun note, I really like The Paper Menagerie. It's a book of short stories about coming of age and AI and self-discovery. Favorite recent movie or TV show you have really enjoyed? I really like Drive to Survive. There's no deeper meaning to it.
I just, there's just something very satisfying about people being so obsessed with a singular engineering goal, and just the purity of the pursuit. And I also really love Free Solo, which is about Alex Honnold climbing El Capitan without a rope. And I think, similarly, it's just such a pure achievement to be able to climb this extremely challenging, dangerous route, and to have the mental focus to do it, knowing that if you make a single mistake, you die. It's insane. Yeah, that movie is out of control. And it's interesting how these relate in some way to the work you do. I actually am a rock climber. I first watched Free Solo before I climbed rocks, and so I thought it was impressive, but I didn't understand how impressive it was. It's one of the rare movies where the more you know about it, the more you're blown away by how insane it is. The kinds of moves he's doing on the wall are things that I don't think I will ever be able to do in my lifetime, even if it were set in a gym, one foot off the ground, with a rope. Did you see the documentary with that other guy, the younger one, that went on the ice? I did. That one was very sad, but that was wild. Okay, favorite product you recently discovered that you really love? The product that has most changed my life outside of Claude products is probably Waymo. I'm a diehard Waymo user. I use it twice a day, to get to and from work. The two things that I really like about it are, one, I don't feel bad if a Waymo is waiting for me, and so I feel less pressure to be right at the curbside the moment it arrives. And the second thing is, I feel like it lets me be a bit more productive.
When I'm in the car with another human, I typically try not to do any work calls. I feel a little rude if I'm on my laptop the whole time. But one thing I really appreciate about the Waymo is I can call into a work call. I'm not worried about someone overhearing me. I'm not worried about, hey, is this rude, am I talking too loud, do I need to ask someone to change the music? And so I feel like this has given me back 30 minutes every day. All these second-order effects of technology. It's so interesting. Yeah. I always thought Waymo needed to be priced lower than Uber and Lyft to succeed, but actually, I'm very happy to pay a 2x premium for it. I love Waymo. It's just, once you see it, you're like, this is insane. And then you get used to it. You get in there and you're like, this is crazy, and then you forget about it. Totally. And I think it's also changed the vernacular. A lot of people at Anthropic love Waymo. In the past, you'd be like, hey, let's call a ride with whatever app, and now everyone's just like, okay, is the Waymo here? Okay, two more questions. Do you have a favorite life motto that you often come back to in work or in life? Just do things. That tracks, that tracks. I think there's a lot of value in first-principles thinking. If you know what you're optimizing for and you have strong first principles, then you can normally deduce what the right course of action is and clearly articulate that to all the stakeholders, and then you should just do it. I think jobs are fake: if you understand the constraints, you can figure out what you can do, and then just try to do it quickly, learn from the mistakes, and apologize or fix them if you did something wrong. You can just do things, whoever said that. I think it's liberating, actually, to tell people this.
I think in a lot of companies, roles are very strictly defined: okay, this is what the PM does, this is what the designer does, this is what the engineer does. And even team scopes are very rigidly defined: hey, this corner of the code base we touch, and this corner we're not allowed to touch. And I think what "just do things" lets people do is feel empowered to make these decisions, empowered to operate across team boundaries, just to get something done. That feels like a big, important skill to be good at. People call it agency. Just do the things that need to be done. Bias towards action. All these ways of describing just not waiting for permission. Yeah. I think this is my favorite reason to work at a startup at some point in your life, because one thing that was very life-changing for me was actually working at Scale when we were 20 people. There was just no process, and we had really big problems that we needed to solve. I really appreciate Alex and the rest of the team for empowering me and everyone else to just figure things out without any boundaries around what sales is supposed to do, what ops is supposed to do, what engineers are supposed to do. You have all the tools at your disposal, you have some ambitious, hairy problem statement, and you can do whatever you need to get to a good solution. You almost need that experience to build that skill, to feel comfortable doing that. Because a lot of people, you know, they go through school or college, and it's all, do the thing we tell you to do and then you will get a good grade. And you have to kind of unlearn that: okay, I'm just going to do the thing that needs to be done, and even if people think it's dumb, I think it's the right thing to do. Yeah, exactly. Okay, actually, I have two more quick questions. Two more final questions.
One is, when Claude thinks, there are all these, I don't know if you call them verbs. What's the term for these things? Thinking words. Thinking words. And interestingly, these all leaked in the source code. Do you have a favorite thinking word? I really like manifesting. It's also the sticker that I have on my laptop. Oh, amazing. Clearly the winner. Okay, final question. With AGI potentially arriving in our lifetime, when you potentially don't have to work, what are you going to do? What are you going to do with all your time? I think it will take a long time for AGI to diffuse across society. So I think the immediate thing is actually just helping bring the world along. My non-serious answer for after this happens is I'll probably just do a lot of rock climbing. I'll probably move to Fontainebleau and just live amongst 10,000 boulders and climb for a bit. There are also so many books I want to read. My goal is to be able to read one or two books a week, and I'm currently at probably 0.5. The backlog is pretty big. I think there's just so much we can learn from history and so much that I don't understand as well as I would love to. I don't know anything about physics, or robotics, or any hardware, or aerospace. There are just so many interesting topics. So I'm excited to learn, even knowing that the AGI will already know it. Kat, this was amazing. You're awesome. Two final questions: where can folks find you online if they want to reach out and follow what you're up to? And how can listeners be useful to you? The best way to reach out is, I am underscore Kat Wu on Twitter. Feel free to tag me in things. Feel free to DM me. I read all my DMs. I don't always respond to every single one, but I will read them all. And then the thing that is most helpful is: tell us where Claude Code and Cowork aren't working well for you.
We are very grateful for the amount of positive feedback, but the thing that we thrive on is edge cases, errors, specific tasks that we can reproduce where Claude Code or Cowork fail. Because if you're able to share that with us and we're able to reproduce it, then this is something that we're able to actively improve for our next generations of models and for our next harnesses. Extremely cool. People on Twitter are not shy with sharing this feedback, so keep it coming. Please, please share the problems that you're having with us. Yeah, and it's really cool to see all your team being so active on Twitter and responding to people. So what I'm hearing is, this is actually stuff you all see and react to. Yeah, we appreciate everyone being so engaged with us. It gives the team a ton of energy. We have this channel of user love, and whenever you share a success story, we post it there. And whenever you share issues with our product, we put it into our feedback channel so that our broader team is able to act on it. That is so cool to know. Thanks for sharing that. Well, Kat, thank you so much for being here. Thanks for having me. Bye, everyone. Thank you so much for listening. If you found this valuable, you can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. Also, please consider giving us a rating or leaving a review, as that really helps other listeners find the podcast. You can find all past episodes or learn more about the show at Lennyspodcast.com. See you in the next episode.