The Trillion Dollar AI Software Development Stack
38 min • Oct 10, 2025

Summary
A16z partners discuss the trillion-dollar opportunity in AI-powered software development, exploring how AI coding tools are disrupting the entire development stack from planning to deployment. They analyze the shift from human-centric to agent-centric development workflows and the emergence of new infrastructure needs for AI agents.
Trends
Shift from human-centric to agent-centric development workflows
Rising infrastructure costs for AI-powered development
Acceleration of legacy code migration projects
Emergence of multi-agent development orchestration
Customization becoming more accessible through AI coding
Traditional development metrics becoming obsolete
Need for real-time, high-frequency repository systems
Software becoming self-extending through AI integration
Agents requiring specialized tooling and infrastructure
Massive market opportunity creating startup ecosystem disruption
Full Transcript
AI coding is the first really large market for AI. When do we say this is all agents, and we're just at the end of the value chain going, does this work or not work, click yes or no? Agents more than ever need an environment to run these things. Context engineering for both humans and agents. Every single part of it is getting disrupted. It's not just somebody writing code, your classical developer, being disrupted, but everybody along the value chain. So, Yoko, we just launched this, I think, amazing new dev stack for the AI coding environment. And I'm really, really excited about this. Yeah, let me start with a very high-level pitch for why I think this is so incredibly exciting. I think AI coding is the first really large market for AI we've seen. There's a ton of investment that has flowed in, and the question now, to some degree, is: where's the value? Why are we doing all of this? AI coding can create an incredible amount of value. If you think about it, we have about 30 million developers worldwide. Roughly, let's say each of them generates $100,000 in value. In the United States that may be low, because many of them get paid a lot more, and internationally it might be a little high, but I think the order of magnitude holds. So in aggregate, the value we're creating here is about 30 million times $100,000, so $3 trillion. I will argue even more, because that's just developers. But then there are also people who are development-curious. That's right. Not developers. I mean, design engineering is a big thing now. Every designer, product managers, doc writers: they write code. Yeah, there are so many of these adjacent effects, but if you just take the $3 trillion figure, that's about the GDP of France.
So the claim we're making here, as crazy as it sounds, is that the entire population of the seventh or eighth largest economy on the planet generates about as much value as a couple of startups that are reshaping the AI software development ecosystem, plus the LLMs underneath. Yeah, that's crazy. Everything we see and touch and use nowadays is software. That's right, yeah. So software has disrupted everything in the world, and now software itself is getting massively disrupted. Totally. And what you mentioned in the blog post is really interesting, because we're now more capable of using LLMs to generate code and produce software. But as a result, it's not like there are fewer jobs; there's actually more and more software being produced. Before, maybe it was a SaaS service catering to hundreds or thousands of people's needs. Now you can really just vibe code things: software built one for one, for a single person. Yeah, I vibe code. I do that. Yeah, exactly. You do that exactly. I vibe coded my own email filter. I don't do much of using an LLM to reply to my email, but I have a filter that categorizes things with labels. Only for some emails. Only for some emails. So the first question becomes: how do you think the development loop is shifting? I think the answer is complex, and frankly it's so early. In this AI revolution we don't have the full answer yet, but we have our little stack, our little software development lifecycle post-AI, in the blog post. And I think the biggest learning from that is probably that every single part of it is getting disrupted. It's not just somebody writing code, your classical developer, being disrupted; everybody along the value chain is getting disrupted. What's the most surprising part for you? What's the most disrupted field in coding today, and what do you think it will be?
What will AI come after next? So I think we've seen the biggest growth, it's safe to say, in the classic coding space: IDE-integrated coding assistants, or more agentic coding assistants. Right. You know, the Cursors and Devins and GitHub Copilots and Claude Codes of the world. Right. I think that's where we see the most traction, where we see incredible revenue growth. I want to say that segment possibly has the fastest revenue growth of any startup sector we've seen in the history of startups, which is again an incredible statement. So I think this is currently the vanguard, and everybody's aware of it. We're seeing billion-dollar acqui-hires and takeover offers. So that's an incredibly vibrant sector. Now, which one is next? That's a really good question. So to be very specific, in the blog we wrote about the basic loop. The basics are: you plan, you code, you review. Yeah. Where does the LLM come in? Where do you think more of the loop will be disrupted? Do you think the loop will stay the same as what we used to have, or do you think it will look very different? I think at this point it's very hard to speculate about the end state. But let's assume you've seen the first email. Yeah, I'll get there. If you looked at the first email sent over the Internet, you could sort of predict that we'll probably have websites and these things. Maybe, if you're good, you can. Right. But saying, hey, the net effect of this is that everybody can rent out their house and compete with hotels, and this is going to be the biggest hotel company on the planet, you'd be like, well, that's a little far-fetched. But now we have Airbnb. Right. So these secondary effects are really, really hard to guess. Look, my current hypothesis is that we'll still have software developers. I think they're not going anywhere. Right. I think what they do will look completely different. Right? Yeah.
I think CS education, frankly any CS class taught today at any major university, is probably best seen as a historical relic from a bygone time. Right. I mean, if you look at what the best-of-breed startups are doing, the loop the developer is in looks so different from what you did before. You have multiple agents that you're prompting, that you're telling things, and you pull that back into a UI. You're trying to understand what they did; you're trying to put them back on the rails. It's a lot more thinking at a higher level. All of coding has been moving to higher levels of abstraction, but I think we're making a huge leap here. So what it's going to look like, I have no idea. My gut feeling is we'll probably have more developers. Look, this basic plan-execute cycle: there's probably going to be some flavor of that that's still around, is my guess. One of my top questions is: will this be a step-by-step loop, or will it be meshed into just one step? One example: if my agent is writing code, do I still need to review it, or do I have another agent that just reviews it? If it's the same agent, that's an implementation detail. You can separate out the agent generating code from the agent reviewing code. But if it's all one agent, both generating and reviewing code, is it just the same step? Do we actually disaggregate the process into steps with a human in the loop? If, as a human, I write code and I want an agent to review the code, that makes sense to me. So I do wonder: when do we pull something out as an individual tool, an individual step an agent takes care of, and when do we say this is all agents, and we're just at the end of the value chain going, does this work or not work, click yes or no? I think the time periods over which an agent can work autonomously will get longer.
You know, still, if somebody says, look, I want you to write a complete ERP system for my multinational enterprise, go: there's no way I could imagine it just running and coming back with software that actually fits the requirements. In part, I think the problem is that models are still very, very far away from being able to run autonomously for that long. But the other problem is: let's assume this was an all-human team. We wouldn't understand all the challenges yet at the beginning. We'd have to revisit the design, we'd have to revisit the architecture, that has cost implications, and so on. So at some point you sort of need to go back to the architects and the product managers and say, hey, we had a plan A, it didn't quite work, or we found new challenges. So here's our updated plan A, or, you know, plan B. Is this what you want to do? So I think the loop will still be there. The timescales will probably change, but yeah, it's very, very hard to guess right now. Yeah. Another thing we're starting to see more often, contrary to how much humans need to come in and intervene in the loop, is that we're starting to give agents tools so they know what they have to do. One example is this loop I see so often: the agent wants to implement, say, Clerk auth in an app. Now it needs to go to Mintlify or Context7 and ask, what is the latest version of Clerk, and how can I implement it correctly, and in what file? I'm not going to copy-paste it into Cursor or give it to the agent, because I'm too lazy as a human. Now the agent should be able to call the API itself to put the right stuff in the context and make it work. This is just one example of the behavior change we're seeing. Before, as developers, we were so used to going back to the docs, referring to the docs, and telling the agent what to do. Now agents can, obviously. So we cut out the middleman, right? We cut out the middleman. I don't need to route all these requests for the agents anymore.
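The docs-fetching loop just described, where the agent calls a docs tool itself instead of a human pasting documentation into the prompt, can be sketched roughly like this. All names here (`DOCS_INDEX`, `lookup_docs`, `build_context`) and the docs text are illustrative assumptions, not any real provider's API; a real system would query a service like Mintlify or Context7 over HTTP.

```python
# Toy stand-in for a hosted docs service the agent can query.
# In a real setup this would be a network call, not a dict lookup.
DOCS_INDEX = {
    ("clerk", "latest"): "Illustrative Clerk docs: wrap the app in a provider component ...",
}

def lookup_docs(package: str, version: str = "latest") -> str:
    """Tool the agent calls to pull current docs into its context."""
    return DOCS_INDEX.get((package, version), f"No docs found for {package}")

def build_context(task: str, packages: list) -> str:
    """Assemble the prompt: the task plus fresh docs, no human middleman."""
    docs = "\n".join(lookup_docs(p) for p in packages)
    return "Task: " + task + "\nRelevant docs:\n" + docs
```

The point is the shape of the loop: the agent decides it needs docs, calls the tool, and injects only the relevant text into its context window.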
And then I think there are other examples, like verification. Before, as a human, when I wrote code or reviewed other people's code, I'd pull out the code, and the first thing I'd do was actually not review it, because I don't like reading code. I'm not a human compiler. The first thing I do is check out the change and see if it still works. If it doesn't work, I just don't review it. Nowadays there are opportunities to give agents a native environment to first see: does this work? Does the UI still look good? Do all the requests still check out? Did it break my build? All before the human needs to come in and review. Maybe that manifests itself as part of the local development process, maybe in the PR review process. But in any case, agents now more than ever need an environment to run these things. When I used to write something just for myself, like a little script I needed somewhere, I usually didn't include unit tests, right? For production code it's different, but for personal things, it's like, yeah, it's a single developer, I know what I'm doing here. With agents, I now start including unit tests, because they're so much easier to write and they allow an agent, as you said, to understand whether the changes it made broke anything else. And the agent may no longer have the context of how the original was built in an easily digestible form. So that's super valuable. On the grand scheme of how much economic value this generates: where in the value chain do you think it's growing the fastest? You see where the agents are producing so much more value than other areas. What are the areas you feel will be the next to take off? So look, I'm talking to a hundred or so enterprises about this per year.
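The unit-tests-as-agent-verification idea from this passage can be made concrete with a tiny sketch: the tests double as a gate an agent (or a CI hook) can run before handing a change to a human reviewer. The `slugify` function and all names here are invented for illustration.

```python
import unittest

# A tiny piece of "production" code an agent might have just edited.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class SlugifyTests(unittest.TestCase):
    # The tests are the agent's verification harness: if they fail,
    # the change never reaches a human.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_spaces(self):
        self.assertEqual(slugify("  AI   Coding  "), "ai-coding")

def change_is_safe() -> bool:
    """Run the suite programmatically; gate the PR on the result."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTests)
    result = unittest.TestResult()
    suite.run(result)
    return result.wasSuccessful()
```

An agent loop would call `change_is_safe()` after each edit and only surface the diff to a human once it returns `True`.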
When we take our portfolio companies to them as potential customers, what I'm hearing is that the number one use case in terms of ROI right now is legacy code porting. It's not super surprising. One of the first papers in the space was from Google, right? They wrote a fantastic paper where they detailed doing very mundane things like replacing a Java library across a very large code base. We're talking millions of lines of code, right? So it's a very logical use case. What do you consider the legacy stack, and what do you consider the new stack? It totally depends, but for the banks it's often COBOL or FORTRAN to Java. COBOL, I haven't heard that word in a long time. You know, I actually wrote COBOL code once, in the 90s. It probably dates me at this point. But actually, how are LLMs with COBOL? They're apparently extremely good. I was surprised. So here's the thing: one of the hardest parts of implementing code with LLMs is just getting the specification precise. If I can specify something very precisely, then usually the LLM can do a good job implementing it. So what many of these companies do is take the legacy code, have an LLM write a specification that fits the legacy code, and then say: reimplement the specification, and you may look at the original code as a tiebreaker if something is not clear. And that seems to work incredibly well. I've heard from several sources now that you can get about a 2x speedup versus traditional processes for that. And that's amazing. What this has led to is that of the enterprises I've talked to, the majority, at least the majority of those that are sophisticated about this, say they're currently accelerating developer hiring.
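The porting loop just described (extract a precise spec from the legacy code, then reimplement from the spec, consulting the original only as a tiebreaker) might look roughly like this. This is a hedged sketch: `call_llm` is a placeholder, not any real provider's API, and the prompts are invented for illustration.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a model provider's completion API; wire in your own.
    raise NotImplementedError("placeholder, not a real API")

def port_legacy_module(legacy_source: str, target_language: str = "Java") -> str:
    # Step 1: have the model write a specification that pins down the
    # observable behavior of the legacy code (inputs, outputs, edge cases).
    spec = call_llm(
        "Write a precise behavioral specification for this legacy code:\n"
        + legacy_source
    )
    # Step 2: reimplement from the spec alone. The legacy source is attached
    # only as a tiebreaker for ambiguities, not as something to transliterate.
    return call_llm(
        f"Implement this specification in {target_language}.\n"
        f"Spec:\n{spec}\n"
        f"If the spec is ambiguous, consult the original code:\n{legacy_source}"
    )
```

The design choice matters: translating code to code tends to carry over legacy idioms, while spec-then-reimplement lets the model produce idiomatic code in the target language.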
We don't know if this is a long-term trend, but right now they're basically saying: look, we found so many low-hanging-fruit projects where, with a little upfront investment, we can then save infrastructure costs. And that's super exciting. What this will mean for how much longer the mainframe business is going to be around, I don't know. But there's definitely a shift where suddenly legacy code migration is much, much easier than it was before. I think that'll change a lot of the dynamics in the classic enterprise software space. That's interesting. I do wonder if we'll get new mainframe code now, because before, no one knew how to program those things. That could also be. Yes, right. And now you realize you can program a mainframe using natural language. Yeah, totally. So another possibility is that we get a renaissance of the underlying legacy coding languages. It's crazy to me how versatile these coding assistants are. I mean, we're seeing them write CUDA kernels, which is difficult stuff to write by any metric. Yeah, I've tried them with a language that basically has no usable training data set, and they're still able to abstract, with a couple of examples of how the code has to look. It's not perfect, but I think it's a very, very broad technology for sure. Recently, just like what we were talking about before with code reviews: to your point, LLMs are so good at generating code that sometimes it's beyond our comprehension. We'll take more time to review the code than the coding agents took to write it. That's not a controversial opinion; that's just how it is, that's the reality. Yeah. So it does make me wonder how our development chain and steps are going to evolve from here. How are we going to do PRs when we can't possibly review thousands of lines of code as humans?
So does that mean the right abstraction is still code, or is the right abstraction now for us to review plans? And if that's the case, is GitHub review still the right abstraction for that? I think there's still a role for review in general. The question is: will humans do the review? Right now, most of the code an LLM generates, unless you're deep in vibe coding territory where you're just like, oh, this is a one-off, I just want to try something out, maybe then you don't review it, you just hit Accept and hope for the best. But for anything else, you do review it. You still review the code line by line. That said, we're starting to see really good tools that plug into your backend system, your GitHub. Whenever a pull request comes in, they analyze it, they comment on it, they point out security vulnerabilities, they point out that the spec is different from the implementation, they point out that this creates dependencies that may not be desired, they enforce coding guidelines, which is very, very powerful. I haven't met anybody yet, tell me if you have, who basically says: we're going to rely purely on AI to review code, and anything can go in if the AI signs off on it. But I have seen, for example, companies saying: before, we had two developers review code, and now it's one developer. Or cases where the AI just hangs out in the GitHub discussion and comments on things, so you can delegate tasks. Like: can you look at this? Are we using this library somewhere else? So you certainly have somebody who can help with these tasks. My hot take here: if a PR is supposed to give us, as developers, context on what these coding agents or my colleagues changed that I should be aware of, I don't think reviewing code line by line is the right abstraction, because the change might be at the feature level, it might be at the performance level.
So now it might just become a one-liner: this agent improved your CUDA implementation. And maybe I don't even know CUDA, but I know the improvement when I can verify it. So the question for me is: do I still need to go to the PR and review every line, or should I just be given two sentences explaining how this works, plus an environment to test it out? If it's the right two sentences, that works. Yeah. Will the LLM always pick the right two sentences? I don't know yet. Yeah, that's true. I mean, if you give it an environment, the question is: can you verify the environment against the two sentences? If the answer is yes, that's easier to solve. If the answer is no, that's harder. But I think there's a bigger picture here, which is that LLMs are also very good at generating documentation and descriptions for code. So when I use, say, Cursor for coding and it generates code, I often ask it afterwards: now take the internal documentation and update it. The internal documentation is important for me, but also for the coding agents, because they want to be able to refer to it. You don't want to take the entire code base and stick it into the context window every single time. It's massively inefficient and slow. If instead you can just say, read this document, it'll explain the class hierarchy to you, and then based on that, implement this new subclass, that's much, much faster. I think there's a real opportunity to get to much better documented code than before. A compiler is great in that it takes a high-level abstraction and translates it into a lower-level one. But now we have the ability, if somebody hand-optimizes the lower-level one, to use that to update the higher-level one. That's true. What is the new compiler in the age of AI? LLMs are sort of a compiler, right? In some way they take a higher-level description and compile it down.
I mean, I think the big thing that's missing is that it doesn't natively have what a compiler gives you: does this compile or does it not compile? It doesn't tell you, does this work, which is a very subjective thing. Right. A compiler is a strict enforcement of certain things. I can rely on that. If I'm using Rust, things will be typed. That's a huge step forward, and I can exclude certain bugs. With an LLM, that's not the case. Now, that said, you sort of wonder: is this just an initial thing? Over time, for example, we can give LLMs tools that allow them to syntactically parse code. And then suddenly they can start reasoning above the code. They can ask: are we sure that the representation of object X is the same across these different modules, even if it gets serialized in between? Or something like that. I don't mean to make this episode about disaggregating GitHub, but GitHub is so central to so much of our workflow, right? Everything goes through it, from the social aspect of discovering what other developers are doing, to distributing software so you can download things, to keeping track of what you have changed in Git, to all the integration with the build system on the backend. Yeah. So another interesting thing we're starting to see is that people use Git repos very differently now. Before, humans would make some changes and commit them, and other people would see them, and when you opened a PR, people would see the different revisions. Now agents are making so many changes that it's kind of counterproductive to commit everything. And repos usually have very low rate limits, because they're designed for humans to use. So the question becomes: what is the new repo-like abstraction that handles distributed infrastructure but also caters to high-frequency commits? And sometimes when agents make these commits, they don't even want to preserve them forever.
It's more like an intermediate step, so that they can explore five different paths but then revert to any of them. And when they're happy with it, they come back to GitHub. So one thing I've been playing around with: there's this company called Relays, and they have this repos feature where you can give a repo to the agent, and the agent will come in and commit at very high frequency. It works really well with vibe-coding agents, and it has been such a great experience. And in the wild, I'm starting to see people build this internally too. It makes total sense. Look, if we completely change how humans write code, or we're shifting from humans to delegating most of the writing to agents, I think it would be foolish to assume that the underlying services that were a good fit for the human world are still a good fit for this new agent world. That's almost certainly not the case; there are places where we can do better. I think you're completely right. For source code repositories, we want something that's much more real-time. Honestly, let's take this to the limit. Let's assume I now have, just me personally, 100 agents running in parallel that all try to implement features in my code base. You probably need some kind of coordination mechanism between them, so they're not all trying to edit the same file. Rebasing only carries you so far; you just get too many collisions. They all need shared memory on the repo they're working on, because you don't want to reinstall the dependencies every time. That's right. Yeah, exactly. So you probably need something that's much, much more flexible and more real-time. And I think we're still figuring out what that is. What Relays is doing there is amazing. And it's not just true for GitHub, though GitHub is one of the big platforms we use. Take specification writing with, say, Confluence or Jira, for example.
If you developed these systems bottom-up with AI in mind, they would probably look very, very different. Right. Should my story tracker have a function that looks at code and updates stories accordingly? Yeah, that would be very natural. Right. I think we've seen this for documentation. Something like Mintlify looks at the documentation problem very, very differently than previous generations. I think we're seeing it for testing, we're seeing it for PR reviews. We're rethinking this entire stack. And that's exciting, right? That's true. Another behavior change we've seen, which is really interesting, is that developers are very lazy. Yeah. If we don't have to read something, we do not read it. We only read the relevant parts of documentation; we only skim through things. So the net-new behavior we see all the time with documentation hosted on Mintlify is that net-new users just go in and ask questions. And agents don't just ingest it anymore either; they actually run a query against this context. So context engineering, for both humans, because our brains are like LLMs, we need the context, and for agents, who need context too, is such a critical part of development from this point on. And then the question becomes: what are the other tools you think agents will need? We had this huge market map in the blog, and contrary to what all the previous market maps suggest, which is, here are the developer tools, there are now agent tools to make things better. In some cases the same tool caters to both. Right. We need to generate documentation for agents and for humans. So look, it's early days, but I think we're seeing a couple of big categories emerge. Sandboxes are helpful to try out code snippets or build things. How do you define a sandbox? I think we could record a whole episode on it, and I think we will at some point.
But look, at the end of the day, you need an environment with certain safety guarantees, right? LLMs hallucinate. LLMs can, if they use external sources, potentially be maliciously prompted to do bad things. And you just want something that limits the blast radius if anything bad happens. I think we're seeing interesting search tools and parsing tools, something like Sourcegraph. If I'm writing my code in a couple of files, we don't need that. But if you have a really large code base and the question is, look, we're trying to add a parameter to a function in a library that's widely used: where is this library used? That's suddenly a really hard problem. In Python, say, you can import something as something else, so a simple find operation will no longer find these things. You probably need syntactic parsing to find them. I think we're seeing documentation tools that are optimized for agents and allow agents to look things up. I think it's good if an agent can do things like web search, so we're seeing companies in that space. What am I missing? There are so many more. I think we're also seeing more specialized models, for things like code editing or re-ranking files, and that's definitely shaping up as a market. If you take a big step back: if we assume there's massive value creation here, that probably creates an opportunity to create a very large number of startups. If you had asked me 18 or 24 months back, I would have said, look, dev tools, it's a smaller kind of market, how big can this be, that's not very exciting. If you ask me today, it suddenly looks like this is a market that could go into the hundreds of billions of dollars. In theory, could it go to a trillion? I don't know.
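The "where is this library used" problem mentioned above is a good example of why agents need syntactic parsing rather than plain text search: in Python, `from json import loads as parse` hides every use of `loads` from a simple grep. A minimal sketch using only the standard `ast` module (function names here are our own):

```python
import ast

def find_calls(source: str, module: str, func: str) -> list:
    """Return line numbers where module.func is called, even under aliases."""
    tree = ast.parse(source)
    func_aliases = set()        # local names bound to module.func
    module_aliases = {module}   # local names bound to the module itself
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for a in node.names:
                if a.name == module:
                    module_aliases.add(a.asname or a.name)
        elif isinstance(node, ast.ImportFrom) and node.module == module:
            for a in node.names:
                if a.name == func:
                    func_aliases.add(a.asname or a.name)
    lines = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            f = node.func
            if isinstance(f, ast.Name) and f.id in func_aliases:
                lines.append(node.lineno)           # e.g. parse(...)
            elif (isinstance(f, ast.Attribute) and f.attr == func
                  and isinstance(f.value, ast.Name)
                  and f.value.id in module_aliases):
                lines.append(node.lineno)           # e.g. j.loads(...)
    return lines
```

Given a tool like this, the agent can answer "every call site of `json.loads` in this file" structurally, which a string search cannot do reliably.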
So how many companies can you create in that space? I have no idea, but probably a lot. And over time it'll consolidate. But I expect this to be an ecosystem, not a single business model. I have a fun question about going forward. When GitHub came out, they had these commit charts. They were on T-shirts. People compared their commit charts; people would make specific commits so the commit chart formed a nice graphic. Because before, commits were so tied to the value developers bring: how many commits do you make, how many lines of code do you change? I mean, we all know it's not the best proxy for value. Yes. Not at all. Right, so what would be the next commit chart on GitHub? What would that look like? That's an amazing question. Maybe how many tokens you burn? Do you come into the office and say, look, I burned 10 million tokens over the weekend? But token burn could also be very ineffective prompting or context engineering. Yeah, I just stuck my entire code base in the context window. Maybe it's the number of agents you use? That's even more gameable. Right. What is the unit of value that's the closest approximation to what you've delivered as value as a developer? Non-cached input tokens? No, that might be too complicated. I don't know. Yeah, to take this to a higher level: the metrics by which we evaluate software development are changing. A big, complex refactoring potentially isn't that much work anymore, because I can let the LLM do it and it's structurally easy. A specific optimization in a fairly obscure area where the LLM has no training data, I may have to do by hand, and it's vastly more valuable. So yeah, it's complicated. Maybe it's the number of apps vibe coded. Something else on the market map is agent toolboxes. There's actually a box I want to double-click on: agent orchestration.
You can now use not just one agent but multiple agents, even different copies of the same agent, and have them do things in parallel. Yeah. What's the implication of this? What can you do with multiple agents, orchestrating them together, that you couldn't do before? I mean, we're all very ADHD developers. I think there are a couple of things. Work faster, obviously. But you can also try out multiple approaches in parallel and see which one works best. I've seen approaches where people want to optimize a certain piece of code, so they fire off a couple of agents with slightly different approaches and then just measure which one works best. I see. And some startups are proposing doing that in an even more automated way, without human intervention: you just say, optimize this, and it gets kicked off in the background. This all takes a crazy amount of tokens in the end, right? So I think this is another really interesting trend. Three months ago, I don't recall anybody talking about the cost of coding assistants. Right? Yeah, three months ago, honestly, nothing. Go to a Reddit forum on Cursor: there was pretty much nothing posted there about cost. Today, that's one of the number one topics in those forums, maybe the number one topic. Because we figured out that with high-powered reasoning models and very large context windows, we can have single tasks that suddenly cost dollars, maybe not tens of dollars, but dollars is very, very doable, and that adds up. It depends. If you're a super high-end programmer, maybe it doesn't matter. If you're anybody else, a couple of dollars an hour is a substantial expense. If you're in a low-cost location, it may end up costing more than what you're making.
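The "fire off a couple of agents with different approaches and measure which works best" pattern above reduces to a small harness: disqualify candidates that disagree with a reference oracle, then keep the fastest survivor. This is a toy sketch with invented names; real orchestrators would run the candidates in separate sandboxes and use real benchmarks.

```python
import time

def pick_best(candidates: dict, reference, inputs: list):
    """candidates maps a name to an agent-produced variant of one function.
    Keep only variants that match the reference on all inputs, then
    return the name of the fastest one (or None if all fail)."""
    best_name, best_time = None, float("inf")
    for name, fn in candidates.items():
        try:
            # Correctness gate: any mismatch disqualifies the variant.
            if any(fn(x) != reference(x) for x in inputs):
                continue
        except Exception:
            continue  # crashed: disqualified
        # Time the surviving variant on the same workload.
        start = time.perf_counter()
        for x in inputs:
            fn(x)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_name, best_time = name, elapsed
    return best_name
```

For example, with three agent-produced variants of "sum 0..n" (a closed form, a loop, and a buggy one), the buggy one is filtered out by the oracle and the timing pass picks between the correct two.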
It used to be that if I was writing software, oversimplifying slightly, I had one expense: people. Okay, they needed a laptop and some connectivity and an office, but in the grand scheme of things, the cost that really mattered, at least in the more high-cost locations, was the compensation of the person. That seems to have changed. Now we suddenly have infrastructure costs for a software engineer. They need a constant feed of LLM tokens to keep them happy, otherwise they're not productive. That'll probably change the industry somehow. I'm not sure we understand how. We know people will be building more, that's for sure. That's right. And the question becomes: does building more correlate with more tokens burned, to some extent? Right. I've met so many great engineers who are the top token-burning engineers at their company, and they're just so effective. They have two laptops side by side and run coding agents on both. It's like the shift from digging by hand to driving the excavator that does the digging, right? That's the shift. But that changes the industry. Probably the person is slightly happier, right? And in that case you need fewer of them. But you can also build a lot more. So I think it'll change the industry. Then I think there's more customization of software, to your point about building more. There's always a bespoke tool for any business out there. Take HR software: it covers 80% of what every company needs. The other 20%? I know this really well, because I used to be a PM on these enterprise software products. We built APIs so internal teams could build their own version. That's right, yeah. And then it's turtles all the way down: we build the base layer, the internal team builds the next layer, and then developers say, it still doesn't work, let me build something else. But now, with vibe coding, I actually think customization is easier than ever.
You may not even need a centralized team to build that layer. Using a commercial solution's APIs, you can just code it up yourself, like what I did. Something I've been thinking about is what the next workflow or automation tool is going to look like. Before, we had Zapier and the other great RPA tools of the world to make it work. Now it's vibe coding. Obviously you still need to know how to code to some extent to make it work. How would that change? I think we'll just end up having more workflows running somewhere. The question is how we unlock the net-new developers who are not traditionally technical but are now writing code to implement these. Maybe they don't need a graphical interface. Or if they do need a graphical interface, it can be represented by JSON, which is more agent-friendly. Yeah. In fact, we're starting to see almost self-extending software: software where a user with a prompt can add additional functionality. Yeah, that's so true. Which is crazy. Is that a trend? I think it is a trend. Will the next version of Microsoft Word, well, maybe a couple of versions down the road, have an "add feature" button in the help menu? So software, I think the takeaway point is, has more affordances than before because of the ability to integrate LLMs. What I mean by that is: before, if I'm a marketing company, I ship a feature so people can visualize six charts. Now I ship a chat session with an LLM. The LLM can reach back into my data and generate code to materialize whatever charts people want to see. So it's more than six; it's hundreds and thousands of things people will want to see. The interaction model between the end user of the marketing software and the LLM becomes one where users can materialize net-new features using their own words, a prompt, which is very different from before, when software teams shipped feature by feature.
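The "materialize a feature from a prompt" pattern could be sketched like this, with the model call replaced by a hypothetical stub (`fake_llm`). Instead of the vendor shipping six fixed charts, the app asks the model for a JSON chart spec, the agent-friendly representation mentioned above, validates it, and renders it:

```python
import json

def fake_llm(prompt: str) -> str:
    """Stand-in for a real LLM call. In the actual pattern, the model
    would generate this JSON (or code) from the user's words plus the
    app's data schema; here the mapping is hard-coded for illustration."""
    if "by region" in prompt:
        return json.dumps({"type": "bar", "x": "region", "y": "revenue"})
    return json.dumps({"type": "line", "x": "month", "y": "revenue"})

def materialize_feature(prompt: str) -> dict:
    # The "feature" was never shipped as vendor code; it's synthesized
    # on demand from the user's prompt.
    spec = json.loads(fake_llm(prompt))
    assert spec["type"] in {"bar", "line"}  # validate before rendering
    return spec

print(materialize_feature("show revenue by region"))
```

The validation step matters in practice: because the model output is data (a JSON spec) rather than arbitrary executable code, the app can constrain what a prompt is allowed to materialize.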
So now, with all that, what do you think people want to build, or what do you think developers should be building? What does the world need? It needs so much. I mean, there are two things I'm sure about. One: this is, over the last three or four decades, probably the best moment in time to start a company in the development space, when you have such a massive disruption. This is what allows a startup to really grow and scale and pick a battle with the incumbents. I think we've seen that with GitHub Copilot from Microsoft: first in the market, a relationship with the number one model company in OpenAI, they have the number one source code repo, they have the number one IDE, they have the best enterprise sales force. And still, we're seeing a swarm of competitors that are all doing very, very well against them. So this is really the time. The second thing that I'm 100% convinced of is that the good ideas are not coming from the VCs but from the entrepreneurs. So if you spot an opportunity to do something better with AI right now, you can probably add value. Right. And then it's about fast execution, it's about building a great team, it's about running very, very fast. But my prediction is that we will fund dozens of companies in this space going forward. Yeah, we're excited to fund the next wave of startups. But if you're looking for ideas, here are two general directions. The first one is: what are the traditional workflows that you can now reinvent? It might not be one to one. A better git may not be git exactly; it might be git and something else. Right. And if you just map out the value chain, like what we have in the blog post, you can pick a box and decide what you want to do with it. I think that's right. Yeah, that's one way to do it. The other way is very different from before. As product people, we used to build only for humans, for other developers.
Now we actually build a lot for the agents. Agents are the customers. Does the agent need better context? That's right, you should build for that. Does the agent want lower latency for certain models? Well, there are companies shipping code-apply models that operate way faster with higher accuracy. That's also a need. And when you look into where agents don't yet work well, there are plenty of things to work on, from easily reusable sandboxes, and I know there are great companies already building those, to how you mesh the PR review process with the development process. There's just so much more there to reinvent in how agents could work. So treat agents as your customer. Yes. And build the classic infrastructure for them, right? Absolutely. No, I think this is really an amazing time to start a company in this space. Yeah. Thanks for listening. If you enjoyed the episode, let us know by leaving a review. We've got more great conversations coming your way. See you next time. As a reminder, the content here is for informational purposes only, should not be taken as legal, business, tax, or investment advice, or be used to evaluate any investment or security, and is not directed at any investors or potential investors in any a16z fund. Please note that a16z and its affiliates may also maintain investments in the companies discussed in this podcast. For more details, including a link to our investments, please see a16z.com/disclosures.