All things AI w @altcap @sama & @satyanadella. A Halloween Special. 🎃🔥BG2 w/ Brad Gerstner
74 min
Oct 31, 2025
Summary
Brad Gerstner interviews Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman about their restructured partnership, including Microsoft's 27% stake in OpenAI valued at $130 billion. The discussion covers OpenAI's $1.4 trillion compute commitments, the creation of a $130 billion nonprofit foundation, and the future of AI development including potential AGI timelines.
Insights
- Microsoft's OpenAI investment represents one of the most successful tech partnerships ever, with a $13.5 billion investment now worth $130+ billion
- The AI industry faces a fundamental shift from traditional SaaS models to agent-based architectures that will require new business models and pricing structures
- Compute constraints, not demand constraints, are the primary limitation for AI growth, with power and infrastructure buildout being the biggest bottlenecks
- The partnership structure allows OpenAI exclusivity on Azure through 2030 while creating the world's largest nonprofit foundation focused on AI safety and health research
- AI is driving a reindustrialization of America with trillions in infrastructure investment, fundamentally changing productivity and economic growth patterns
Trends
- Shift from traditional SaaS applications to AI-powered agent architectures
- Massive infrastructure investment cycle driving US reindustrialization
- Compute constraints becoming primary growth limiter rather than demand
- Evolution from search-based to conversational AI interfaces
- Integration of AI agents into enterprise workflows for productivity gains
- Power and data center buildout as critical infrastructure bottlenecks
- Regulatory fragmentation at state level creating compliance challenges
- AI-driven scientific discovery becoming reality in 2026
- Consumer device evolution toward always-on AI assistants
- Transition from subscription to consumption-based AI pricing models
Topics
- Microsoft-OpenAI Partnership Restructuring
- AI Compute Infrastructure Investment
- AGI Development Timeline
- AI Safety and Regulation
- Enterprise AI Adoption
- SaaS Business Model Disruption
- AI Agent Development
- Scientific AI Discovery
- Consumer AI Devices
- Cloud Computing Growth
- AI Revenue Sharing Models
- Power Grid Infrastructure
- Federal vs State AI Regulation
- AI Productivity Gains
- Reindustrialization of America
Quotes
"I think this is one of the great tech partnerships ever. And certainly without Microsoft and particularly Satya's early conviction, we would not have been able to do it."
Sam Altman
"There has not been a single business plan that I've seen from OpenAI that they've put in and not beaten."
Satya Nadella
"If we could 10x our compute, we might not have 10x more revenue, but we'd certainly have a lot more revenue simply because of lack of compute power."
Sam Altman
"Nothing is a commodity at scale."
Satya Nadella
"I hope that Satya makes a trillion dollars on the investment, not 100 billion, you know, whatever it is."
Sam Altman
Full Transcript
I think this has really been an amazing partnership through every phase. We had kind of no idea where it was all going to go when we started, as Satya said. But I think this is one of the great tech partnerships ever. And certainly without Microsoft and particularly Satya's early conviction, we would not have been able to do it. What a week. What a week. Great to see you both. Sam, how's the baby? Baby is great. That's the best thing ever, man. Every cliche is true and it is the best thing ever. Hey Satya, the smile on Sam's face whenever he talks about his baby, it's just so different. And compute, I guess, when he talks about compute and his baby. Well, Satya, have you given any dad tips with all this time you guys have spent together? I said just enjoy it. I mean, it's so awesome. You know, we had our children so young, and I wish I could redo it. So in some sense it's just the most precious time, and as they grow, it's just so wonderful. I'm so glad Sam gets to do it. I'm happy to be doing it older, but I do think sometimes, man, I wish I had the energy of when I was like 25. That part's harder. No doubt about it. What's the average age at OpenAI? Sam, any idea? It's young. It's not crazy young, not like most Silicon Valley startups. I don't know, maybe low 30s, average. Are babies trending positively or negatively? Babies are trending positively. Oh, that's good. That's good. Yeah. Well, you guys, such a big week. You know, I was thinking about it. I started at Nvidia's GTC. You know, Nvidia just hit $5 trillion. Google, Meta, Microsoft. Satya, you had your earnings yesterday. And we heard consistently: not enough compute, not enough compute, not enough compute. We got rate cuts on Wednesday. GDP's tracking near 4%. 
And then I was just saying to Sam, the President's cut these massive deals in Malaysia, South Korea, Japan, and it sounds like with China, deals that really provide the financial firepower to reindustrialize America. 80 billion for new nuclear fission. All the things that you guys need to build more compute. But what wasn't lost in all of this was that you guys had a big announcement on Tuesday that clarified your partnership. Congrats on that. And I thought we'd just start there. I really want to break down the deal in really simple, plain language to make sure I understand it, and others do too. We'll just start with your investment, Satya. Microsoft started investing in 2019 and has invested in the ballpark of $13 to $14 billion into OpenAI. And for that you get 27% ownership in the business on a fully diluted basis. I think it was about a third, and you took some dilution over the course of the last year with all the investment. So does that sound about right in terms of ownership? Yeah, it does. But I would say, before even our stake in it, Brad, I think what's pretty unique about OpenAI is the fact that as part of OpenAI's process of restructuring, one of the largest nonprofits gets created. I mean, let's not forget that. In some sense, I'd say at Microsoft we are very proud of the fact that we are associated with two of the largest nonprofits, the Gates Foundation and now the OpenAI Foundation. So that's, I think, the big news. You obviously are thrilled. It's not what we thought, and as I said to somebody, it's not like when we first invested our billion dollars I thought, oh, this is going to be the hundred bagger that I'm going to be talking to VCs about. But here we are. And we are very thrilled to be an investor and an early backer. And it's really a testament to what Sam and team have done, quite frankly. 
I mean, they obviously had the vision early about what this technology could do, and they ran with it and just executed in a masterful way. I think this has really been an amazing partnership through every phase. We had kind of no idea where it was all going to go when we started, as Satya said. But I think this is one of the great tech partnerships ever. And certainly without Microsoft, and particularly Satya's early conviction, we would not have been able to do this. I don't think there were a lot of other people that would have been willing to take that kind of a bet, given what the world looked like at the time. We didn't know exactly how the tech was going to go. Well, not exactly. We didn't know at all how the tech was going to go. We just had a lot of conviction in this one idea of pushing on deep learning and trusting that if we could do that, we'd figure out ways to make wonderful products and create a lot of value, and also, as Satya said, create what we believe will be the largest nonprofit ever. And I think it's going to do amazingly great things. I really like the structure, because it lets the nonprofit grow in value while the PBC is able to get the capital that it needs to keep scaling. I don't think the nonprofit would be able to be this valuable if we didn't come up with the structure and if we didn't have partners around the table that were excited for it to work this way. But, you know, I think it's been six, more than six, years since we first started this partnership, and a pretty crazy amount of achievement for six years. And I think much, much more to come. I hope that Satya makes a trillion dollars on the investment, not 100 billion, you know, whatever it is. Well, as part of the restructuring, you guys talked about it, you have this nonprofit on top and a public benefit corp below. It's pretty insane. The nonprofit is already capitalized with $130 billion of OpenAI stock. It's one of the largest in the world out of the gate. 
It could end up being much, much larger. The California Attorney General said they're not going to object to it. You already have this $130 billion dedicated to making sure that AGI benefits all of humanity. You announced that you're going to direct the first $25 billion to health and AI, security and resilience. Sam, first, let me just say, as somebody who participates in the ecosystem, kudos to you both. It's incredible, this contribution to the future of AI. But, Sam, talk to us a bit about the importance of the choice around health and resilience, and then help us understand: how do we make sure that you get maximal benefit without it getting weighed down, as we've seen with so many nonprofits, by its own political biases? Yeah. First of all, the best way to create a bunch of value for the world is hopefully what we're doing and have already been doing, which is to make these amazing tools and just let people use them. And I think capitalism is great. I think companies are great. I think people are doing amazing work getting advanced AI into the hands of a lot of people and companies, and they're doing incredible things. There are some areas where, I think, market forces don't quite work for what's in the best interest of people, and you do need to do things in a different way. There are also some new things with this technology that just haven't existed before, like the potential to use AI to do science at a rapid clip, like really truly automated discovery. And when we thought about the areas we wanted to first focus on, clearly, if we can cure a lot of disease and make the data and information for that broadly available, that would be a wonderful thing to do for the world. And then on this point of AI resilience, I do think some things may get a little strange, and they won't all be addressed by companies doing their thing. 
So as the world has to navigate through this transition, if we can fund some work to help with that, and that could be cyber defense, AI safety research, economic studies, all of these things helping society get through this transition smoothly. We're very confident about how great it can be on the other side, but I'm sure there will be some choppiness along the way. Let's keep busting through the deal. So, models and exclusivity. Sam, OpenAI can distribute its leading models on Azure, but I don't think you can distribute them on any other big clouds for seven years, until 2032. That would end earlier if AGI is verified, and we can come back to that. But you can distribute your open source models, Sora, agents, Codex, wearables, everything else on other platforms. So Sam, I assume this means no ChatGPT or GPT-6 on Amazon or Google? No. So there's a category here. First of all, we want to do lots of things together to help create value for Microsoft, and we want them to do lots of things to create value for us. And there are many, many things that'll happen in that category. We are keeping what Satya once termed, and I think it's a great phrase, stateless APIs on Azure exclusively through 2030, and everything else we're going to distribute elsewhere, and that's obviously in Microsoft's interest too. So we'll put lots of products lots of places, and then this thing we'll do on Azure and people can get it there or via us, and I think that's great. And then the rev share: there's still a rev share that gets paid by OpenAI to Microsoft on all your revenues. That also runs until 2032 or until AGI is verified. So let's just assume for the sake of argument, I know this is pedestrian, but it's important, that the rev share is 15%. That would mean if you had $20 billion in revenue, you're paying $3 billion to Microsoft, and that counts as revenue to Azure. Satya, does that sound about right? 
Yeah, we have a rev share, and as you characterized it, it either runs until AGI or till the end of the term. And I actually don't know exactly where we count it, quite honestly, whether it goes into Azure or somewhere else. That's a good question. It's a good question for Amy. Given that both the exclusivity and the rev share end early in the case AGI is verified, it seems to make AGI a pretty big deal. And as I understand it, if OpenAI claimed AGI, it goes to an expert panel, and you guys basically select a jury who's got to make a relatively quick decision on whether or not AGI has been reached. Satya, you said on yesterday's earnings call that nobody's even close to getting to AGI and you don't expect it to happen anytime soon. You talked about this spiky and jagged intelligence. Sam, I've heard you perhaps sound a little bit more bullish on when we might get to AGI. So I guess the question is to you both: do you worry that over the next two or three years, we're going to end up having to call in the jury to effectively make a call on whether or not we've hit AGI? I realize you've got to try to make some drama between us here. You know, I think putting a process in place for this is a good thing to do. I expect that the technology will take several surprising twists and turns, and we will continue to be good partners to each other and figure out what makes sense. That's well said, I think, and that's one of the reasons why I think this process we put in place is a good one. And at the end of the day, I'm a big believer in the fact that intelligence, capability-wise, is going to continue to improve. And our real goal, quite frankly, is how do you put that in the hands of people and organizations so that they can get the maximum benefits? That was the original mission of OpenAI. That's what attracted me to OpenAI and Sam and team, and that's what we plan to continue on. 
Brad, to say the obvious, if we had superintelligence tomorrow, we would still want Microsoft's help getting this product out into people's hands. And we want them to, yeah, of course, of course. No, again, I'm asking the questions I know are on people's minds, and that makes a ton of sense to me. Obviously, Microsoft is one of the largest distribution platforms in the world. You guys have been great partners for a long time, but I think it dispels some of the myths that are out there. But let's shift gears a little bit. Obviously OpenAI is one of the fastest growing companies in history. Satya, you said on this podcast a year ago that every new phase shift creates a new Google, and the Google of this phase shift is already known, and it's OpenAI. And none of this would have been possible had you guys not made these huge bets. With all that said, OpenAI's revenues are still a reported $13 billion in 2025. And Sam, on your livestream this week, you talked about this massive commitment to compute, $1.4 trillion over the next four or five years, with big commitments: 500 billion to Nvidia, 300 billion to AMD and Oracle, 250 billion to Azure. So I think the single biggest question I've heard all week, hanging over the market, is how can a company with $13 billion in revenues make $1.4 trillion of spend commitments? And you've heard the criticism. First of all, we're doing well more revenue than that. Second of all, Brad, if you want to sell your shares, I'll find you a buyer. I think there's a lot of people who would love to buy OpenAI shares. I don't think you would sell. Including myself. Including people who talk with a lot of breathless concern about our compute stuff or whatever, who would be thrilled to buy shares. 
So I think we could sell your shares, or anybody else's, to some of the people who are making the most noise on Twitter about this very quickly. We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it's going to continue to grow, and that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds, that our consumer device business will be a significant and important thing, that AI that can automate science will create huge value. So, you know, there are not many times that I want to be a public company, but one of the rare times it's appealing is when those people are writing these ridiculous "OpenAI's about to go out of business" takes. I would love to tell them they could just short the stock, and I would love to see them get burned on that. But we carefully plan. We understand where the technology and the capability are going to go, the products we can build around that, and the revenue we can generate. We might screw it up. This is the bet that we're making, and we're taking a risk along with it. The certain risk is, if we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale. Exactly. And let me just say one thing, Brad. As both a partner and an investor, there has not been a single business plan that I've seen from OpenAI that they've put in and not beaten. So in some sense, this is the one place where, in terms of their growth and just even the business, it's been unbelievable execution, quite frankly. I mean, obviously with OpenAI, everyone talks about all the success and the usage and what have you, but even, I would say, all up, the business execution has been just pretty unbelievable. 
I heard Greg Brockman say on CNBC a couple weeks ago that if we could 10x our compute, we might not have 10x more revenue, but we'd certainly have a lot more revenue simply because of lack of compute power. Yeah, it's really wild when I just look at how much we are held back. In many ways, we've scaled our compute probably 10x over the past year, but if we had 10x more compute, I don't know if we'd have 10x more revenue, but I don't think it'd be that far off. And we heard this from you as well last night, Satya, that you were compute constrained and growth would have been higher if you had more compute. So help us contextualize, Sam. How compute constrained do you feel today? And when you look at the buildout over the course of the next two to three years, do you think you'll ever get to the point where you're not compute constrained? We talk a lot about this question of whether there is ever enough compute. I think the best way to think about this is like energy. You can talk about demand for energy at a certain price point, but you can't talk about demand for energy without talking about different demand at different price levels. If the price of compute per unit of intelligence, or however you want to think about it, fell by a factor of 100 tomorrow, you would see usage go up by much more than 100, and there'd be a lot of things that people would love to do with that compute that just make no economic sense at the current cost. So there would be a new kind of demand. 
Now, on the other hand, as the models get even smarter and you can use these models to cure cancer, discover novel physics, or drive a bunch of humanoid robots to construct a space station or whatever crazy thing you want, then maybe there's huge willingness to pay a much higher cost per unit of intelligence for a much higher level of intelligence. We don't know yet, but I would bet there will be. So when you talk about capacity, it's cost per unit and capability per unit, and without those curves it's sort of a made-up number. It's not a super well specified problem. Yeah, I mean, I think the one thing that Sam, you've talked about, which I think is the right way to think about it, is that if intelligence is whatever log of compute, then you try and really make sure you keep getting efficient. And so that means the tokens per dollar per watt, and the economic value that society gets out of it, is what we should maximize while reducing the costs. And that's where the Jevons paradox point comes in, which is you keep reducing it, commoditizing intelligence in some sense, so that it becomes the real driver of GDP growth all around. Unfortunately, it's something closer to intelligence equals log of compute. But we may figure out better scaling laws, and we may figure out how to do this yet. We heard from both Microsoft and Google yesterday. Both said their cloud businesses would have been growing faster if they had more GPUs. I asked Jensen on this pod if there was any chance over the course of the next five years we would have a compute glut, and he said there's a virtually nonexistent chance in the next two to three years. And I assume you guys would both agree with Jensen that while we can't see out five, six, seven years, certainly over the course of the next two to three years, for the reasons we just discussed, there's almost a nonexistent chance that you have excess compute. 
Well, I mean, I think the cycles of demand and supply in this particular case you can't really predict. Right? I mean, the point is, what's the secular trend? The secular trend is what Sam said. At the end of the day, quite frankly, the biggest issue we are now having is not a compute glut, but power, and the ability to get the builds done fast enough, close to power. So if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. In fact, that is my problem today. It's not a supply issue of chips; it's actually the fact that I don't have warm shells to plug into. And how supply chain constraints emerge is tough to predict, because the demand is just tough to predict. Right? I mean, it's not like Sam and I would want to be sitting here saying, oh my God, we are no longer short on compute because we just were not that good at being able to project out what the demand would really look like. And by the way, there's the worldwide side, right? It's one thing to talk about one segment in one country, but it's about really getting it out to everywhere in the world. And so there will be constraints, and how we work through them is going to be the most important thing. It won't be a linear path, for sure. There will come a glut, for sure. And whether that's in two to three years or five to six, Satya and I can't tell you, but it's going to happen at some point, probably several points along the way. There is something deep about human psychology here and bubbles. And also, as Satya said, it's such a complex supply chain, weird stuff gets built, and the technological landscape shifts in big ways. So, you know, if a very cheap form of energy comes online soon at mass scale, a lot of people are going to be extremely burned by existing contracts they've signed. 
If we can continue this unbelievable reduction in cost per unit of intelligence (let's say it's been averaging like 40x per year for a given level), you know, that's a very scary exponent from an infrastructure buildout standpoint. Now, again, we're taking the bet that there'll be a lot more demand as that gets cheaper. But I have some fear that it's just like, man, we keep going with these breakthroughs, and everybody can run like a personal AGI on their laptop, and we just did an insane thing here. Some people are going to get really burned, like has happened in every other tech infrastructure cycle, at some points along the way. I think that's really well said. And you have to hold those two simultaneous truths. We had that happen in 2000, 2001, and yet the Internet became much bigger and produced much greater outcomes for society than anybody estimated in that period of time. Yeah, but I think the one thing Sam said that is not talked about enough is, for example, the optimizations that OpenAI has done on the inference stack for a given GPU. I mean, we talk about the Moore's Law improvement on one end, but the software improvements are much more exponential than that. Someday we will make an incredible consumer device that can run a GPT-5 or GPT-6 capable model completely locally at a low power draw. And this is so hard to wrap my head around. That will be incredible. And that's the type of thing, I think, that scares some of the people who are building these large centralized compute stacks. And Satya, you've talked a lot about the distribution both to the edge as well as having inference capability distributed around the world. Yeah, I mean, the way at least I've thought about it is more about really building a fungible fleet. I mean, when I look at the cloud infrastructure business, one of the key things you have to do is have two things. 
One is a very efficient token factory, in this context, and then high utilization. That's it. Those are the two simple things that you need to achieve. And in order to have high utilization, you have to have multiple workloads that can be scheduled, even on the training side. I mean, if you look at the AI pipelines, there's pre-training, there's mid-training, there's post-training, there's RL. You want to be able to do all of those things. So thinking about fungibility of the fleet is everything for a cloud provider. Okay, so Sam, you referenced, and Reuters was reporting yesterday, that OpenAI may be planning to go public in late '26 or in '27. No, no, no, we don't have anything that specific. I'm a realist. I assume it will happen someday, but that was... I don't know why people write these reports. We don't have, like, a date in mind to do this or anything like that. I just assume it's where things will eventually go. But it does seem to me that if you guys are doing in excess of $100 billion of revenue in '28 or '29, you at least would be in position. How about '27? Yeah, '27. Even better. You are in position to do an IPO at the rumored trillion dollars. Again, just to contextualize for listeners: if you guys went public at 10 times $100 billion in revenue, which would be, I think, a lower multiple than Facebook went public at, a lower multiple than a lot of other big consumer companies went public at, that would put you at a trillion dollars. If you floated 10 to 20% of the company, that raises $100 to $200 billion, which seems like a good path to fund a lot of the growth and a lot of the stuff that we just talked about. So you're not opposed to it. You're not, but you guys are funding the company with revenue growth, which is what I would like us to do, no doubt about it. 
Well, I've also said I think that this is such an important company, and, you know, there are so many people, including my kids, who like to trade their little accounts and who use ChatGPT, and I think having retail investors have an opportunity to buy one of the most important and largest companies, honestly, that is probably the single most appealing thing about it to me. That would be really nice. One of the things I've talked to you both about, shifting gears again: as part of the Big Beautiful Bill, Senator Cruz had included federal preemption so that we wouldn't have this state patchwork of 50 different laws that mires the industry down in needless compliance and regulation. Unfortunately, it got killed at the last second by Senator Blackburn, because frankly, I think AI is pretty poorly understood in Washington, and there's a lot of doomerism that I think has gained traction in Washington. So now we have state laws like the Colorado AI Act, which goes into full effect in February and, I believe, creates this whole new class of litigants: anybody who claims any unfair impact from algorithmic discrimination in a chatbot. So somebody could claim harm for countless reasons. Sam, how worried are you that having this state patchwork of AI laws poses real challenges to our ability to continue to accelerate and compete around the world? I don't know how we're supposed to comply with that California, sorry, Colorado, law. I would love them to tell us, and we'd like to be able to do it. But from what I've read of it, I literally don't know what we're supposed to do. I'm very worried about a 50-state patchwork. I think it's a big mistake. There's a reason we don't usually do that for these sorts of things. I think it'd be bad. 
Yeah, I mean, I think there's a fundamental problem with this patchwork approach. Quite frankly, between OpenAI and Microsoft, we will figure out a way to navigate this, right? We can figure this out. The problem is anyone starting a startup and trying to navigate it. It just goes to the exact opposite of what I think the intent here is. Obviously safety is very important, and making sure that the fundamental concerns people have are addressed, but there's a way to do that at the federal level. And if we don't do this, the EU will do it, and then that'll cause its own issues. So I think if the US leads, it's better, as one regulatory framework, for sure. And to be clear, no one is advocating for no regulation. It's simply saying, let's have agreed-upon regulation at the federal level as opposed to 50 competing state laws, which certainly firebombs the AI startup industry. And I think it makes it super challenging even for companies like yours, who can afford to defend all these cases. Yeah. And I would just say, quite frankly, my hope is that this time around it's even across the EU and the United States. That'll be the dream, right, for any European startup. I don't think that's going to happen, Satya. That would be great, but I wouldn't hold your breath for that one. That would be great. But I really think that if anyone in Europe is thinking about how they can participate in this AI economy with their companies, this should be the main concern there as well. So I hope there is some enlightened approach to it. But I agree with you that today I wouldn't bet on that. 
I do think that with Sacks as the AI czar, you at least have a president that I think might fight for that in terms of coordination of AI policy, using trade as a lever to make sure that we don't end up with overly restrictive European policy. But we shall see. I think first things first: federal preemption in the United States is pretty critical. We've been down in the weeds a little bit here, Sam, so I want to telescope out a little bit. I've heard people on your team talk about all the great things coming up. As you start thinking about much more unlimited compute, GPT-6 and beyond, robotics, physical devices, scientific research, as you look forward to 2026, what do you think surprises us the most? What are you most excited about in terms of what's on the drawing board? I mean, you just hit on a lot of the key points there. I think Codex has been a very cool thing to watch this year. And as these go from multi-hour tasks to multi-day tasks, which I expect to happen next year, what people will be able to do to create software at an unprecedented rate, and really in fundamentally new ways, I'm very excited for that. I think we'll see that in other industries too. I have a bias towards coding; I understand that one better. But I think we'll see that really start to transform what people are capable of. I hope for very small scientific discoveries in 2026, and if we can get those very small ones, we'll get bigger ones in future years. That's a really crazy thing to say, that AI is going to make a novel scientific discovery in 2026, even a very small one. This is a wildly important thing to be talking about. So I'm excited for that. Certainly robotics and new kinds of computers in future years, that'll be very important. But yeah, my personal bias is, if we can really get AI to do science, that is superintelligence in some sense. 
Like, if this is expanding the total sum of human knowledge, that is a crazy big deal. Yeah, I mean, I think one of the things, to use your Codex example — I think the combination of the model capability... I mean, if you think about the magical moment that happened with ChatGPT, it was the UI that met intelligence that just took off, right? There's just, you know, an unbelievable, right, form factor. And some of it was also that the instruction-following piece of model capability was ready for chat. I think that's what Codex and these coding agents are about to help us with, which is: the coding agent goes off for a long period of time, comes back, and then I'm dropped into what I should steer. Like, one of the metaphors I think we are all sort of working towards is, I do this macro delegation and micro steering. What is that UI that meets this new intelligence capability? And you can see the beginnings of that with Codex, right? The way at least I use it inside of GitHub Copilot is just a different way than the chat interface. And I think that would be a new way for the human-computer interface. Quite frankly, it's probably bigger than that. Might be the departure. That's one reason I'm very excited that we're doing new form factors of computing devices, because computers were not built for that kind of workflow very well. Certainly a UI like ChatGPT is wrong for it. But this idea that you can have a device that is sort of always with you but able to go off and do things, and get micro-steering from you when it needs it, and have really good contextual awareness of your whole life and flow — I think that'd be cool. And what neither of you have talked about is the consumer use case, which I think a lot about. You know, again, we go onto this device and we have to hunt and peck through a hundred different applications and fill out little web forms — things that really haven't changed in 20 years.
But to just have a personal assistant — which we take for granted, those of us who actually have a personal assistant — to give a personal assistant for virtually free to billions of people around the world to improve their lives, whether it's ordering diapers for their kid or booking their hotel or making changes in their calendar — I think sometimes it's the pedestrian that's the most impactful. And as we move from answers to memory and actions, and then the ability to interface with that through an earbud or some other device that doesn't require me to constantly be staring at this rectangular piece of glass, I think it's pretty extraordinary. I think that's what Sam was teasing. Yeah. Yeah. I hope we get it right. I've got to drop off, unfortunately. Sam, it was great to see you. Thanks for joining us. Congrats again on this big step forward, and we'll talk soon. Thanks for letting me crash. Thanks. See you, Sam. Take care. See ya. As Sam well knows, we're certainly a buyer, not a seller. But sometimes I think it's important, because we're a pretty small world — we spend all day long thinking about this stuff, right? And so conviction comes from the 10,000 hours we've spent thinking about it. But the reality is we have to bring along the rest of the world. And the rest of the world doesn't spend 10,000 hours thinking about this. And frankly, they look at some things that appear overly ambitious, right, and get worried about whether or not we can pull those things off. So you took this idea to the board in 2019 to invest a billion dollars into OpenAI. Was it a no-brainer in the boardroom? You know, did you have to expend any political capital to get it done? Dish for me a little bit what that moment was like, because I think it was such a pivotal moment — not just for Microsoft, not just for the country, but I really do think for the world. Yeah.
I mean, it's interesting when you look back at the journey. When I look at it, you know, we were involved even in 2016, when OpenAI initially started. In fact, Azure was even the first sponsor, I think. And then they were doing a lot more reinforcement learning at that time — I remember the Dota 2 competition, I think, happened on Azure — and then they moved on to other things, and I was interested in RL. But quite frankly, it speaks a little bit to your 10,000 hours, or the prepared mind. Microsoft since 1995 was obsessed — I mean, Bill's obsession for the company was natural language, natural language. I mean, after all, we're a coding company, an information work company. So when Sam in 2019 started talking about text and natural language and transformers and scaling laws, that's when I said, wow, this is interesting. I mean, this was a team whose direction of travel was now clear. It had a lot more overlap with our interests. So in that sense it was a no-brainer. Obviously, going to the board and saying, hey, I have an idea of taking a billion dollars and giving it to this crazy structure which we don't even kind of understand — what is it? It's a nonprofit, blah blah blah — and saying, go for it. There was a debate. Bill was, kind of rightfully so, skeptical. And then he became a believer once he saw the GPT-4 demo. That was the thing Bill talked about publicly, where when he saw it, he said it was the best demo he'd seen since what Charles Simonyi showed him at Xerox PARC. But quite honestly, none of us could have known. So the moment for me was that, you know, let's go give it a shot. Then seeing the early Codex inside of GitHub Copilot, and seeing just the code completions and seeing it work — that's when I would say I felt like I could go from 1 to 10, because that was the big call, quite frankly. The 1 was controversial, but the 1 to 10 was what really made this entire era possible.
And then obviously the great execution by the team and the productization on their part, our part. I mean, if I think about it, right, the collective monetization reach of GitHub Copilot, ChatGPT, Microsoft 365 Copilot, and Copilot — you add those four things, that is it, right? That's the biggest set of AI products out there on the planet. And that's what obviously has let us sustain all of this. And I think not many people know that your CTO, Kevin Scott, you know, an ex-Googler, lives down here in Silicon Valley. And to contextualize it, right: Microsoft had missed out on search, had missed out on mobile — you become CEO — had almost missed out on the cloud, right? You've described it: caught the last train out of town to capture the cloud. And I think you were pretty determined to have eyes and ears down here so you didn't miss the next big thing. So I assume that Kevin played a good role for you as well. Absolutely. Behind Deep Seek and OpenAI. Yeah, I mean, in fact I would say Kevin's conviction — and Kevin was also skeptical. That was the thing: I always watch for people who are skeptical who change their opinion, because to me that's a signal. So I'm always looking for someone who's a non-believer in something and then suddenly changes, and then they get excited about it. I have all the time for that, because I'm then curious: why? And so Kevin started, like all of us, kind of skeptical, right? I mean, in some sense it defies... you know, we've all gone to school and said, God, you know, there must be an algorithm to crack this — versus just scaling laws and throwing compute at it. But quite frankly, Kevin's conviction that this was worth going after is one of the big things that drove this. Well, we talk about, you know, that investment that's now worth 130 billion — I suppose it could be worth a trillion someday, as Sam says. But it really, in many ways, understates the value of the partnership, right?
So you have the value in the rev share — billions per year going to Microsoft. You have the profit you make off the $250 billion of Azure compute commitment from OpenAI. And of course you get huge sales from the exclusive distribution of the API. So talk to us about how you think about the value across those domains, especially how this exclusivity has brought a lot of customers who may have been on AWS to Azure. Yeah, no, absolutely. I mean, so to us, if I look at it, aside from all the equity parts, the real strategic thing that comes together, and that remains going forward, is that stateless API exclusivity on Azure. That helps, quite frankly, both OpenAI and us and our customers, because when somebody in the enterprise is trying to build an application, they want an API that's stateless. They want to mix it up with compute and storage, put a database underneath it to capture state, and build a full workload. And that's where Azure comes together with this API. And so what we're doing even with Azure Foundry — because in some sense, let's say you want to build an AI application, the key thing is how do you make sure that the evals of what you're doing with AI are great? That's where you need even a full app server, in Foundry. That's what we have done. And so therefore I feel that that is the way we will go to market in our infrastructure business. The other side of the value capture for us is going to be incorporating all this IP. Not only do we have the exclusivity of the model on Azure, but we have access to the IP. I mean, having royalty-free access — let's even forget all the know-how and the knowledge side of it — having royalty-free access for seven more years gives us a lot of flexibility business-model-wise. It's kind of like having a frontier model for free, in some sense. If you're an MSFT shareholder, that's kind of where you should start from in thinking about it.
We have a frontier model that we can then deploy, whether it's in GitHub, whether it's in M365, whether it's in our consumer Copilot, you know — then add to it our own data, post-train it. So that means we can have it embedded in the weights there. And so therefore we are excited about the value creation on both the Azure and infrastructure side as well as in our high-value domains, whether it is in health, knowledge work, coding, or security. You've been consolidating the losses from OpenAI. You know, you just reported earnings yesterday — I think you consolidated 4 billion of losses in the quarter. Do you think that investors are — I mean, they may even be attributing negative value, right, because of the losses, you know, as they apply their multiple of earnings, Satya. Whereas I hear this and I think about all of those benefits we just described, not to mention the look-through equity value that you own in a company that could be worth a trillion unto itself. You know, do you think that the market is kind of misunderstanding the value of OpenAI as a component of Microsoft? Yeah, that's a good one. So I think the approach that Amy is going to take is full transparency, because at some level I'm no accounting expert, so the best thing to do is to give all of the transparency. I think that's why, this time around, we gave both the non-GAAP and GAAP numbers, so that at least people can see the EPS numbers. Because the common-sense way I look at it, Brad, is simple. If you've invested, let's call it $13.5 billion, you can of course lose $13.5 billion, but you can't lose more than $13.5 billion — at least the last time I checked. That's what you have at risk. You could also say, hey, the $135 billion that is, you know, today our equity stake, you know, is sort of illiquid, what have you. We don't plan to sell it, so therefore it's got risk associated with it.
But the real story I think you are pulling on is all the other things that are happening. What's happening with Azure growth? Would Azure be growing if we had not had the OpenAI partnership — to your point, the number of customers who came from other clouds for the first time? Right. This is the thing that we really benefited from. What's happening with Microsoft 365? In fact, one of the questions about Microsoft 365 was: what was the next big thing after E5? Guess what — we found it in Copilot. It's bigger than any suite. Like, you know, we talk about penetration and usage and the pace: it's bigger than anything we have done in our information work, which we have been at for decades. And so we feel very, very good about the opportunity to create value for our shareholders, and at the same time be fully transparent so that people can look through the losses. I mean, who knows what the accounting rules are? But we will do whatever is needed, and people will then be able to see what's happening. But a year ago, Satya, there were a bunch of headlines that Microsoft was pulling back on AI infrastructure, right? Fair or unfair, they were out there, you know, and perhaps you guys were a little more conservative, a little more skeptical of what was going on. Amy said on the call last night, though, that you've been short power and infrastructure for many quarters, and she thought that you would catch up, but you haven't caught up because demand keeps increasing. So I guess the question is: were you too conservative, you know, knowing what you know now? And what's the roadmap from here? Yeah, it's a great question, because see, the thing that we realized — and I'm glad we did — is the concept of building a fleet that truly was fungible: fungible for all the parts of the life cycle of AI, fungible across geographies, and fungible across generations. Right? Because one of the key things is when you have...
Let's take even what Jensen and team are doing. I mean, they're at a pace — in fact, one of the things I like is the speed of light, right? We now have GB300s that we are bringing up. So you don't want to have ordered a bunch of GB200s that are getting plugged in, only to find the GB300s are in full production. So you kind of have to make sure you're continuously modernizing, you're spreading the fleet all over, you are really truly fungible by workload, and you're adding to that the software optimizations we talked about. So to me, that is the decision we made, and we said, look, sometimes you may have to say no to some of the demand, including some of the OpenAI demand, right? Because sometimes Sam may say, hey, build me a dedicated, big, whatever, multi-gigawatt data center in one location for training. Makes sense from an OpenAI perspective; doesn't make sense from a long-term infrastructure buildout for Azure. And that's where I thought we did the right thing: to give them flexibility to go procure that from others while maintaining, again, a significant book of business from OpenAI, but more importantly, giving ourselves the flexibility with other customers and our own 1P. Remember, one of the things that we don't want to do is be short on... we talk about Azure — in fact, sometimes our investors are overly fixated on the Azure number. But remember, for me, the high-margin business is Copilot. It is Security Copilot, it's GitHub Copilot, it's the healthcare Copilot. So we want to make sure we have a balanced way to approach the returns that the investors have. And so that's one of the other misunderstood things — in our investor base in particular — which I find pretty strange and funny, because I think they want to hold Microsoft because of the portfolio we have, but man, are they fixated on the growth number of one little thing called Azure. On that point: Azure grew 39% in the quarter on a staggering $93 billion run rate.
And you know, I think that compares to GCP, which grew at 32%, and AWS, closer to 20%. But could Azure — because you did give compute to 1P and because you did give compute to research — it sounds like Azure could have grown 41, 42% had you had more compute to allocate? Absolutely, absolutely. There's no question. There is no question. So that's why I think the internal thing is to balance out what we think, again, is in the long-term interests of our shareholders, and also to serve our customers well, and also not to... you know, one of the other things was, you know, people talk about concentration risk, right? We obviously want a lot of OpenAI, but we also want other customers. And so we are shaping the demand here. You know, we're not demand constrained; we are supply constrained. So we are shaping the demand such that it matches the supply in the optimal way, with the long-term view. To that point, Satya, you talked about $400 billion — an incredible number — of remaining performance obligations last night. You said that that's your booked business today; it'll surely go up tomorrow as sales continue to come in. And you said your need to build out capacity just to serve that backlog is very high. You know, how diversified is that backlog, to your point? And how confident are you that that 400 billion does turn into revenue over the course of the next couple of years? Yeah, that 400 billion has a very short duration — as Amy explained, it's a two-year duration on average. So that's definitely our intent. That's one of the reasons why we are spending the capital outlay with high certainty: we just need to clear the backlog. And to your point, it's pretty diversified, both on the 1P and the 3P. Our own demand is, quite frankly, pretty high for our first party, and even amongst third party, one of the things we now are seeing is the rise of all the other companies building real workloads that are scaling.
And so given that, I think we feel very good. I mean, obviously that's one of the best things about RPO: it can be planned for, quite frankly. And so therefore we feel very, very good about building. And then this doesn't include, obviously, the additional demand that we're already going to start seeing, including the 250, which will have a longer duration, and we'll build accordingly. Right. So there are a lot of new entrants in this race to build out compute — Oracle, CoreWeave, Crusoe, et cetera. And normally we think that will compete away margins, but you've somehow managed to build all this out while maintaining healthy operating margins at Azure. So I guess the question is: for Microsoft, how do you compete in a world where people are levering up and taking lower margins, while balancing that profit and risk? And do you see any of those competitors doing deals that cause you to scratch your head and say, oh, we're just setting ourselves up for another boom-and-bust cycle? I mean, at some level, the good news for us has been competing as a hyperscaler every day. There's a lot of competition between us and Amazon and Google on all of these, right? I mean, it's sort of one of those interesting things: everybody says everything is a commodity, right? Compute, storage. I remember everybody saying, wow, how can there be a margin? Except, at scale, nothing is a commodity. And so therefore, yes, we have to have a cost structure — our supply chain efficiency, our software efficiencies all have to continue to compound in order to make sure that there are margins at scale. And to your point, one of the things that I really love about the OpenAI partnership is it's gotten us scale, right? This is a scale game. When you have the biggest workload there is running on your cloud, not only are we going to learn faster what it means to operate at scale; it means your cost structure is going to come down faster than anything else.
And guess what — that'll make us price competitive. And so I feel pretty confident about our ability to, you know, have margins. And this is where the portfolio helps. I've always said, you know, I've been forced into giving the Azure numbers, right? Because at some level I never thought of allocating... I mean, my capital allocation is for the cloud — whether it is Xbox Cloud Gaming or Microsoft 365 or Azure, it's one capital outlay, and then everything is a meter as far as I'm concerned. From an MSFT perspective, it's a question of, hey, the blended average of that should match the operating margins we need as a company. Because after all, we're not a conglomerate. We're one company with one platform logic. It's not running five, six different businesses. We're in these five, six different businesses only to compound the returns on the cloud and AI investment. Yeah, I love that line: nothing is a commodity at scale. You know, there's been a lot of ink and time spent, even on this podcast with my partner Bill Gurley, talking about circular revenues, including Microsoft's Azure credits, right, to OpenAI that were booked as revenue. Do you see anything going on — like the AMD deal, you know, where they traded 10% of their equity, you know, for a deal, or the Nvidia deal? Again, I don't want to be overly fixated on concern, but I do want to address head-on what is being talked about every day on CNBC and Bloomberg, and there are a lot of these overlapping deals going on out there. When you think about that in the context of Microsoft, does any of that worry you, again, as to the sustainability or durability of the AI revenues that we see in the world? Yeah, I mean, first of all, our investment of, let's say, that 13 and a half, which was all the training investment, was not booked as revenue — that is the reason why we have the equity percentage; that's the reason why we have the 27%, or 135 billion.
So that was not something that somehow made it into Azure revenue. In fact, if anything, the Azure revenue was purely the consumption revenue of ChatGPT and anything else — the APIs they put out that they monetized and we monetized. To your point about others: to some degree, vendor financing has always been there. So it's not like a new concept that when someone's building something and they have a customer who is also building something, but they need financing — you know, some of it is taking some exotic forms, which obviously need to be scrutinized by the investment community. But that said, you know, vendor financing is not a new concept. Interestingly enough, we have not had to do any of that, right? I mean, we really either invested in OpenAI and essentially got an equity stake in return for compute, or essentially sold them great pricing of compute in order to bootstrap them. But, you know, others choose to do so differently, and I think circularity ultimately will be tested by demand, because all this will work as long as there is demand for the final output of it. And up to now, that has been the case. Certainly, certainly. Well, I want to shift. You know, as you said, over half your business is software applications. You know, I want to think about software and agents. You know, last year on this pod you made a bit of a stir by saying that much of application software, you know, was this thin layer that sat on top of a CRUD database — the notion being that business applications, as they exist, will probably all collapse, right, in the agent era. Because if you think about it, right, they are essentially CRUD databases with a bunch of business logic, and the business logic is all going to these agents. Public software companies are now trading at about 5.2 times forward revenue.
So that's below their 10-year average of 7 times, despite the markets being at all-time highs. And there's lots of concern that SaaS subscriptions and margins may be put at risk by AI. So how, today, is AI affecting the growth rates of your software products — of those core products, specifically as you think about database, Fabric, security, Office 365? And then the second question, I guess, is: what are you doing to make sure that software is not disrupted but is instead superpowered by AI? Yeah, I think that's right. So the last time we talked about this, my point really was that the architecture of SaaS applications is changing, because this agent tier is replacing the old business logic tier. Because if you think about it, the way we built SaaS applications in the past was you had the data, the logic tier, and the UI all tightly coupled, and AI quite frankly doesn't respect that coupling, because it requires you to be able to decouple. And yet the context engineering is going to be very important. I mean, take something like Office 365. One of the things I love about our Microsoft 365 offering is it's low ARPU, high usage, right? I mean, if you think about it — Outlook or Teams or SharePoint, you pick, Word or Excel — people are using it all the time, creating lots and lots of data which is going into the graph. And our ARPU is low. So that's what gives me real confidence that this AI tier, I can meet it by exposing all my data. In fact, one of the fascinating things that's happened, Brad, with both GitHub and Microsoft 365 is that, thanks to AI, we are seeing all-time highs in terms of data that's going into the graph or the repo. I mean, think about it: the more code that gets generated, whether it is by Codex or Claude or wherever — where is it going?
GitHub. The more PowerPoints that get created, Excel models that get created, all these artifacts and chat conversations — chat conversations are the new docs — they're all going into the graph, and all of that is needed again for grounding. So you turn it into a forward index, into an embedding, and basically that semantics is what you really go ground any agent request with. And so I think the next generation of SaaS applications will have to sort of... if you are high ARPU, low usage, then you have a little bit of a problem. But we are the exact opposite: we are low ARPU, high usage. And I think anyone who can structure that can then use this AI as, in fact, an accelerant. Because, I mean, if you look at the M365 Copilot price, it's higher than any other thing that we sell, and yet it's getting deployed faster and with more usage. And so I feel very good. Or coding — who would have thought? In fact, take GitHub: what GitHub did in the first 15 years of its existence, or 10 years of its existence, was basically done in the last year. Just because coding is no longer a tool; it's more a substitute for wages somewhere. And so it's a very different type of business model, even kind of thinking about the stack and where value gets distributed. So until very recently, right, clouds largely ran precompiled software. You didn't need a lot of GPUs, and most of the value accrued to the software layer — to the database, to the applications like CRM and Excel. But it does seem that in the future these interfaces will only be valuable, right, if they're intelligent, right? If they're precompiled, they're kind of dumb. The software's got to be able to think and to act and to advise. And that requires the production of these tokens, dealing with the ever-changing context.
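The grounding flow Satya describes — artifacts flow into a graph, get turned into a forward index and embeddings, and an agent request is then grounded against that semantic index — can be illustrated with a toy retrieval sketch. Everything below (the bag-of-words stand-in for an embedding, the sample corpus, the `ground` helper) is hypothetical illustration, not Microsoft's actual implementation:

```python
import math
from collections import Counter

# Toy "embedding": a bag-of-words vector. Real systems use learned
# dense embeddings; this only stands in for the idea.
def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "graph": work artifacts (docs, chats, code) indexed once, up front.
corpus = {
    "doc1": "quarterly forecast spreadsheet for azure revenue",
    "doc2": "meeting notes on datacenter power constraints",
    "doc3": "pull request adding retry logic to the billing agent",
}
index = {doc_id: embed(text) for doc_id, text in corpus.items()}

def ground(request: str, k: int = 1) -> list[str]:
    """Retrieve the k artifacts most similar to an agent request."""
    q = embed(request)
    ranked = sorted(index, key=lambda d: cosine(q, index[d]), reverse=True)
    return ranked[:k]

print(ground("power constraints in the datacenter"))  # → ['doc2']
```

The point of the sketch is the separation Satya draws: the index is built as a side effect of normal work landing in the graph, and any agent request is grounded against it before the model acts.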
And so in that world, it does seem like much more of the value will accrue to the AI factory, if you will — to Jensen helping to produce these tokens at the lowest cost — and to the models, and maybe the agents or the software will accrue a little bit less of the value in the future than they've accrued in the past. Steelman for me why that's wrong. Yeah. So I think there are two things that are necessary to drive the value of AI. One is what you described first, which is the token factory. And even if you unpack the token factory, it's the hardware — silicon, systems — but then it is about running it most efficiently with the system software, with all the fungibility, max utilization. That's where the hyperscaler's role is, right? What is a hyperscaler? Like everybody says, if you said, hey, I want to run a hyperscaler — yeah, you could say, oh, it's simple, I'll buy a bunch of servers and wire them up and run it. It's not that, right? I mean, if it were that simple, there would have been more than three hyperscalers by now. So the hyperscaler is the know-how of running the token factories at max utilization. And by the way, it's going to be heterogeneous. Obviously Jensen's super competitive, Lisa is going to come, you know, Hock's going to produce things from Broadcom, we will all do our own. So there's going to be a combination. So you want to run, ultimately, a heterogeneous fleet that is maximized for token throughput and efficiency and so on. So that's kind of one job. The next thing is what I call the agent factory. Remember that a SaaS application in the modern world is driving a business outcome. It knows how to most efficiently use the tokens to create some business value. In fact, GitHub Copilot is a great example of it: if you think about it, the auto mode of GitHub Copilot is the smartest thing we've done.
So it chooses, based on the prompt, which model to use for a code completion or a task handoff. You do that not by choosing in some round-robin fashion; you do it because of the feedback cycle you have — you have the evals, the data loops, and so on. So the new SaaS applications, as you rightfully said, are intelligent applications that are optimized for a set of evals and a set of outcomes, and that then know how to use the token factory's output most efficiently. Sometimes latency matters, sometimes performance matters, and knowing how to make that trade in a smart way is where the SaaS application value is. But overall, it is going to be true that there is a real marginal cost to software this time around. It was there in the cloud era too. When we were doing CD-ROMs, there wasn't much of a marginal cost; with the cloud, there was. And this time around it's a lot more. And so therefore the business models have to adjust, and you have to do these optimizations for the agent factory and the token factory separately. You have a big search business that most people don't know about, you know, but it turns out that that's probably one of the most profitable businesses in the history of the world, because people are running lots of searches — billions of searches — and the cost of completing a search, if you're Microsoft, is many fractions of a penny, right? It doesn't cost very much to complete a search. But the comparable query or prompt stack today, when you use a chatbot, looks different, right? So I guess the question is: assume similar levels of revenue in the future for those two businesses. Do you ever get to a point where that chat interaction has unit economics that are as profitable as search? I think that's a great point, because see, search was pretty magical in terms of its ad unit and its cost economics, because there was the index, which was a fixed cost that you could then amortize in a much more efficient way.
Whereas with this one, each chat, to your point, you have to burn a lot more GPU cycles, both on the intent and the retrieval. So the economics are different. That's why I think a lot of the early economics of chat have been the freemium model and subscription, even on the consumer side. So we are yet to discover — whether it's agentic commerce or whatever the ad unit is — how it's going to be litigated. But at the same time, the fact is that at this point, you know — in fact, I use search for very, very specific navigational queries. I used to say I use it a lot for commerce, but that's also shifting to my, you know, Copilot. I look at the Copilot mode in Edge and Bing, or Copilot — now they're blending in. So I think that, yes, there is going to be a real litigation, just like the SaaS disruption we talked about — the beginning of the cheese being moved a little in the consumer economics of that category. Right. I mean, and given that it's multi-trillion dollar — this is the thing that's driven all the economics of the Internet, right? When you move the economics of search, for both you and Google, and it converges on something that looks more like a personal agent, a personal assistant chat — that could end up being much, much bigger in terms of the total value delivered to humanity. But the unit economics — you're not just amortizing this one-time fixed index. That's right. And so... That's right. I think the consumer could be worse. Yeah, the consumer category. You are pulling on a thread of something that I think a lot about, right? Which is, during these disruptions you kind of have to have a real sense of what the category economics are, and whether it's winner-take-all — both matter, right? The problem in the consumer space always is that there's a finite amount of time. And so if I'm not doing one thing, I'm doing something else.
And if your monetization is predicated on human interaction in particular, then truly agentic stuff, even on the consumer side, could change that. Whereas in the enterprise, one, it's not winner-take-all, and two, it is going to be a lot more friendly for agentic interaction. Take per-seat versus consumption, for example: the reality is that agents are the new seats. So enterprise monetization is much clearer; consumer monetization, I think, is a little more murky.

We've seen a spate of layoffs recently, with Amazon announcing some big layoffs this week. The Mag 7 has had little job growth over the last three years despite really robust top lines. You didn't really grow your headcount from '24 to '25; it's around 225,000. Many attribute this to normal getting fit, just getting more efficient coming out of COVID, and I think there's a lot of truth to that. But do you think part of this is due to AI? Do you think AI is going to be a net job creator? And do you see this being a long-term positive for Microsoft productivity? It feels to me like the pie grows, but you can do all these things much more efficiently, which either means your margins expand or it means you reinvest those margin dollars and you grow faster for longer. I call it the golden age of margin expansion.

I'm a firm believer that the productivity curve does and will bend, in the sense that we will start seeing the work, and the workflow in particular, change. There's going to be more agency for you at a task level to get to job-complete because of the power of these tools in your hand. And that, I think, is going to be the case.
So that's why, even internally, when you talk about our allocation of tokens, we want to make sure that everybody at Microsoft, as standard issue, has Microsoft 365 to the hilt, in the most unlimited way, and has GitHub Copilot, so that they can really be more productive. But here is the other interesting thing we're learning, Brad: there's a new way to even learn, which is how to work with agents. It's like when Word, Excel, and PowerPoint first showed up in Office, and we learned to rethink, say, how we did a forecast. Think about it: in the 80s, forecasts were interoffice memos and faxes and what have you. Then suddenly somebody said, here's an Excel spreadsheet, let's put it in email, send it around, people enter numbers, and there was a forecast. Similarly, right now, any planning, any execution starts with AI: you research with AI, you think with AI, you share with your colleagues, and so on. So there's a new artifact being created and a new workflow being created. And it's the pace of change of the business process, matching the capability of the AI, where the productivity efficiencies come from. Organizations that can master that are going to be the biggest beneficiaries, whether in our industry or, quite frankly, in the real world.

So is Microsoft benefiting from that? Let's think about a couple of years from now, five years from now; at the current growth rate it will be sooner, but call it five years from now, your top line is twice as big as it is today. Satya, how many more employees will you have if you grow revenue like that?

One of the best things right now is the examples I'm hit with every day from the employees of Microsoft. There was this person who leads our network operations.
I mean, think about the amount of fiber we have had to put in for this 2-gigawatt data center we just built out at Fairwater, and the amount of fiber in the AI WAN and what have you; it's just crazy. And it turns out this is a real-world asset. There are, I think, 400 different fiber operators we are dealing with worldwide, and every time something happens, we are literally going and dealing with all these DevOps pipelines. The person who leads it, she basically said to me, you know what, there's no way I'll ever get the headcount to do all this; even if I approved the budget, I couldn't hire all these folks. So she did the next best thing: she built herself a whole bunch of agents to automate the DevOps pipeline for dealing with the maintenance. That is an example of, to your point, a team with AI tools being able to get more productivity.

So to your question, I will say we will grow headcount. But the way I look at it is that the headcount we grow will grow with a lot more leverage than the headcount we had pre-AI. And that's the adjustment I think you're seeing structurally first. You called it getting fit; I think of it as getting to a place where everybody is now learning how to rethink how they work. And it's the how, not even the what: even if the what remains constant, how you go about it has to be relearned. It's that unlearning and learning process that I think will take the next year or so, and then the headcount growth will come with max leverage.

Yeah. I think we're on the verge of incredible economic productivity growth. It does feel like, when I talk to you or Michael Dell, that most companies aren't even really in the first inning, maybe the first batter of the first inning, in reworking those workflows to get maximum leverage from these agents.
But it sure feels like over the course of the next two to three years, that's where a lot of the gains are going to start coming from. And I certainly am an optimist; I think we're going to have net job gains from all of this. But I think those companies will be able to grow their number of employees slower than their top line. That is the productivity gain to the company; aggregate all that up, and that's the productivity gain to the economy. And then we'll take that consumer surplus and invest it in creating a lot of things that didn't exist before.

100%. 100%. Even in software development: no one would say we're going to have a challenge having more software engineers contribute to our society, because look at the IT backlog in any organization. The question is whether all these software agents are going to help us take a whack at all of the IT backlog we have. Think of that dream of evergreen software; that's going to be true. And then think about the demand for software. So, to your point, the levels of abstraction at which knowledge work happens will change, we will adjust to that, and the work and the workflow will then adjust, even in terms of the demand for the products of this industry.

I'm going to end on this, which is really around the reindustrialization of America. I've said that if you add up the $4 trillion of CapEx that you and so many of the big US tech companies are investing over the course of the next four or five years, it's about 10 times the size of the Manhattan Project on an inflation-adjusted or GDP-adjusted basis. So it's a massive undertaking for America. The President has made it a real priority of his administration to recut the trade deals, and it looks like we now have trillions of dollars in committed investment; South Korea committed $350 billion of investment into the United States just today.
And when you think about what you see going on in power in the United States, both production and the grid, et cetera, and what you see going on in terms of this reindustrialization, how do you think this is all going to play out? Maybe just reflect, as we're landing the plane here, on your level of optimism for the few years ahead.

Yeah, I feel very, very optimistic. In some sense, Brad Smith was telling me about the economy around our Wisconsin data center, and it's fascinating. Most people think, oh, a data center, it's going to be one big warehouse and it's fully automated. A lot of that is true. But what went into the construction of that data center, and the local supply chain around it, that is in some sense the reindustrialization of the United States as well. And that's even before you get to what is happening in Arizona with the TSMC plants, or what's happening with Micron and their investments in memory, or Intel and their fabs, and what have you. There's a lot of stuff that we will want to start building. That doesn't mean we won't have trade deals that make sense for the United States with other countries. But to your point, the reindustrialization for the new economy, and making sure we have all the skills and all that capacity, from power on down, I think is very important for us.

The other thing I also say, Brad, and this is something I've had a chance to talk to President Trump as well as Secretary Lutnick and others about, is that it's important to recognize that we as the hyperscalers of the United States are also investing around the world. In other words, the United States is the biggest investor in compute factories, or token factories, around the world.
But not only are we attracting foreign capital to invest in our country so that we can reindustrialize; we are also helping, whether it's in Europe, in Asia, in Latin America, or in Africa, with our capital investments, bringing the best American tech to the world that they can then innovate on and trust. And both of those, I think, bode really well for the United States long term.

I'm grateful for your leadership, and Sam is really helping lead the charge at OpenAI for America. This is a moment where, as I look ahead, you can see 4% GDP growth on the horizon. We'll have our challenges, we'll have our ups and downs; these tend to be stairs, stairs up, rather than a line straight up and to the right. But I, for one, see a level of coordination between Washington and Silicon Valley, between big tech and the reindustrialization of America, that gives me cause for incredible hope. Watching what happened this week in Asia, led by the President and his team, and then watching what's happening here, is super exciting. So thanks for making the time. We're big fans.

Thanks, Satya. Thanks so much, Brad. Thank you.

As a reminder to everybody: just our opinions, not investment advice.