Hi, listeners. Welcome back to No Priors. Today, I'm here with Neil Tiwari of Magnetar Capital. This is a $22 billion alternative asset manager at the center of the AI compute buildup. We talk about financial innovation, depreciation of GPUs, and what's next in AI compute. Welcome. Thanks so much for doing this, Neil. Absolutely. Really happy to be here. So you are leading AI infrastructure at Magnetar. You're at the center of the build-out, enabling it, financing it. For any of our listeners who haven't heard, can you just explain a little bit what Magnetar is? Sure. So Magnetar has been around for a while now; actually, this is our 20th year. We're an alternative asset manager, and that can mean a lot of different things, but we have three primary strategies. The first one is private credit. The second one is a venture strategy. And the third is a more systematic or quantitative-focused public strategy. And so when people look at us and ask why we are here in this moment, especially on building out AI infrastructure, I think a lot of it has to do with our unique lens on helping to build capital-intensive businesses and using creative financing, whether it's venture or other structures with unique elements. And I think we're going to talk a lot about that, but it's about building out and optimizing the balance sheets for these capital-intensive businesses. So I remember hearing about you guys originally. You're the first investor I think we've ever had on the podcast. I'm excited about this. Thank you. I remember hearing about you and Magnetar initially and wondering, who's this big owner of CoreWeave? And also, you know, helping OpenAI with some of their early build-outs. When did you guys first start looking at the problem and thinking about how to solve it? Yeah, so we actually stumbled across the compute problem before it was compute.
We met CoreWeave back in 2021, when they were actually transitioning from mining Ethereum into high-performance compute. At that time, the GPU was being used as an instrument to mine cryptocurrencies. And interestingly, that same instrument could be used for high-performance computing applications. The first one was visual effects, so think of things like Marvel movies. And so they were transitioning at that point from crypto mining into the first high-performance compute use case. And this was all before AI. And so we made our first investment before the AI trade started. But we added a lot of optionality where, you know, we could envision a world where the GPU could be used for a lot of different high-performance computing applications. I think AI was on the radar. Machine learning was on the radar for us. But I wouldn't say that we could foresee everything that happened. We just happened to be at the right place at the right time. And we continued to double down as the company progressed and started shifting into more workloads that were machine learning and AI training based. Did you have an existing significant data center investing footprint? No. I mean, interestingly, at Magnetar we have invested across asset classes. So we've done a lot of property investing, real estate investing, as an example, and investing in energy; we had an energy business historically. And so for a lot of the elements of what constitutes a data center, power, energy, land, real estate, we had a lot of background in those spaces. I think we were new to compute, right? That was a new sector for us. And so with those two worlds merging, we obviously came up the curve on the compute side, but we had a lot of background in the elements that constitute what it means to build a cloud.
So you guys were just really in this company, you saw the demand, and you said, it's going to grow and we're going to make this a big part of our business. Exactly. What was interesting was we made our first investment in 2021. And then about a year later, we continued to see expansion of use cases for what, at that time, was called high-performance compute. And then towards the end of '22, the whole AI discussion started. And as we entered 2023, CoreWeave started to train models for OpenAI. And that's when things really started growing, because of the sheer amount of compute that was needed to train an LLM; this was like the first time it had ever been done. And what was interesting was that what allowed them to take advantage of that opportunity was the historical backgrounds of a lot of the founders, which were in energy and asset management. When you fast forward to today and look at what constitutes your ability to build a GPU cloud, it's your ability to manage these highly complex assets. And it fundamentally comes down to access to power and energy. And so they had these elements with them. They obviously brought on a lot of talent on the cloud side. You put all these together, and at that moment, it allowed them to build very large scale, reliable clusters for OpenAI and obviously many other customers since then. And the last comment I'll make is that what really allowed them to win this market early on was focus on two things: scale and reliability. And those were the two things that have been really difficult for a lot of the new entrants since then, because scale has to do with your access to capital, your access to energy, power, data centers. And reliability really had to do with their ability to manage a giant fleet of GPUs, which is actually quite complicated.
Whether it's reliability from GPU failures or software challenges, building a fleet that can healthily be online all the time at 99.9% reliability is incredibly difficult. And that's something that they had started back in the 2017, 2018 timeframe. And they were at the right moment at the right place with the right technology stack to really build the optimal cloud for that moment. I've definitely experienced that with our portfolio of companies that are building large training clusters. CoreWeave has a reputation for reliability that not everyone has reached. Can you just help characterize, if you fast forward like two and a half, three years now, what is the scale of the problem today? Yeah. So if you look at CapEx, right, let's start with that. CapEx for AI compute and infrastructure in 2026, at least from the hyperscalers, is projected to be between $660 and $690 billion. And over the next several years, that scales to trillions of dollars. And so the scale of the problem is: how do you build that size of CapEx efficiently? And I think a lot of that has to do with not only your ability to have access to those core elements, energy, power, and your ability to have data center space, et cetera. But one of the things that's not talked about as much is capital, access to capital, and how capital is structured. And what I mean by that is, this is billions to trillions of dollars of CapEx, and using equity dollars alone is not an efficient way to scale this. That's obviously massive dilution. It's not an easy problem to solve. When we first met, I had slowly come to this realization. I was like, I don't think we should take the dilution for the cluster. Yeah, right. Exactly. And so that's where, when you and I have talked about structuring, I can give a couple examples if that's helpful.
I think the first one was DDTL structures, or SPV debt structures. Think of it as an SPV: inside of the SPV is the CapEx, the collateral, which is the GPUs, and the contracts themselves. And so in this example, the actual asset or collateral was not really just the GPUs themselves. It was really the contracted cash flows from, in this case, investment-grade counterparties. And so I think the reason... This is the consumer of the compute. The consumer of the compute, exactly. Your Microsofts, your Metas, et cetera, of the world. And I think the reason that was done is really twofold. When you look at the scale of the problem, those particular contracts needed billions of dollars of debt to finance the CapEx. Obviously, for a nascent and growing company, that's really hard to raise. So part of structuring it this way is ensuring that you have guaranteed offtake on the back end to minimize the risk for debt holders. And I think a lot of what the market got wrong, especially when there was a lot of press about this early on, was the narrative of: there's billions of debt on these highly depreciating assets, and it's extremely speculative. What was oftentimes characterized in the media was that these debt structures had GPUs as collateral, and that's like putting a used car up as collateral, which is obviously just going to depreciate incredibly fast; that's a very risky kind of structure. And what got missed was that the GPUs themselves were actually the secondary or tertiary level of collateral in those instruments. The primary collateral was the contracted cash flows from investment-grade counterparties. It's Microsoft or NVIDIA or somebody like that saying, I'm committed to pay you. Exactly. I know you can pay me. Take-or-pay contracts. And they're like five years in length. So I think that was one feature that's unique to talk about.
And then the second one really has to do with the debt itself and how it amortizes. In simple terms, when you have debt, you have principal and interest and you have to pay it off over time. In these structures, typically the payback period on the CapEx was roughly two to three years, and the debt itself was four to five years in length, where the entire debt amortized during the period the debt was outstanding. And so at the end, you ended up with a zero balance on the debt, and there was no balloon payment or anything really due on the back end. And so the question that often comes up is, isn't that a very risky type of structure because these things are depreciating incredibly quickly? So I think there's two comments here. First, on that depreciation question: in these kinds of debt structures, it doesn't really matter, because the debt's fully paid off by the end of the debt term against committed contracts from investment-grade counterparties. And at the very end, the actual upside or residual value, and I know there's a lot of questions on residual value, is held by the cloud player in this example. CoreWeave, right? Or any of the others. And that's a really interesting prospect, because you can see a world where all of this CapEx is paid off incredibly quickly, and there's an opportunity to redeploy it without having to service any additional debt against that redeployment. How have the instruments changed? They've changed in several ways. The first is, when you look at these SPVs, I think you're starting to see ways to change the portfolio construction of who can go inside one of these debt structures. Early on, these were all only investment-grade counterparties, because the space was so nascent. The operators had no experience.
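The amortization mechanics described here, a fully amortizing loan with no balloon, paid down from contracted cash flows, can be sketched with a few lines of arithmetic. The principal, interest rate, and term below are purely illustrative assumptions, not actual deal terms:

```python
# Illustrative sketch (assumed numbers, not real deal terms): a loan
# that fully amortizes over a term matching a take-or-pay contract.

def annual_debt_service(principal: float, rate: float, years: int) -> float:
    """Level annual payment that fully amortizes `principal` over `years`."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 1_000_000_000   # hypothetical CapEx financed with debt
rate = 0.10                 # hypothetical interest rate
years = 5                   # debt term matched to the contract length

payment = annual_debt_service(principal, rate, years)

balance = principal
for year in range(1, years + 1):
    interest = balance * rate
    balance -= payment - interest   # principal portion grows each year
    print(f"year {year}: payment ${payment:,.0f}, balance ${balance:,.0f}")

# The balance reaches ~$0 at maturity: no balloon payment, so the GPUs'
# residual value is upside for the operator, not something the lender
# depends on being repaid from.
```

The point of the structure is visible in the loop: debt service is covered by the contracted revenue during the term, so the depreciation path of the GPUs never has to backstop the loan.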
And I think now what you're starting to see is a blend of investment grade and non-investment grade. So what does that actually mean? What that means is you're seeing these structures with investment-grade counterparties, like your hyperscalers and your other corporates that are IG, mixed alongside some of the AI-native companies. So think of the AI model companies, the labs, and software companies that are building AI startups. You're seeing those companies get mixed in alongside the IG companies to build a portfolio. Because now you have the history that shows you can do this, and now you have structures where you can balance the risk between IG and non-IG. And we're continuing to see that move to help finance, really, the model companies and a lot of these startups. Obviously, that was difficult to do three or four years ago. That's starting to become easier as these companies have more runtime and the ability to make the compute fungible. All our portfolio companies that buy compute tell me it's a supply-constrained market today. One, is that true? And two, when you think about continuing to grow your business or grow this ecosystem, what's going to stop it? What could slow down the build-out? Yeah. I mean, what's interesting is if you look at 2023, 2024, we were very supply-constrained, and the supply constraint was chips. No one could get access to chips. Yes, we bought chips. We bought chips, right? And there was this thought that, okay, there's going to be an overbuild of chips and then the supply constraints will go away. Well, fast forward to 2026, and what we see is, there is obviously more availability of chips, but to build and operate these data centers requires people, power infrastructure, a lot of things that have a lot of bottlenecks.
And so actually taking these chips and making them into useful revenue-generating assets is really the bottleneck now. It's also not clear that there is supply of chips of the latest generation at scale soon, which is how everybody wants them. Exactly. And it's not only the high-end players who want access to the latest chips; obviously startups want access to those too. And I think it has to do with efficiency. One of our friends, or one of your friends as well, Dylan Patel over at SemiAnalysis, posted this interesting article last week on inference spend and inference performance. And, you know, there's a lot of jokes made about Jensen math. And it was interesting because the... He seems pretty good at math, honestly. He's actually great at math. And so going from the Hoppers, the H100 or H200 series of GPUs, into the Blackwells, there was a claim made that it could be 30 times more efficient. And the data from some analysis showed that it was 90 to 100 times more efficient in terms of inference performance. And so part of the need to go to these new chips is, yes, more computing power, but also that they can be cheaper to operate. It's price performance. Price performance, exactly. Yes, my favorite Jensenism is: the more you buy, the more you save. Exactly. It's actually true. Yeah. Crazy. Help me address this criticism around circular financing. Yeah, I know. It's obviously a topic du jour. And I think the way we see it and frame it really has to do with the demand signals, who the eventual buyers are, and how this is being used. And so, at least from our perspective, we continue to see insatiable demand.
And if you go back to the previous big tech build-out in the early 2000s, there was obviously a lot of fiber being built, and you had dark fiber and overbuild happening. And what you see here is you don't see any dark GPUs. No, I've been looking. Exactly, every GPU is used. And then number two, you're starting to see actual economic value. So I think last year, enterprise AI had about $37 billion of TAM, and it's continuing to grow like crazy. And at least personally, and I'm sure you see this too, I use these tools all the time, and I find them incredibly valuable. The actual tokenomics of positive ROI are actually here now, I think, from our perspective. And so the circularity comment, I think, applies when you're building speculative computing capacity, or if you're purely doing vendor financing and trying to do some type of unique rev-rec item related to that. And that's not what we see. What we see is financing to support the build-out of demand against use cases that are very positive in their ROI. And so our perspective is that that's not a real concern that we have. And it really has to do with who the ultimate buyers are here. The ultimate buyers have been, at scale, the hyperscalers. They're deploying this at scale. And the economics are positive when you look at it on a unit-economic basis in terms of deploying intelligence. And I think we're at a moment in time where you're really starting to see that. In my own experience, I have been a heavy AI user for several years. But reasoning advances and the ability to scale up inference, especially around code, mean I'm up against my max limit all the time in a way that was not true initially. How are the inference workloads actually growing?
I mean, it's a good demand signal that there is value, but how does that change your business? Yeah. So one thing that's interesting that we're seeing is, obviously, there's been the shift from training to inference over the last few years, and that split continues to grow on the inference side as usable and ROI-positive applications get developed. The first thing I see on the inference side now is that inference is a lot more complex than initially thought. And what I mean by that is it's not as simple as: you train a model and then it's easy to inference it. In certain cases, you can do that on similar infrastructure, but there are issues around latency, fungibility, and really optimizing the cost of your compute on the inference side. How do you manage peaks of inference demand? It's obviously not linear. With training, your GPUs are on all the time, 100% of the time. With inference, you have a lot more variability. And so there's a lot more nuance in optimizing inference. The second thing I've observed is that inference is definitely a memory problem, a memory throughput problem. On the inference side, you have these phases called prefill and decode, right? And how you optimize that across a fleet of GPUs is actually a unique technical problem. And the third is what I would call distribution. A lot of times, training infrastructure is quite centralized. What you're seeing with inference is, in many use cases, as this becomes more ubiquitous, you're going to have more and more decentralized inference clusters. And actually, one of my favorite companies is one of your companies, Base10, which is really optimizing distributed inference at scale.
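The prefill/decode distinction is why inference is "a memory throughput problem." A back-of-envelope arithmetic-intensity sketch makes that concrete; the model size, prompt length, and the simplification of one full weight pass per step are all assumptions for illustration, not measurements of any real system:

```python
# Rough arithmetic intensity (FLOPs per byte of memory traffic) for the
# two phases of LLM inference. Low intensity = memory-bandwidth-bound.

def arithmetic_intensity(flops: float, bytes_moved: float) -> float:
    return flops / bytes_moved

params = 70e9               # hypothetical 70B-parameter model
weight_bytes = params * 2   # fp16 weights, read once per forward pass

# Prefill: the whole prompt (say 2,048 tokens) goes through in one
# batched pass, so each weight read is amortized over many tokens.
prefill_tokens = 2048
prefill = arithmetic_intensity(2 * params * prefill_tokens, weight_bytes)

# Decode: tokens are generated one at a time, and every step re-reads
# all the weights to emit a single token.
decode = arithmetic_intensity(2 * params, weight_bytes)

print(f"prefill: {prefill:.0f} FLOPs/byte, decode: {decode:.0f} FLOPs/byte")
# Decode lands at ~1 FLOP/byte, far below where modern GPUs become
# compute-bound, which is why batching, KV-cache management, and
# disaggregated prefill/decode serving matter so much for cost.
```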
And I think one thing that's interesting when you look at companies like that and other inference clouds is how you optimize the compute and build out these clusters, which can actually look very different than a training cluster. A training cluster might be 50, 100, 150 megawatts within one set of four walls. I think you're starting to see distributed inference, which could be four or five megawatts across five separate data centers, stitched together in different areas, right? And that looks very different from a power perspective, and the software matters a lot more when you're doing distributed inference. And then in terms of your question of how it impacts us: one of the things we've been focused on is, where we started this conversation with you on financing compute, that obviously started with mostly training. A lot of those hyperscalers are now doing a lot of inference on that same infrastructure, but these are investment-grade counterparties; it's easier to lend money to build out these clusters for those customers. Now that you have this new crop of inference clouds and application-layer companies needing tons of inference, the key question we're really focused on is: how can we finance the next build, which is distributed inference? And maybe the last couple of takeaways: one thing I'm seeing is that for every application-layer company out there, the highest line item in COGS is compute. And for the inference companies and inference clouds out there, most of them are purchasing compute from either other clouds or unused capacity. And when you look at margins for that, you've got layered margins. And so there's a push to own your own infrastructure to really drive and increase profit margins. But it's also the ability to have control of your own destiny.
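The "layered margins" point can be made concrete with a toy calculation; the 30% gross margins below are hypothetical, chosen only to show how markups stack:

```python
# Sketch of "layered margins": an application company buying inference
# from an inference cloud that itself rents capacity from another cloud.
# All margins below are hypothetical.

def price_after_margin(cost: float, gross_margin: float) -> float:
    """Price a seller must charge on `cost` to earn `gross_margin`."""
    return cost / (1 - gross_margin)

base_cost = 1.00  # hypothetical underlying cost of a unit of compute

# Layer 1: the underlying cloud marks up its cost.
cloud_price = price_after_margin(base_cost, 0.30)
# Layer 2: the inference cloud marks up what it pays the cloud.
inference_price = price_after_margin(cloud_price, 0.30)

print(f"app-layer price per unit: ${inference_price:.2f}")  # ~2.04
print(f"cost if self-operated:    ${base_cost:.2f}")
```

Even two modest markups roughly double the effective compute cost relative to operating the underlying hardware yourself, which, with compute as the biggest COGS line, is the economic pull toward owning infrastructure described above.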
And I think a lot of the application-layer companies and inference clouds are grappling with how they can build, own, and operate their own infrastructure. And that's something I'm really looking into. I am too. And I think one of the things that is going to make a big difference in this ecosystem is: can the inference clouds like Base10 deliver the reliability that you would expect from a traditional cloud? Because the distributed data center operations that they consume today do not offer that reliability. Right. And the other thing that's interesting, and this is additional reporting from last week, if you're familiar with Silicon Data, they put together a lot of data on spot pricing and price-per-token performance. This is Carmen Lee's company. And one thing I thought was really interesting in an article she published last week had to do with how two pieces of compute that look identical on paper have wildly different performance, everything from reliability to cost to speed. And as you have distributed inference, how you mash together very different types of compute and try to optimize reliability, I think, is super interesting. And that gets to one thing I find really interesting that NVIDIA is doing, which is this concept of AI factories and building AI factories behind corporates and AI companies. And maybe the way I unpack that is: you've got the larger monolithic cloud players, the hyperscalers and the neoclouds, that are building large-scale cloud environments. And a lot of where I think NVIDIA and others see this going is, yes, those are going to be important components and those are going to be huge markets. But corporates, the Fortune 500, AI companies that use a ton of compute, will want dedicated AI factories associated with the workloads that they run and that they have control over.
And so I think you're starting to see the early indications of how you finance and build that out. So almost think of literally AI factories that sit on-prem with a company that can operate their workloads. You're talking about my Mac mini farm. Exactly. No, but all joking aside, I think one thing that is another supporting factor for use of all of the compute we have and can create over the coming years is that power is clearly the limiting factor, and it's easier to get more power in smaller units. I think that as inference demand is growing, anyone who has usable compute for inference is going to find a lot of partners for offtake. Exactly. Okay, let's look at the future a little bit while we have 10 minutes. Let's talk about the macro. People talk about energy. They talk about natural gas, the grid, the slowness of nuclear. What do you think about over the next six or 12 months? Over the last year, I've been spending a ton of time in the power and energy markets and looking at interesting solutions that can help scale power for the gap that we see. A few observations. The first is we do have a power problem, but I think it's a bit more nuanced than a lot of the reporting out there. Which is just, we can't generate. We can't generate, right. But I think there's actually quite a bit of stranded power across the grid, across the country. And what I mean by that is a lot of the utilities are built in a way where they're focused on peak power. So they've got natural gas peakers and they're focused on providing peak power for those moments where demand is off the charts. And that's obviously only for a few days out of the year. So there are lots of generating assets out there. The question is, they're a bit stranded. Right. And so I look at the power problem as multifold. The first part is: how can you take the power we have on the grid and actually make it usable?
And a lot of that has to do with flexibility and storage. And so we've been spending a lot of time looking at the energy storage business and distribution. How can you store unused capacity, shave peak demand, store it, and then distribute it when it's needed? We made an investment in a company called Taurus, which I think I mentioned to you, which is building this distributed utility layer, almost a mesh infrastructure, to store excess capacity, or store capacity from a variety of sources, and then distribute it at the time when it's needed. And so I think that's a critical layer that needs to be built. Longer term, there is a generation problem, but in the shorter term, it's really more about distribution and storage. And then the other piece I would say is, the true bottleneck, at least in the short term, the next six to twelve months, is, I don't want to use the word simplistic, but it's things like structural steel, finding electricians. You can get enough steel... This is not something I was aware of. Yeah, you can get steel. You can't find enough electricians to build out the power infrastructure. Right. Substations, transformers, air chillers. These are very specific pieces of power infrastructure needed just to get to a point where you can start to build a powered shell on a piece of land. And so the bottlenecks in the short term really are people and equipment. And then the other interesting thing, on the generation side, is that regulatory is obviously a big challenge. And so there's a combination of bring-your-own-capacity; there's a lot of that that's interesting right now. And so a site that can potentially grow to 50, 100 megawatts might start with only 10 megawatts of grid interconnect,
but can you add solar, nat gas, turbines, put these various bring-your-own-capacity pieces of technology together to make that site usable? And so a lot of what's being looked at, and a lot of what I'm looking at right now, is really the bring-your-own-capacity side, at least in the short term. Yeah, I think if people don't know the origin story of Crusoe and flare gas, it's actually really interesting as an example of: there is actually lots of energy out there, and you can make much more of it consumable. Yep, exactly. A couple topics to hit before we lose you. New players. How do you think about the sovereigns and what they're doing in their build-outs? Yeah. I think they seem to be able to fund themselves. Exactly. Yeah. You saw the news from India last week, and obviously a lot of the news in the Mideast and Southeast Asia. We're continuing to see that sovereigns view compute and AI, even as we do here in the United States, as a matter of national security. And obviously, the funding of those clusters is very different than funding a private cluster. You've got government capital that can be used for that. So I think there's two things I find interesting in that space. One is: who are the partners that are going to build that capacity? And what are the cybersecurity implications and environments for that? And so those are the two nuances, I think, with sovereigns: they need to find players that can rapidly scale compute in their countries, and oftentimes they don't necessarily have players that know how to build and scale GPU compute. And I think that's a great place for the United States to lean in and help build sovereign ecosystems around the world. And then there's the matter of cybersecurity, and how do you make it into a truly safe ecosystem for those sovereigns.
And so I think there's a lot of work to do still on the cyber side, especially as you look at scaling sovereign AI. What is your thinking on physical AI? If it works, it's a CapEx-intensive build. Absolutely. And maybe I'll just take a second to say: one of the things we observed from 2010 to the early 2020s was that we were in a very asset-light mode of building. Like SaaS; you never heard of Magnetar in SaaS, right? Because it was just purely asset-light. Compute and everything we saw starting in 2021 is asset-heavy. That's where you started hearing a lot more about us. And I think physical AI is actually an extension of that. I think we all have scars from the 2010s of hardware companies that did not make a lot of money for us. Part of the scars was that it was so difficult to scale hardware companies: you needed to spend so much money building the hardware that the software was an afterthought. What you're seeing now is that, now that you have more general-purpose software via AI, it can make the hardware easier to scale, because you have software that can interact with more hardware. And so the natural extension of what we see is what happened in the compute markets, where you really needed flexible capital, where it wasn't just equity, it was debt and a variety of project finance to really scale CapEx. You're going to see that same kind of need in physical AI. And it simply has to do with capital intensity, right? On the compute side, for CoreWeave as an example, they needed billions of capital to scale that cloud. And whether it's a robotics company or a manufacturing-focused company, drones, defense, all of these areas are incredibly capital intensive. And now that you add AI into them, I think it can help them scale faster, quite frankly, but the capital intensity is still there.
And so there's a moment in time now where you have to really look at optimizing balance sheets for physical AI to really grow and scale. I think, to your point on how the early AI compute contracts were structured: I learned to be an investor in an era and an environment where robotics was a great way to lose a lot of money over a long period of time. You remember that too. Now I sit on the board of two robotics companies, so let's hope it's not true anymore. But I'd say it's just a question of capability to me, whether it's in the home or industrial settings where it is simply not a good human job or we don't have the labor. Yeah. If the products work, I think you are going to have investment-grade buyers who are going to sign contracts that say, we want it, and you can raise debt against that. Exactly. Right. And so I think that feels like a very similar shape. Last question for you, because it is so timely. What do you make of the general capital rotation out of software, the end of software? And it's all infrastructure, labs, and AI natives, I guess. Yeah, yeah. It's interesting to see that every day there's another industry that kind of tanks. You saw the wealth advisors tank for a few days, you saw the consulting companies, real estate, payments, right? I mean, what I saw, at least in my view, was that towards the tail end of 2025 and into 2026, there was a big step up in the performance of usable AI. And I think what Anthropic was doing with Claude, really, and obviously we use all the models, but there was a definite step up in performance in making AI usable, and in seeing that it can truly disrupt these non-AI-native industries.
I think the reaction and rotation out of each of these names is a bit much. There's two factors I look at. One is valuations: as an example, I think from a free cash flow perspective, SaaS companies are valued at the lowest they've been in years. And there's a huge margin difference between what those revenue multiples are today and what they've been in the past. Free cash flow margins have steadily and significantly increased for SaaS as a whole over the last four or five years, and revenue multiples have stayed the same or gone down. And so to me, that's a bit of an exaggeration, because it really has to do with individual names versus sectors. And that's at least my take: in all of these sectors, there are individual names that will learn how to maximize their value using AI, and there are those that won't. But what's happening right now is there's a hammer being hit across all names, and not the specific individual names that might not be using it as well. And then the second point, at least in my view, is that there are a number of applications that on paper sound really interesting. Like, oh, AI could just rebuild Slack, or you could rebuild Salesforce, or rebuild X, Y, and Z. It's not just the product. It's the way it's integrated across multiple services and systems across the enterprise, and that is a lot more difficult to replicate than I think some of the public markets are reacting to. And I do think there's a fundamental question, in addition to what you said, which I agree with, of: does anybody want to rebuild it and own it? And to your point, within the software sector in particular, there are companies that are structurally more protected and there are companies that are at more risk. Right. And I think it's as simple as: you've got to go select. Yeah, exactly. This has been so fun. Thanks so much, Neil. Yeah, I really appreciate it.
Congratulations on all the innovation and on building out all the compute. Awesome. Thank you. Good to be here. Find us on Twitter at NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces. Follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.