Odd Lots

Ray Wang on How AI Is Causing DRAM Prices to Surge

45 min
Feb 16, 2026
Summary

Ray Wang of SemiAnalysis discusses how AI demand is creating a severe shortage of DRAM and HBM (High Bandwidth Memory), driving a price surge and a potential four-year supercycle. The shortage is causing demand destruction in consumer electronics like gaming consoles and PCs, while hyperscalers prioritize securing memory chips for AI infrastructure.

Insights
  • AI demand is creating a structural supply constraint in memory chips that differs from historical cycles—HBM production is cannibalizing commodity DRAM capacity, creating a dual shortage rather than simple demand growth
  • Current DRAM spot prices show an unprecedented surge starting in late 2025, with margins on commodity DRAM now exceeding HBM margins despite HBM's higher perceived profitability
  • The shortage is already causing visible demand destruction: PC price hikes, smartphone production cuts (MediaTek cutting 2026 outlook by 10-15%), and delayed product launches across consumer electronics
  • Memory chip makers face a clean room capacity constraint in 2026 that limits incremental wafer production, making node migration to advanced processes (1B, 1C) the primary lever for increasing supply
  • The AI-driven memory cycle is expected to last until mid-2027, significantly longer than historical 15-18 month cycles, due to sustained hyperscaler capex and expanding AI model context windows
Trends
  • AI infrastructure capex creating commodity-like price spikes in previously stable semiconductor segments
  • Hyperscaler competition for memory chips driving preemptive purchasing and inventory depletion at major suppliers
  • Chinese memory makers (CXMT, JHICC) gaining market share in commodity DRAM, particularly in the domestic Chinese market
  • Shift toward agentic AI and longer context windows driving sustained HBM demand growth through 2027
  • Memory chip allocation prioritizing server DRAM and HBM over consumer electronics and mobile devices
  • Node migration acceleration as the primary supply response to clean room capacity constraints
  • Long-term supply agreements becoming difficult to negotiate as suppliers prefer quarterly pricing negotiations
  • Demand destruction spreading from gaming consoles to PCs, smartphones, and cameras as prices rise
  • Energy and industrial commodity crowding-out effect from the AI infrastructure buildout
  • Potential structural shift in device architecture toward cloud-based compute with reduced local memory requirements
Companies
NVIDIA
Mentioned as driver of AI acceleration and demand for memory chips; Rubin 200/300 servers requiring 3x memory content
Apple
Managing memory chip shortage better than peers; expecting meaningful margin impact in H2 2026 rather than earlier
Nintendo
Unable to produce sufficient Switch consoles due to memory chip shortage; used as example of demand destruction
SK Hynix
Major DRAM supplier increasing capex for capacity expansion and HBM production; using 1B process for HBM
Samsung
Major DRAM supplier with inventory depletion; increasing capex for capacity and HBM technology development
Micron Technology
Major DRAM supplier announcing new fab in Singapore, US fab expansion, and acquisition of PSMC fab; increasing capex
Dell
Implementing price hikes on PCs due to memory chip shortage
Lenovo
Implementing price hikes on PCs due to memory chip shortage
ASUS
Implementing price hikes on PCs due to memory chip shortage
MediaTek
Cutting 2026 mobile outlook by 10-15% due to memory chip shortage impact on smartphone production
CXMT
Leading Chinese memory maker gaining momentum in commodity DRAM; 90-95% of revenue from the China/Hong Kong market
Nanya Technology
Legacy DRAM supplier mentioned as alternative to major Korean and US manufacturers
Winbond
Legacy DRAM supplier mentioned as alternative to major Korean and US manufacturers
JHICC
Chinese memory maker competing in commodity DRAM, particularly in the domestic Chinese market
OpenAI
Mentioned as example of AI lab competing intensively for memory chip capacity and hardware
Anthropic
Mentioned as provider of Claude AI service requiring significant memory infrastructure
Microsoft
Hyperscaler with significant capex spending on AI infrastructure and memory chip procurement
Amazon
Hyperscaler with significant capex spending on AI infrastructure and memory chip procurement
PSMC
Taiwanese chipmaker whose fab was acquired by Micron for capacity expansion
People
Ray Wang
Analyst at SemiAnalysis discussing the DRAM shortage, HBM demand, and memory supercycle dynamics
Joe Weisenthal
Odd Lots co-host conducting interview; discussed AI crowding out effect and personal AI usage
Tracy Alloway
Odd Lots co-host discussing DRAM price charts and commodity cycle dynamics
Quotes
"all energy, industrial commodities, anything that we use for any purpose is just going to like feed the AI beast. And us humans, we're just going to get left out in the cold."
Joe Weisenthal (opening segment)
"on the same wafer basis, you can produce three more bits if you do commodity DRAM, but you can only produce one bit of HBM"
Ray Wang (mid-interview)
"the margin of the commodity right now is actually higher than HBM. So here, that created a real dynamic"
Ray Wang (mid-interview)
"this is the key difference that we are seeing for this, right? Apparently, the demand accelerations is also last quite long"
Ray Wang (discussing supercycle duration)
"it's like, nope, we've switched this line over to the data centers, over to AI, and it's going to become more – this crowding out effect is going to become more and more real to people directly."
Tracy Alloway (closing segment)
Full Transcript
Hello and welcome to another episode of the Odd Lots podcast. I'm Joe Weisenthal. And I'm Tracy Alloway. Tracy, you know what I'm worried about? I don't know if I'm worried about it, but it kind of feels like the direction things are going, which is that like all energy, industrial commodities, anything that we use for any purpose is just going to like feed the AI beast. And us humans, we're just going to get left out in the cold. Nothing for you. We got to like feed. We got to feed it all to the AI. And maybe in 10 or 20 or 50 years, the AI is so powerful it'll just decide, why do humans get any of this? We should keep it all to ourselves. A legitimate concern, I would say. I mean, to some extent, we are already seeing this crowding out effect. Right. So energy prices in certain areas have been going up. And most recently, memory chip prices. So this has been all over earnings recently. You had companies like Apple saying that because of a crunch in memory chip supply, they might have to either raise their prices or cut down on the number of phones they produce. Nintendo. Yeah. 
Like if you pull up a chart of Nintendo shares right now, they are just getting hammered. And supposedly it is all because of this memory chip shortage. By the way, do you remember like buying your first PC in like you bought one before I did, I think. But mine was in the mid-90s. Yeah, yeah. I think that's about when I got one. Do you remember how much memory those things had? Oh, like nothing. Yeah. Like, I looked it up. Something like less than 10 megabytes for a bunch of them. You know how much PCs have nowadays? Actually, I have no idea. I think it's, let's see, 16 gigabytes. Yeah. Isn't that crazy? It is really crazy. I remember, like, do you remember zip drives? Yeah. Oh, my God. And there were all these special, you know, obviously various memory peripherals. that you could buy and they came in sort of dedicated cartridges and stuff like that. I think they were called Zip Drive. Yeah, I think, right? They were Zip Drive? I don't remember. Yeah, yeah, it was. Yeah, that's right. Buying additional memory storage, et cetera, used to be this big part of the consumer process. I'm actually glad, you know, my son really wants a Nintendo. But if they can't produce them anymore because they can't get their memory, then I'll say, sorry, no video games. That's so mean. You're going to have to sit him down and be like, I'm so sorry, son. There's this thing called the supply shortage. I'm going to use this as a learning opportunity to explain how global supply chains actually work. Totally. You know what the other the one interesting thing is, if you look at spot DRAM prices, which we have on the terminal, this is, I think, maybe the interesting dynamic. There's a lot of AI related things on the terminal that have been surging for several years, most notably some of the big chip companies, NVIDIA, et cetera. If you look at a chart of spot DRAM prices, the big surge wasn't until late last year. And then since then, it's gone nuts. 
So for a long time, even as this AI story, the AI meme, the AI industry propagated, people weren't really paying attention to that area. And then suddenly it's just gone like, you know. Haywire. Haywire, yeah. And so obviously huge supply demand imbalances. We need to understand what's going on because I think the stakes are, you know, getting so high. As you mentioned, a lot of companies are losing out pretty big. Others are making a lot of money. We sort of need to get a handle on what's going on with memory. Let's do it. I'm really excited to say we really do have the perfect guest. We are going to be speaking with Ray Wang. He's an analyst at SemiAnalysis, and he's recently published a brand new report on exactly this topic, talking about a memory super cycle and so forth. So truly the perfect guest, as I mentioned, he comes to us from Korea. Ray, thank you so much for coming on Odd Lots. Hey, nice to be with you both, and thanks for having me. So many questions. Really appreciate you joining us. Let's just start with the basics. Like, what is it about AI or what is going on? What is the core idea such that demand for memory in various forms seems to be absolutely booming right now in a way that must have caught the supply side of the equation very off guard? Yeah, I think, you know, to really look at kind of the sort of imbalanced supply demand dynamics, really, we want to look at like, you know, what happened a few years ago, right? Because, you know, your capacity today tracks your investment from a few years ago. And what's happening in 2023, 2024, it's sort of a down cycle, right? Because remember, during COVID, all the companies are trying to flock in to buy tons of DRAM, right? And there's a lot of purchases there, right? And there's sort of an up cycle, right? But it's a very, very short up cycle because when COVID is gradually getting better, right, you actually don't need that much DRAM. 
And back then, the company will be like, hey, you actually don't have a sustainable demand, right? So they don't want to over-invest to expand their capacity. And when you go into the down cycle, your capex typically being very constrained. Very like, you know, companies from their perspective, they try to be conservative. So that leads to 2024 and 2025 that your incremental wafer capacity that goes to DRAM is actually quite limited. But what's happening on the demand end is actually the demand was accelerating so fast, right? You know, one way to look at it is to look at NVIDIA's earnings, right? And when on one side, your demand is accelerating so fast. On the other hand, on the supply side, your supply just couldn't catch up. And there's more nuances behind the supply, right? Because before, they are just supplying all the commodity DRAM, like DDR, LPDDR, right? All the memory that goes to PCs, goes to laptops, goes to your mobiles, goes to Nintendo Switches, right? But, you know, what's happening in this AI is like there's a new thing called HBM. Think about it as like a specialized memory directly for AI accelerators. And that memory is extremely DRAM wafer intensive. To give you a sense, right, on the same wafer basis, you can produce three more bits if you do commodity DRAM, but you can only produce one bit of HBM, right? And that ratio will actually go even higher when you go to HBM4, HBM5 in the next few years, right? So in that way, on the same wafer basis, you can only produce that much HBM. And right now what's happening is like, hey, this HBM is super, super profitable. And why not we dedicate more wafer for HBM? But so here's the problem, right? You only have that much wafer and you need to try to do everything. So that's crowding out a lot of the supply going to commodity DRAM, right? 
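To make that wafer arithmetic concrete, here is a toy sketch of the trade-off Wang describes: a fixed wafer pool where each wafer yields roughly three times as many bits as commodity DRAM than as HBM, so every wafer reallocated to HBM removes three commodity bits for each HBM bit gained. All numbers are illustrative assumptions, not SemiAnalysis figures.

```python
# Toy model: fixed wafer capacity split between HBM and commodity DRAM.
# The 3:1 bits-per-wafer ratio comes from the conversation above;
# the wafer counts are made up for illustration.

def bit_output(total_wafers: float, hbm_share: float,
               commodity_bits_per_wafer: float = 3.0,
               hbm_bits_per_wafer: float = 1.0) -> dict:
    """Split a fixed wafer pool between HBM and commodity DRAM."""
    hbm_wafers = total_wafers * hbm_share
    commodity_wafers = total_wafers - hbm_wafers
    return {
        "hbm_bits": hbm_wafers * hbm_bits_per_wafer,
        "commodity_bits": commodity_wafers * commodity_bits_per_wafer,
    }

# Moving 20% of a 100-wafer pool to HBM:
before = bit_output(100, hbm_share=0.0)
after = bit_output(100, hbm_share=0.2)
lost = before["commodity_bits"] - after["commodity_bits"]
print(lost, after["hbm_bits"])  # 60 commodity bits lost to gain 20 HBM bits
```

The asymmetry is the "dual shortage" mechanism: even with flat total wafer capacity, ramping HBM actively shrinks commodity DRAM supply.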
So we have sort of both things happening, and essentially it comes down to, I think really, you know, it's probably Q2 2025 that people realized, hey, we actually couldn't buy enough DRAM, both enough commodity DRAM and also HBM, for their demand. So you mentioned the C word just then. Oh, wait, not that C word. Commodities. You mentioned commodities. And when I'm reading these articles about a memory chip shortage, people use commodity type language, right? So you hear super cycle a lot. You hear commoditized chips versus, I don't know, other types of chips. Are they customized? I don't think so. But anyway, how much does this particular industry actually resemble a commodities-like industry? And then maybe explain a little bit more why historically it has been really, really cyclical. Yeah, so I think a couple of reasons, right? Fundamentally, the cost per bit for DRAM was actually coming down every single year significantly. Even in the most recent years, it was also coming down. But the extent that it comes down every single year has actually been slowing over the past 10, 20 years. So the cost will keep going down, but it's hard for cost to be the differentiating part. Another thing is there's a committee that sets the standard for DRAM. So you are doing similar products. You're just having, oh, this is a little better power efficiency. The performance was a little better here and there. So it's hard for a memory supplier to significantly differentiate their products compared to their competitor, right? So essentially, the thing they compete on in the end will be the market price, right? And then when you are selling similar products and you are targeting a similar audience, right? That's sort of like a commodity, you know, sort of semiconductor commodity product to me. And I think those are the two main characteristics I'm seeing for DRAM. 
And I think that's a little different for HBM. Tell me, explain this. So there's commodity DRAM and then there's HBM. HBM is a type of DRAM? Is that what you're saying? Yes. Can you explain this further and like sort of, yeah, walk us through what the commodity version is and why there is this thing called HBM, high bandwidth memory, what that's all about? Yeah, yeah. So essentially the emergence of HBM really goes down to the model layer, because, you know, this is when you try to scale your AI model, right? Your FLOPS are increasing significantly every single year, right? Because your compute increased a lot faster. But your memory bandwidth, it's increasing, but it's very limited. Especially if you are using the memory from kind of traditional commodity DDR, then the bandwidth is actually very limited. So from the memory suppliers' side, they are thinking, hey, how can we expand the bandwidth to support the memory that's going to scale the AI, right? So that comes out as HBM. So the way they do HBM is they are stacking multiple, 8 or 12 or even in the future 16, DRAM dice together. So that making you essentially having higher bandwidth and enough capacity to support AI developments over the past few years and in the next few years. And that's very different. You are stacking all these dice together compared to, hey, we just have one DRAM product as a chip. And that also comes down to the point that HBM manufacturing, also the packaging, both front and back, is a lot more complex compared to commodity DRAM. And that's why I say commodity DRAM is more commoditized, but I think HBM is less so, because it's a lot more complex, right? And for a supplier, you can actually differentiate either from the front-end or back-end technology. And when your yield is better, right, it gives you a better margin, better ability to compete with your competitor, right? 
You know, stuff like thermal, things like that is the thing that will really stand out, right? And that's very different from commodity DRAM. Getting more and more layers on HBM, it kind of reminds me of the razor wars where like, you know, Gillette would unveil a three blade thing. And then the next day someone has like 12 or whatever. So if I'm a company like Nintendo, how am I actually purchasing chips? Because I think this is going to be important when we talk about what's going on right now. But do I have like a forward contract with a major supplier or do I have, I don't know, a warehouse full of chips that I bought previously? How do they actually manage that? So there's a couple of ways they do it, right? They will do it in different ways, with different environments and also different vendors, right? So it's hard to say, hey, this is exactly the way they do it. But typically, I think you can think about it like, hey, this is the amount of chips we're going to purchase in the next four months. And this is the amount we're going to commit. And we're going to sign the contract pricing, whether this term will be four months or, for an LTA (long-term agreement), you can go as long as a year or even two years. So that's sort of the way you think about it. But typically, you'll be like, hey, we signed this contract for four months. And by the end, sorry, for three months, for a quarter. So by the end of the quarter, you negotiate a new contract pricing for the next quarter. So that's typically the way it works. But the problem right now is, for consumer electronics, the pricing probably is sort of, I would say, secondary. Because if you couldn't secure volume, you couldn't even make any money, right? So number one is securing volume. 
One is, you know, making sure that fits the demand outlook you are seeing. Two is whether you can negotiate the best pricing you can. But given the overall pricing environment was still volatile, still rising so much, it's hard to not take in some kind of margin impact, because essentially you're trying to get chips, right? And you cannot push back too much against your vendors. Yeah. I want to get more into the demand destruction element because I think this is very important to think about where the equilibrium is going. Just to clarify, what is it about AI specifically? Is it in the training? Is it in the inference? Where specifically in the process of building out AI services does this voracious demand for DRAM come from? So honestly, to be very honest, I think it's everywhere. Like, right, you know, AI training, you need tons of HBM. That just everyone knows, right? You also need a lot of the CPU DRAM. And that's LPDDR and the DDR that go into the server. You also need HBM, right? So that's on the training part. On the inference side, you also need HBM, right? And you also need the CPU DRAM to go through those workloads, right? But, you know, the main one will be the HBM. And even on the inference side, right now they are separating it into pre-fill and decode. And, you know, I think right now, like the most important thing will be decode, right? And decode is super, super memory bound and memory intensive. And the memory importance there will only increase, to my understanding, especially given that long context windows continue to expand and inference usage and adoption continue to go up, right? So I think those are the typical ways I would think about those, right? And the last thing, I think a lot of people will talk about this today, is agentic AI, right? Agentic AI, you want to power those, you need a lot of CPU, you need a lot of CPU-based servers, right? So what's inside the CPU server? 
There's a lot of DRAM in there, right? And a lot of agentic workload also means you need a lot of inference as well, right? And that goes back to my previous point. You need a lot of HBM and DRAM, right? So I will say, kind of, HBM and DRAM is kind of everywhere. And we didn't talk about storage, but storage is also everywhere, you know, in this process. I have a basic question, but why does AI need a lot of memory at all? So you just explained, you know, it's important for inference. It's important for training, whatever. But like, why? Yeah, I think this literally goes down to the model layer, right? Because essentially, let's say you want to train the model, right? Then you need to pour in tons of data, you need to ingest tons of data there to train your model. And then for the inferencing part, you need one piece of data after another to kind of compute one after another, sort of a chain of thought, right? Yeah. So to do that process, how can you keep the previous data you processed in there while you process the next data that comes after, right? So that's why you need a lot of memory. And go back to my previous point, right? The long context window basically means, when the things that you are processing get longer and longer, right, like how can you have enough memory to process those, right? And, you know, one good example, if you use ChatGPT, you guys definitely remember, right? You know, back when ChatGPT came out, people were just like, hey, what's the weather, right? Yeah, write a poem. People right now, or last year, people are doing like, hey, can you write a 20-page report on how cancer can happen in people, right? For example. Then it'll give like a 20-page report and they will wait like five minutes, right? 
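The "keep the previous data around while you process the next" point is what inference engines call the KV cache, and a back-of-the-envelope sizing shows why longer context windows translate directly into memory demand. The model shape below (layers, KV heads, head size, FP16 elements) is hypothetical, not any specific model; the point is that per-request memory scales linearly with context length.

```python
# Rough KV-cache size: 2 tensors (K and V) x layers x kv_heads x head_dim
# x tokens x bytes per element. The model dimensions are illustrative.

def kv_cache_gib(context_tokens: int, layers: int = 64, kv_heads: int = 8,
                 head_dim: int = 128, bytes_per_elem: int = 2) -> float:
    """GiB of KV cache held per request at a given context length."""
    elems = 2 * layers * kv_heads * head_dim * context_tokens
    return elems * bytes_per_elem / 2**30

for tokens in (4_096, 32_768, 262_144):
    print(f"{tokens:>7} tokens -> {kv_cache_gib(tokens):6.2f} GiB per request")
# 4k tokens -> 1 GiB, 32k -> 8 GiB, 256k -> 64 GiB for this toy model shape
```

Multiply a per-request figure like this by hundreds of millions of users and the scale of the HBM pull becomes clear.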
And so if you think about the process, like they do all the calculations, all the research, and also give you the output tokens, the answer you get from your prompt, that's significantly longer than the prompt that you gave, right? And that's also a lot longer response than you were getting back in the GPT-3 days. And remember the usage, back with GPT-3, I forgot the monthly or daily users, but right now I think the user count for ChatGPT is 800 million, right? And we have not included the users for Gemini, for Claude, for xAI, right? So if you multiply those effects, I think the memory demand is very significant and it's very clear to me. Yeah, it's really interesting. I saw some people were talking about this, but it really strikes me in my own usage of AI, just basically how much more I use it as capabilities have grown. Like it was sort of cute in late 2022. It's like, oh, get it to write a poem about this. And it was like, OK, and then I would go, you know, several weeks sometimes without having anything to query. And then, you know, these days, like I'm using it all the time, like trying to do things with code or scrape data, et cetera. So overall token consumption has grown massively with increased capabilities. All right, let's talk about the demand destruction element. Any commodity is going to be, the price is going to be a function of supply and demand, and eventually prices get high enough such that certain forms of demand just are no longer economical. And it's like, you know what? We're just not going to make as many video games or cameras or whatever else uses memory. Are we starting to see any of that yet, such that certain uses and consumption of memory? So Joe doesn't have to buy a Switch for his son. Yeah, that's right. Yeah, I actually bought a Switch recently. So the price... You should resell it. Are you going to resell it? I don't know. We'll see how the arbitrage there works. 
But I think we're already seeing a lot of impacts, actually. But it kind of varies across different kinds of products, right? In PC, we're surely seeing that. Dell, Lenovo, Acer, ASUS, they are all having a price hike for kind of different products, right? And, you know, on the kind of demand side, you're also seeing that, I think, for the Chinese smartphone market, right, a lot of the analysts, a lot of the research firms, right, a lot of the companies were saying they are cutting their smartphone outlook. For example, I think MediaTek at its recent earnings, right, they are saying they are cutting the mobile outlook by, I think it was 12%, or 10% to 15%, for 2026. That's very significant, right? You're saying this year is supposed to be $100 million, but it will essentially go to $90 million, right? That's very significant. So I think we are already seeing that, but it kind of varies in different places, right? Apple will be a great example here because they are saying, hey, we have all the price hikes, right? But we actually managed it pretty well. They are saying the real sort of meaningful margin impact will actually show up in the second half of 2026 instead of the first quarter or second quarter. I think those are sort of the things I'm seeing. So it's definitely already showing up in the market, right? I think what you can expect this year will be two things. One is sort of demand destruction in PCs, right, the broader consumer electronics, or the display, whether it's the memory or the display, that's very straightforward, but also the camera, right? The camera is also very important. I'm hearing some of the delays on the camera as well. And, you know, also the sort of what we call a delayed launch, right? 
Because you don't want to launch your new products at a higher price than you initially thought, which will impact your new launch performance, where usually your new launch product has better ASP, better margin compared to your previous generation products. I think those are the things we're going to see in the coming months, in the coming quarters. I think it will show impact and people will start feeling it. Joe, do you think we're going to get people stripping old Nintendos for memory chips? Remember how during the commodity super cycle in the early 2010s, I guess, there were all those stories about people stealing electrical wire. Yeah, it happened again in 2021 too, I think. Yeah, we might get some of that. Well, actually on that note, is there like a short-term fix for this problem, either in terms of design, maybe you can make the products more efficient in some way so that it needs less memory, or recycling old memory chips? I don't know. Yeah, I think, you know, from the demand side, it's a lot more difficult, right? From an OEM perspective, right? There's only that much you can do, right? You can de-spec, right? But I don't think that addresses the fundamental issues, right? You are just de-speccing your products so you can ship. But it's a double-edged sword, right? When you downgrade your products, that actually looks bad compared to some of your peers who don't de-spec, right? And it makes your product less competitive, which impacts your final sort of quarterly or annual shipments. I think what needs to be addressed, or what can be done, is probably on the supply side, right? So on supply we are mainly talking about a couple of vendors, right? Micron, Hynix, Samsung. And for legacy ones, right, you have Winbond, Nanya, and in China, CXMT and JHICC, right? Those people. So I think the most direct way right now this year, because this year the real challenge is there's a clean room constraint. 
So the clean room constraint is basically you have limited space where you can put all that equipment and start manufacturing chips in the fab, right? So you have a clean room shortage at all three major memory makers, right? So your incremental wafer capacity coming online for 2026 will actually be quite limited. So in that sense, how can you produce more bits, more DRAM bits, while having only limited incremental wafer capacity? The only thing you can do is node migration. So in DRAM process nodes, you have the most advanced, right now it's 1C, and going down you have 1B, 1A, and 1X, blah blah blah, right? And, you know, I think right now they are trying to migrate as much as they can to 1B and 1C, which on the same wafer basis will produce a lot more bits compared to a legacy node. So doing it that way, on the same wafer basis, you can produce more bits. So that should produce more supply, even though your wafer capacity is constrained. But the challenge is two things. One, how fast you can ramp this node migration, right? Node migration is typically difficult. Well, I would say difficult, you know, it takes time, right? To go to the most advanced node, right, you need EUV machines, right? You need all different kinds of manufacturing processes, right? And not every fab is already prepared to do that, right? So it will take time to ramp up all that production for the new advanced node. Also, you are having the challenge of, hey, OK, I want to do that migration, right? But how are we going to balance the capacity with HBM? Again, that's another issue you want to think about, right? Because some of the HBM process now is actually using, for Hynix, they are using 1B. So that's actually the advanced process now, right? So even if you are migrating to 1B, at the same time you want to do more HBM, so the increase there is actually quite limited. So I think those are the ways it can potentially fix the issue. 
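The node migration lever can be sketched as simple supply math: wafer count is fixed by the clean room constraint, so bit growth comes almost entirely from shifting wafers to a denser node. The density uplift and migration shares below are illustrative assumptions, not actual fab data.

```python
# Toy bit-supply model under a fixed wafer constraint: total bits depend
# only on how many wafers run on the denser advanced node (1B/1C-style).
# Densities are in arbitrary "bits per wafer" units, chosen for illustration.

def bit_supply(wafers: float, advanced_share: float,
               legacy_density: float = 1.0,
               advanced_density: float = 1.4) -> float:
    """Total bits from a fixed wafer pool split between legacy and advanced nodes."""
    return wafers * (advanced_share * advanced_density
                     + (1 - advanced_share) * legacy_density)

# Same 100 wafers, but migrating from 30% to 60% advanced-node share:
this_year = bit_supply(100, advanced_share=0.3)
next_year = bit_supply(100, advanced_share=0.6)
growth = next_year / this_year - 1
print(f"bit growth with zero new wafers: {growth:.1%}")
```

Even a large jump in advanced-node share yields only ~10% more bits here, which is why migration alone cannot close a shortage when HBM is simultaneously consuming wafers at a 3:1 bit penalty.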
But, you know, from our house view, right, even if we factor all those in, I think this year we're still going to see quite a significant shortage. One of the themes that comes up when we talk about any commodities super cycle is that, sure, higher prices will eventually elicit more production, more supply. But in the meantime, you know, there's a sweet spot for the existing producers, especially when you have just a small number of producers that can produce at scale. They enjoy massive profits, right? And they're sort of going to be reluctant to build out new fabs because that's money going out the door. It also means lower prices. So there's always this sort of Goldilocks period for them before the supply comes online where they're just raking in profits. What do we see as the industry's impulse to invest? And I'm curious, like, you know, whether it's Chinese producers, are they in a position to potentially undercut the profits of the Korean makers? Are the Korean makers seeing themselves as having a long window where they can enjoy large profits before they have to invest? What is the thinking on the supply side about those big capital outlays? Yeah, that's a very good strategic question. I think every management will probably give you a sort of different answer, and on a high level, probably the same. But I think in general, look at the historical cycle. It's like, if we are seeing quite sustainable demand in the next few years and clearly the capacity couldn't catch up with the demand, they are going to expand the capacity at some point. It's probably not going to be like, hey, we're announcing in 2026. The capacity really coming online, meaningfully, is probably 2028. So I think they will still announce, right? And we are seeing that from Micron, right? Micron is announcing a new memory fab in Singapore, right? They are expanding. They have two fabs currently under construction in the US, right? 
They are also doing tons of node migration, and recently acquired a fab from PSMC, a Taiwanese chipmaker. With all those moves, you are already seeing some signs of that. Another sign is the capex: at Hynix, Samsung, and Micron, DRAM capex has actually increased quite significantly this year, and we expect a similar trend in 2027, given that they are probably going to try to expand more capacity and invest more in both HBM and in more advanced equipment. That goes back to the point I made about node migration: if you want node migration, you need more advanced equipment to produce more advanced chips. Yeah, it really does remind me of the oil industry, where there's a bust in the underlying commodity price and everyone starts talking about being disciplined on capex spend, and so it takes a while to ramp up. I wanted to ask about, I guess, the balance between HBM and other types of DRAM. How are the actual chipmakers thinking about that? And is there a chance that — because HBM, my understanding is, has higher profit margins — everyone just pours into HBM and kind of leaves the more basic stuff in the dust? Yeah, so actually it's very interesting. We wrote about this for our institutional clients: typically we think HBM has a higher margin, which is definitely true historically. But with spot prices and contract prices going up so much, the margin on commodity DRAM right now is actually higher than on HBM. So that has created exactly the dynamic, Tracy, that you're talking about. Because when your margin on commodity DRAM is going higher, why would you make more HBM? Like, why, right?
But if you think about the long-term perspective for a memory supplier: before the whole AI boom, your demand was really driven by a handful of end markets — PC, mobile, automotive, industrial, and so on. HBM has now emerged as a new growth driver for these companies, which we believe will last for the next few years, and the memory suppliers are thinking the same. If you don't keep pushing your capacity and your technology, once you're lagging behind it's hard to come back. And that goes back to my previous point: this is an area where it's relatively easier to differentiate your products and gain market share more meaningfully. So I think all three memory makers really value HBM, even though right now the margin on commodity DRAM is higher. They will still invest quite a lot in HBM, and they will do their best to balance that against the commodity DRAM market. But this year, at least on the numbers I'm seeing, there's still quite a significant shortage. How do the Chinese producers compare to the Korean producers? Is there a gap? Oh yeah, there's definitely a gap. For memory in general, I think it's probably three years — some say it's probably four years. And that's a very time-sensitive assessment: that's today, and who knows what's going to happen in the future. But there's definitely a gap. When I think about the competitive pressure from Chinese memory makers, we always want to separate the Chinese market and the ex-China market — that's also how the memory suppliers look at it. The Chinese market is probably about 25% of global DRAM demand.
So really the competition is happening in China, because for the leading Chinese memory supplier — CXMT, for example — I believe something like 90% or 95% of revenue comes from China and Hong Kong. Meaning most of their DRAM is really competing in the Chinese market, competing with Micron, Hynix, and Samsung. I would say the high end is still dominated by the incumbent makers, but CXMT is gaining momentum. For one, they are capturing the low-end and mid-range products. And they also benefit from the Chinese government's self-sufficiency policies, whether for commodity DRAM or — right now — the push for HBM development to support Chinese AI hardware. So if there's a shortage, and it seems like the shortage is going to be with us for some time, how are the chipmakers actually allocating what supply they have? Does it go to the company that they think is going to be really big and important in the future — like, I don't know, an OpenAI or something? Or does it go to an existing customer that's been buying in volume for years? Yeah, there's no doubt to me that the highest-tier customers will get the volume, for sure. Of course, there's a more complex pricing negotiation behind that, but volume will still be the most important thing, and I think that's what they're going to do. And besides thinking about it by customer, I would think about it by sector. Server DRAM and HBM together will be the top, top priority. Because if you look at the whole DRAM market, HBM and server DRAM together are probably more than half of it, and that segment is growing so fast. Unlike mobile — mobile is kind of flat. Mobile demand is really driven by increased DRAM content per phone, which is quite limited.
If you've bought an iPhone over the past few years, you know the DRAM content increases have actually been quite limited. So those are the top two priorities for the memory companies to focus on over the next two or three years. And we'll see what happens after the second half of 2027 or 2028, whether there's some structural change, but I think server DRAM and HBM will be the top priorities. You know, the other day I was very lazy, which is not rare for me. Just that one day. Just one day. And I used Claude Code. I have a bunch of screenshots on the desktop of my computer, and they make my desktop look like a mess and I can't see my folders, so normally I'll drag and drop them into the recycling bin. And I was very lazy, so I just told Claude Code: clean up my desktop. Clean up my desktop. And I was like, this is so insane. I'm using Anthropic's computers, wherever they are on the other side of the country, to clean up my own computer's desktop. Why do I even have my own computer? Did it do a good job? Yeah, of course. It did great. And I started wondering: why do I even have a computer? So, related to this DRAM question: could we get into a situation in which people don't really have their own compute on their own devices, because Anthropic's brain — or OpenAI's brain — is controlling the things we use? Why not just have a very low-spec computer or phone, with low memory, because I'm already using their computers anyway? Are we going to see, whether in memory or elsewhere, this migration where all the interesting stuff, all the memory and all the compute, happens elsewhere, and my phone or my computer just becomes a screen with internet access?
Yeah, I mean, I don't know about the idea of not having personal devices. OK, I might still have a physical device, but over time, does it make sense for me to have all of this actual compute inside my house when I might as well let them have it all? Yeah, I would say it depends on your end purpose. Take a laptop, for example: if you're doing very heavy video editing, you probably still need quite a lot of DRAM. If you're just doing document work, it depends on how you use it. If you're using it super intensively, I think you still need very good DRAM, especially if you're pulling in all different APIs from different places. So it really depends on the end purpose. I don't think — at least I haven't been seeing — that there will be a structural DRAM demand destruction in devices from agentic AI. So going back to the beginning of this conversation and the cyclicality of the industry: people have been throwing around the term supercycle again, a very commodities term. Is this just a supercycle, maybe bigger than previous turns we've seen throughout history? Or do you think something structural has shifted here, perhaps given the rise of AI and the fact that Joe is using Claude to clean up his desktop? This is actually a very dangerous question. You are basically asking whether this time is different. I think there are a couple of things that rhyme with history: node migration, fab capacity, those things. But I would say there's a difference I'm seeing, because we rarely see in a supercycle a new demand driver coming online that not only drives demand but also constrains supply. For example, in 2010 to 2012, there was a supercycle driven by mobile.
So mobile was basically: hey, we have a new product coming online, it requires a lot of DRAM, and our capacity couldn't catch up. But right now with HBM, it's: OK, we need to do HBM, but HBM is actually constraining your commodity DRAM capacity. So I think that's the key difference we are seeing this time. And the demand acceleration has also lasted quite long. If you start counting the AI demand acceleration from, say, the second half of 2023, when NVIDIA really started capturing people's attention, to now it's almost two and a half, three years. And we are looking at this cycle lasting until — let's say, safely — the second half of 2027. So we are looking at a four-year cycle. That's quite rare in history. Usually, if you look at memory historically, a cycle is probably 15 months, at the longest maybe 18 months from start to peak. But right now we're in a situation where demand — not the price directly, but the demand — has been growing quite significantly over the past few years, and the pricing impact is happening right now, starting from probably Q2 2025. Well, when you say the cycle could end by 2027, or the second half of 2027, what are the ingredients for it to slow down? Is it more production capacity? Is it a slowdown in total demand? What do you see potentially happening in 2027 that brings supply and demand more into balance? Yeah. So I would say when it will end is a trickier thing to estimate. Because based on the numbers I'm modeling, demand is actually going higher and the shortage is actually going to get worse. And part of that is because for an NVIDIA server, the DRAM content is going to 3x compared to Rubin. Sorry, what kind of server? Oh, an NVIDIA server. The Rubin 200, yeah.
Sorry, Rubin Ultra, the Rubin 300. OK. And if that's the case and the demand continues to be that strong, your HBM shortage should be even worse compared to 2026, and your commodity DRAM shortage should get a little worse too, because the memory makers are going to try to make more HBM wafers, and that crowds out more commodity DRAM. I think the reason I say it's tricky is two things. I mentioned a lot of node migration: it's happening this year, and it's also going to happen in 2027. So tons of node migrations are going to come online and be completed, and that will add a lot of additional bits for the memory suppliers. Additionally, a lot more wafer capacity should come online by the end of 2026 and throughout 2027. So those are the two variables you want to factor in. From what I'm seeing, at least, I think 2027 is still going to be in shortage. Yeah. We need a strategic memory chip reserve, right? Put a floor on prices and the cycle. Actually, on that note, do you see a lot of stockpiling in the market, or panic buying right now, where people are seeing these forecasts and freaking out and just buying whatever they can? That's how you get a bullwhip effect. That's right. Yeah, it's hard to say no. One of the things you're seeing at Hynix, Samsung, and Micron is that their inventory has been dropping every single quarter since, I think, Q3 2025. And that's one of the signals that you are not just buying the products on the shelf — the products that come online that quarter — you're also buying the products in their inventory. You are trying to get as much product as you can. And given all the demand, and how fast AI has been developing on basically a monthly basis, it's hard to say there's no preemptive purchasing behavior happening.
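The cannibalization dynamic Ray describes — HBM wafers eating into commodity DRAM output under a fixed wafer budget — can be sketched numerically. The 3:1 wafer-area penalty for HBM used below is an illustrative assumption (reflecting larger dies, die stacking, and yield loss), as are all the wafer counts:

```python
# Toy model of HBM cannibalizing commodity DRAM supply.
# Total wafer starts are fixed; every wafer diverted to HBM is a wafer
# of commodity DRAM bits that never gets made, and (by assumption here)
# one HBM bit costs roughly 3x the wafer area of a commodity bit.

def commodity_bits(total_wafers: float, hbm_wafers: float) -> float:
    """Commodity DRAM bit output left after HBM takes its wafers
    (1 commodity wafer = 1 unit of bits, by construction)."""
    return total_wafers - hbm_wafers

def hbm_bits(hbm_wafers: float, hbm_penalty: float = 3.0) -> float:
    """HBM bit output under the assumed ~3:1 wafer-area penalty."""
    return hbm_wafers / hbm_penalty

total = 100.0  # fixed wafer budget (illustrative units)
for hbm_share in (0.10, 0.20, 0.30):
    hbm_w = total * hbm_share
    print(f"HBM wafer share {hbm_share:.0%}: "
          f"commodity bits {commodity_bits(total, hbm_w):.0f}, "
          f"HBM bits {hbm_bits(hbm_w):.1f}")
```

Under these assumptions, shifting 10 points of wafer share to HBM removes 10 units of commodity bits while adding only about 3.3 units of HBM bits — a dual shortage, which is why this cycle differs from a mobile-style demand boom that left the supply side intact.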
And especially when the hyperscalers and the leading AI labs are competing with each other so intensively, securing capacity on the hardware is sort of the baseline. You want the best equipment so you can compete. So obviously, if I'm buying a Nintendo Switch or a PC or some sort of consumer device, the increased memory cost is a meaningful driver of the price, and maybe the company will have to raise prices, and maybe I won't buy it, and you get that effect. But what about the big hyperscalers? When we see these huge capital expenditure numbers from the likes of a Microsoft or an Amazon, do these price changes move the dial at those levels? Do they affect anything when we're talking about these buyers? Yeah — number one, for sure, the capex increase is not because of the direct price increase. Sure, yeah. But the DRAM price increase is going to have some kind of impact on their memory purchases — assuming the hyperscalers are the direct buyers of the memory, which is not necessarily the case in every situation. But assuming they are the direct buyers, for sure: if your initial plan was to use $100 million to buy a certain amount of memory, that same budget now buys you less, and that gap is probably going to get bigger and bigger in the coming quarters. So one of the things they are trying to do — hopefully — though it's still very difficult, is to get long-term agreements: committing to large volumes on a yearly basis in the hope of getting better pricing. But it's really hard to achieve, because why would the memory supplier want to do that, right?
If we can negotiate the pricing on a quarterly basis, we can actually get more money — especially given the current pricing and supply-demand environment. Ray Wang, thank you so much for coming on Odd Lots. Really the perfect guest. Let's stay in touch, and maybe we'll revisit this in 2027 to see if things have eased. Yeah, hopefully. All right. Thanks so much, Ray. Take care, Ray. That was great. Take care. Thank you. Tracy, that was fun. I do feel like there are a lot of moving pieces there, but with AI in general, the crowding-out effect is such a big part of the story. When you see those big capital expenditure plans for 2027 — these are real macro drivers. They're going to show up in CPI and things like that. And maybe we'll get a lot of productivity gains in the future, but right now the pace of spending is so furious. It's like a fiscal boom. Yeah. A fiscal stimulus. This is what happens when some of the world's biggest and most cash-rich companies decide that this is an existential threat. Right. There's no upward limit on how much they're willing to spend in order to survive. And so you could just throw money at HBM chips, I guess. And then you get the chipmakers going, well, actually, we want to produce a bunch of HBM — forget about DRAM and all that stuff. I do think, when it comes to some of the data center and energy stuff, it's a little ambiguous how much it's affecting energy prices right now, but already the politics of that is very fraught. And then you're going to start upsetting the gamer community, because they can't get — you know, NVIDIA has talked about curtailing some of it. Big mistake. And then they're going to upset my son because he can't get a Nintendo Switch, et cetera. People are just going to start feeling it more and more — the visceral reality that various resources they thought they could get abundantly...
It's like, nope, we've switched this line over to the data centers, over to AI — this crowding-out effect is going to become more and more real to people directly. Which is so ironic, because when you think about AI, it's this thing that exists in the ether, right? But at the same time, it has this huge physical impact on the real world. It's kind of funny and, I guess, not what a lot of people would have expected. No, I suppose not. But the DRAM — anyone who has a terminal should just look up DRAM prices. I thought you were going to say anyone who has a terminal should start stripping out their memory chips. Everyone who has a terminal should just unscrew the back and pull out the DRAM. No, but people should just look at the chart, because here is this thing that was very sleepy. I mean, this was truly commoditized tech. This was the low end in terms of what people were excited about in chips. And as Ray mentioned at the beginning, costs were generally on a downward trend, because growth was modest and the technology kept improving. And then the chart literally just looks like an L — really, in like the last four months, it's gone completely nuts. And so, yeah, kind of a fascinating spot to watch. And it's just interesting, to the point you were making, how much it really is like a commodity supercycle. Yeah. A commodity cycle. Well, I guess in 2027, maybe we'll find out. We'll see if it balances out. Shall we leave it there? Let's leave it there. All right. This has been another episode of the Odd Lots podcast. I'm Tracy Alloway. You can follow me at Tracy Alloway. And I'm Joe Weisenthal. You can follow me at The Stalwart. Follow our guest, Ray Wang. He's at rwang07. Follow our producers: Carmen Rodriguez at Carmen Armand, Dashiell Bennett at Dashbot, and Kale Brooks at Kale Brooks.
And for more Odd Lots content, go to Bloomberg.com slash Odd Lots for the daily newsletter and all of our episodes. And you can chat about all of these topics 24/7 in our Discord, discord.gg slash oddlots. And if you enjoy Odd Lots — if you like it when we talk about DRAM and HBM and the other acronyms of the memory chip industry — then please leave us a positive review on your favorite podcast platform. And remember, if you are a Bloomberg subscriber, you can listen to all of our episodes absolutely ad-free. All you need to do is find the Bloomberg channel on Apple Podcasts and follow the instructions there. Thanks for listening. Thank you. Thank you.