What has *Actually* Happened in AI in 2026? | AI Reality Check
73 min • Apr 23, 2026
Summary
Cal Newport and AI commentator Ed Zitron review three major AI stories from early 2026—OpenClaw's hype cycle, Anthropic's military contracts and ethical claims, and the data center infrastructure crisis—concluding that the year has been bad for AI despite breathless media coverage that obscures the lack of actual progress and mounting financial instability.
Insights
- OpenClaw's viral adoption was driven by LLM prompt injection and media credulity, not genuine AI advancement; it revealed that frontier models are expensive and unreliable, pushing developers toward cheaper open-source alternatives
- Anthropic leveraged military involvement and ethical positioning for PR benefit while remaining embedded in defense operations, demonstrating how companies use selective transparency to manage reputation
- The data center boom is largely illusory: only 15 of 115 announced gigawatts are under construction, yet NVIDIA has sold GPUs for capacity that doesn't exist, creating a financial house of cards similar to pre-2008 mortgage securitization
- AI startups are fundamentally unprofitable wrappers around third-party models with inverted unit economics—more users mean higher costs, making venture-backed exits increasingly unlikely
- Media coverage uses 'dread laundering' to amplify AI existential fears while ignoring concrete failures and contradictions, creating emotional exhaustion that prevents rational assessment of actual capabilities
Trends
- Shift from frontier model dependency to smaller, cheaper open-source models, driven by cost constraints rather than capability improvements
- Illusory infrastructure spending: announced capacity far exceeds actual construction, indicating a speculative bubble in data center financing
- Venture capital AI winter approaching as unprofitable startups face fire sales and acquisition collapse, with limited exit opportunities
- Military-industrial integration of AI companies proceeding despite ethical claims, with selective transparency used as reputation management
- Decoupling of media narrative from technical reality: hype cycles complete in weeks while actual product failures go uncovered
- Private credit market exposure to AI infrastructure debt creating systemic risk through retirement and insurance fund exposure
- GPU inventory accumulation in Taiwan warehouses via ODM supply chains, indicating demand destruction or accounting manipulation
- Anthropic's service degradation and rate-limit changes contradicting its growth narrative while revenue claims remain unverified
- Regulatory scrutiny emerging around AI company accounting practices and revenue recognition methods
- Consolidation of AI research talent into large tech companies, eliminating acquisition targets for venture-backed startups
Topics
- OpenClaw agent framework and LLM prompt injection effects
- AI media coverage credibility and dread laundering
- Anthropic military contracts and ethical positioning
- Data center infrastructure capacity and construction delays
- NVIDIA GPU sales, inventory, and revenue recognition
- AI startup unit economics and profitability challenges
- Venture capital AI funding and exit strategy collapse
- LLM cost structure and scaling limitations
- Private credit market exposure to AI infrastructure
- Open-source model adoption versus frontier models
- Anthropic service degradation and rate limiting
- ODM supply chains and GPU warehousing
- AI company revenue verification and accounting practices
- Distributed AGI versus monolithic LLM architectures
- Financial system risk from AI speculation bubble
Companies
OpenAI
Acquired OpenClaw framework; claimed to build agents but likely motivated by competitive threat from cheaper alternat...
Anthropic
Military contractor using ethical positioning for PR; revealed $5B lifetime revenue in court filing contradicting cla...
NVIDIA
Sold over $500B in GPU commitments with only 15 of 115 announced gigawatts of data centers under construction; invent...
Microsoft
Spending $37.5B quarterly capex on servers from ODMs; data centers taking longer to build than anticipated
Amazon
Claims Project Rainier 2.2 gigawatt data center operational but actual capacity significantly lower than stated
Google
Major AI infrastructure investor facing same data center construction delays and GPU capacity constraints
Meta
Large-scale AI infrastructure investor with significant capex commitments facing deployment delays
Quanta Services
ODM supplier with rising inventories; manufactures servers with NVIDIA GPUs for hyperscalers
Foxconn
ODM supplier manufacturing AI servers with NVIDIA GPUs for major cloud providers
Broadcom
Announced 3.5 gigawatt chip deal with Anthropic with unclear deployment path
Perplexity
AI startup facing potential fire sale as venture capital exits become difficult
Cursor
AI coding startup signed GPU rental deal with xAI, exemplifying unprofitable AI startup model
Character.ai
Acquired by Google for several billion dollars; acquisition price mostly went to founders and investors
Inflection AI
Sold to Microsoft for ~$1B; acquisition price primarily benefited investors rather than creating product value
Windsurf AI
Coding company acquired by Google for $2B; most employees laid off post-acquisition
Nebius
Announced data center projects not under construction; former crypto company raising capital without building
CoreWeave
GPU rental company buying from ODMs; represents speculative AI infrastructure play
Fermi
Project Matador 11 gigawatt data center announcement; CEO departed and contractors unpaid
xAI
Renting GPUs to AI startups like Cursor; part of speculative AI infrastructure ecosystem
Axios
Published credulous OpenClaw coverage with sensationalized headlines about AI singularity
People
Ed Zitron
Guest analyzing AI industry stories; critical perspective on hype cycles and financial instability
Cal Newport
Hosts episode reviewing AI stories; synthesizes technical and financial implications
Dario Amodei
Made military contract statements; claimed ethical boundaries on surveillance and autonomous weapons
Jensen Huang
Promoted OpenClaw and Nemo Claw at GTC 2026 despite unclear business case; company faces GPU inventory crisis
Sam Altman
Claimed military contract negotiations; acquired OpenClaw framework
Emil Michael
Questioned Anthropic's reliability citing CEO claims about AI consciousness; designated company as supply chain risk
Peter Steinberger
OpenClaw framework creator; received hundreds of millions in acquisition proceeds; continues posting about AI
Jack Clark
Repeatedly uses evolution simulator example as proof of AI capability; example is 50-year-old concept
Noam Brown
Created Cicero diplomacy AI; example of modular architecture combining LLM with planning and policy networks
Elad Gil
Advised AI startups to seek exits within 12-18 months, indicating market saturation and exit difficulty
Michael Burry
Raised concerns about NVIDIA GPU inventory and sales claims weeks after Ed Zitron's analysis
Andrej Karpathy
Referenced as example of 'smart' person who fell for OpenClaw hype despite technical background
Mustafa Suleyman
Received significant proceeds from Inflection AI acquisition by Microsoft
Denis Yarats
Acquired by Google; example of big tech snapping up AI research talent
MacKenzie Sigalos
Criticized for laundering reputations of data center companies through credulous coverage
Quotes
"I think this speed and lack of accountability can create a sense of overwhelming disruption and change that can really be pretty disquieting."
Cal Newport•Opening
"Every time we talk, it's like there's very big news and everyone's like, oh look, we've got a new number, it's even higher than usual, but the actual underlying economics and infrastructural layer, even just the service performance, is worse, and it's very strange."
Ed Zitron•Early discussion
"It's just LLMs doing what they think a social network looks like (and I shouldn't even have said the word 'think'): spitting out what the model would say is likely to be a social network post."
Ed Zitron•OpenClaw discussion
"The media and the AI community is so desperate for a hero. They know, deep down in their soul, that something is wrong, that none of this makes sense. So the moment when anything, even directionally, feels like it proves that they're not wrong, they grab it and they shake it vigorously."
Ed Zitron•Media criticism
"I think the future is you have multiple different things, most of which are just hand-coded by a person. And maybe you have an LLM in there if there's language involved, because it's pretty good at that: if it needs to speak to someone or interpret something."
Cal Newport•Future architecture discussion
"Only 15 of 115 announced gigawatts are actually under construction right now, with the rest in a liminal pre-production stage, in which they could and likely will be canceled."
Cal Newport•Data center story
"NVIDIA has already sold too many GPUs; it has sold more GPUs than there are data centers actually being built for them."
Ed Zitron•GPU inventory crisis
"Every single AI startup is a wrapper of a model owned by someone else, because the core thing is that you cannot control the cost of a user with an LLM. You can't do it."
Ed Zitron•AI startup economics
Full Transcript
AI news comes at you fast. Each article feels more breathless and more terrifying than the last, but before you have a chance to see how any particular story turns out, there are 10 more in its place. I think this speed and lack of accountability can create a sense of overwhelming disruption and change that can really be pretty disquieting. Well, it's Thursday, which means it's time for an AI reality check episode, so I thought this would be a great opportunity to try to slow down this news onslaught and get a better sense of what has actually been happening in the AI space recently. All right, here's my plan. I've invited the AI commentator, Ed Zitron, to join me. And we're going to look at three of the biggest stories about AI to land in 2026 so far, including one in which Ed is actually very much involved. And what we're going to do is, for each of these stories, we're going to take a closer look at what actually happened and how things have since turned out. Our goal by the end of the episode is to answer a simple but critical question. Has 2026 been a good or bad year for AI so far? We have a lot to cover, so let's get right into it. As always, I'm Cal Newport, and this is Deep Questions, the show for people seeking depth in a distracted world. And we'll get started right after the music. All right, Ed, well, it's been three or four months since you were last on the show, and there's been some big AI news since then. So I wanted to have you on to go through some of the big stories that have happened since January.
And because you're a commentator who is, maybe I should say this, less impressionable than the average AI commentator, I figured your point of view is good for my reality check audience. We're going to try to end this discussion by voting on whether or not 2026 has been good or bad for AI so far. But what's your pre-vote? Where do you think, based on what you know, you're going to end up here? Probably not a good time for them. It's just, every time we talk, it's like there's very big news and everyone's like, oh look, we've got a new number, it's even higher than usual, but the actual underlying economics and infrastructural layer, even just the service performance, is worse, and it's very strange. Well, this is part of the reason why I like doing these reviews with you. Often the story will be big, everyone will get worried about it, people will call people like you and I for quotes, and then everything moves on and there's no follow-up. And I think it's useful, for calibrating how to react to the news story you're hearing now, to occasionally go back and say, hey, what happened with that story that had me worked up a couple months ago? Which brings us to a great place to start, because what was the first big story of 2026? I think arguably it would be OpenClaw, which I believe became generally available to the public later in January. Now, I've broken this up into two sub-stories. I want to start with the easily dismissible one, just because it's fun, and then get to the more serious one. I'm going to read you a quote and we'll get into it. So the easily dismissible but fun aspect of this story is when someone launched Moltbook, a social network that was configured so that it is easy, if you're writing an OpenClaw agent, to post on it. So they add hooks into it, so it was easy for your OpenClaw agents to post and read things from this social network. For about four days, everybody went crazy about Moltbook.
I'm just going to read you a quick quote from your favorite publication, Axios, from the end of January: "Imagine waking up to discover that the AI agent you built has acquired a voice and is calling you to chat while comparing notes about you with other agents on their own private social network. It's not science fiction. It's happening right now, and it's freaking out some of the smartest names in AI." Well, you're a smart name in AI, Ed, so are you still freaked out about Moltbook? No. The moment I saw it, I'm like, ah, this is just LLMs. This is just LLMs doing what they think a social network looks like (and I shouldn't even have said the word "think"), spitting out what the model would say is likely to be a social network post. And then the second thought I had was, this is fake. As in, well, 100 percent there are regular people just using their OpenClaws to post on here. These don't read like LLMs. They didn't read like LLMs, in some cases. Some cases they did, but some of them were just... like, I saw someone post a slur within one hour. I'm like, okay, this is just a regular person using that. "Regular" is probably the wrong word; a person is using this as a means of posting. And it's funny when you say "the smartest people" as well, because I think that term no longer has any value. That's like Andrej Karpathy, who is... it's just, the term "smart" at this point, does that just mean they got good grades at school? Because if that's the case, we are completely screwed. Like, if we think only the people who got good grades are smart, then I don't know what to say for the world, because the people that fell for Moltbook, that was insane. They were like, oh, it's AGI. It's as if they forgot how large language models worked, or never learned in the first place. Well, I don't think they understood what OpenClaw was or what Moltbook was or what any of this was, other than it involved lobsters. Yeah, and they heard, Agent! Agent! It's your tournaments! The quarterback mini!
I did a little digging here. Axios moderated the headline, but I thought it was worth noting, because I think we memory-hole a lot of this coverage. The original headline was "We're in the singularity": New AI platform skips humans entirely. But it did the trick where you put the quotation marks around the first part, so technically you are not declaring that to be the case; you are quoting someone. This one got fully memory-holed, right? No one talks about Moltbook. I mean, I think I covered it on my show at the time. I said, yes, people are just telling their LLMs to post. LLMs write stories; they finish the stories you tell them to write. There's actually good research here. This came up in my doctoral seminar I'm teaching on superintelligence, which is great, because it's like ten doctoral students who just do AI research. I'm learning a ton from them, and they know the literature even better than I do. And they're saying there's really good research out there that whenever you do any prompting of an LLM, if anything in your prompt in any way indicates that you're prompting an AI, almost always it goes into sci-fi mode. So you can ask the same question, and if you say, "You are a journalist, please answer this question," it will give one answer. And if you say, "Well, you're an AI, so how do you think about blah, blah, blah?" it always will go towards dystopian themes of AI coming alive. So it's very easy to prime, and I think a lot of that was going on with OpenClaw. People would say, please go post on this social network, and they just wrote AI-type stories.
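The priming experiment described here boils down to asking the same question under two personas and comparing the tone of the answers. A minimal sketch of that design, with the model call left out (all names and prompt strings are illustrative; a real study would send both prompts to an actual model):

```python
# Sketch of the persona-priming setup described above: the identical question
# framed two ways. The claim is that the "AI"-framed prompt reliably elicits
# sci-fi/dystopian themes while the "journalist"-framed one does not.
# Everything here is a hypothetical illustration, not any paper's actual code.

PERSONAS = {
    "journalist": "You are a journalist. ",
    "ai": "You are an AI. ",
}

def build_prompts(question: str) -> dict[str, str]:
    """Produce the matched prompt pair used to measure persona priming."""
    return {name: prefix + question for name, prefix in PERSONAS.items()}

prompts = build_prompts("How do you think about your own future?")
# Each prompt would then go to the same model, and the answers get compared.
```

The point of the matched pair is that only the framing differs, so any systematic shift in the answers can be attributed to the persona cue alone.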
Right. But it was covered very credulously, I would say, which is the key point. Pretty much par for the course. I mean, I don't know if we want to wait until the second part of this, but the OpenClaw thing is one of the most insane things I've seen in the tech industry. It may even be crazier than the overall LLM boom. Well, let's get into the second part. I have some quotes. Let me read you the quote and then let's get in. Yeah, read the quote. This is a representative person talking about OpenClaw in early February, late January. And this tone (this is what I'd call AI Enthusiast, it's such a known tone) is going to sound very familiar. "For the past week or so, I've been working with a digital assistant that knows my name, my preferences for my morning routine, how I like to use Notion and Todoist, but which also knows how to control Spotify and my Sonos speaker, my Philips Hue lights, as well as my Gmail. It runs on Anthropic's Claude Opus 4.5 model, but I can chat with it using Telegram. I called the assistant Navi, inspired by the fairy companion of Ocarina of Time." Not the ocarina of time, the game? Yeah. All right, nerd. Zelda. Oh, okay, I get you. No, no, no, it's just a really weird choice. Well, he makes a point it's not the James Cameron movie Navi, so there we go. Okay. "And Navi can even receive audio messages from me and respond with other audio messages generated with the latest ElevenLabs text-to-speech model. Oh, did I mention that Navi can improve itself with new features, and that it's running on my own M4 Mac mini server?" And also, I just got fired, because I just spent 100 hours setting up Navi instead of doing my job. And I now can't pay my rent because I spent four thousand dollars a month on API calls. Yeah. Like, oh, that's the other problem. Okay, so that's OpenClaw. So my understanding is, it's a Python library, which makes it easy to write your own agent, an agent being code that calls an LLM and then uses the response from the LLM to help drive its behavior. So you can say, hey, LLM, what should I do, and then it does it. OpenClaw made it easy for people to write their own. So people all around the world began destroying their computers and leaking it all, because it's actually hard to write. It is, but here's the thing: even that term gives it too much credit. It just does what LLMs do. Like, I read this thing on one of the Mac websites where it was like, oh yeah, I had it build a website, and it's just the most generic-looking vibe-code slop ever. Oh, I had it transcribe my voice notes. Like, yeah, okay, it's doing what LLMs do. Oh, and it's able to write stories. So, LLMs? And this is the weirdest thing. The thing that really confused me is, on top of the credulous media coverage (and pretty much everyone who covered this should be ashamed of themselves; I think most people did the worst job possible), I read most OpenClaw coverage, because I was trying to work out what it did. God's honest truth, I was like, what is this?
But you read, like, The Atlantic, and it was like... was it The Atlantic or CNBC? They were like, this is another ChatGPT moment, quoting Jensen Huang, because of the fast adoption, that a lot of people tried it, and then they looked at that chart and said, well, this is a big deal. But the thing is, the "fast adoption" is slop. It's slop commits on GitHub, and also Mac minis selling out in the greater Bay Area. But the thing that was crazier to me, other than all the credulous coverage, was NVIDIA's GTC 2026. A four-trillion-dollar-or-so market cap company, right? That's their conference; GTC is the big conference. Yeah, yeah. And you got a 3D AI-generated picture of Jensen Huang, the CEO of NVIDIA, with lobster claws, and they released this thing called Nemo Claw, and they're like, oh, this is the ChatGPT moment, this is the agentic future. And it's like, what are you talking about, mate? Did you just get in a car accident? Do you have a concussion? You just steered your company... like, a year ago GTC was Jensen going out with full swagger, being like, yeah, we've got Vera Rubin, we're going to do this 10x more efficient, woo, shooting guns in the air. He signed a woman's boob last year. This year he's like, yeah, we've got Nemo Claw. Got Nemo Claw? You want to try Nemo Claw? Ah, you like that? Jingling the keys again. Do you like Nemo Claw? What? But please spend $125,000 on a GPU. You need to buy a Vera Rubin, even though we don't have anywhere to put it, as we'll get to. But it's just so weird, because when you actually get down to it, it's the classic LLM story. It's like, okay, what are you talking about? It's a new agentic interface for managing programs. It's an LLM. Is it a chatbot connected to an API? Yeah. It's like the Donnie Darko meme. What's the Donnie Darko meme?
It's like, I forget what the line is in the movie, but it's like: oh, I've managed to create a new agentic workflow. Is it just an LLM connected to an API? Yeah. Because that's every story. Every story I've read, it's just, do you have two LLMs bonking each other's heads? Is that what's happening? Great, okay, I'm very impressed. We need to have the largest company on the stock market do something about this, pronto. It's hysterical. I think that's an important point, because I do think when the average person hears about things like OpenClaw or different agents, they're often thinking there's a new artificial intelligence technology, right? Right. That there's a new... we built... OpenClaw is a new digital brain that can improve itself, and it's learned how to do things that prior models haven't. And I think what people don't understand is that OpenClaw is a Python library. It's a Python library that makes it easier to write a Python program that can make calls to LLMs. And you can aim it at whatever LLM you want. The LLM is, somehow, like, that is the brain, but there's nothing new. There's no new LLM for OpenClaw. It's a library that makes it easy for the average person to say, I'm going to write my own agent. It turns out agents are hard to write, because LLMs write plausible stories, but as we've learned, they're not often really good, carefully checked plans for doing things. And so it causes a lot of problems. If you say, hey, LLM, give me a plan for doing stuff with my personal data, and then you have a program that just automatically implements that, it turns out sometimes bad things happen. But here's my two useful things. I'm going to say there's two useful things about OpenClaw. One, because a lot of people began experimenting with building their own OpenClaw agents, one of the quick things they discovered is, oh, the big frontier LLMs are expensive.
And they were racking up thousands of dollars of token costs, the API calls to Claude or to GPT. And so it got a lot of the real booster tech enthusiast types to start looking at much smaller, much cheaper models, because they just literally couldn't afford it. This is why I think OpenAI bought OpenClaw. Well, there's an important detail, though. Okay, please. It's important to know where this was in history. So OpenClaw came out January-ish. Yes. Now, during this period, you used to be able to connect your Anthropic Claude Max account, a 200-buck-a-month account, to OpenClaw. So you weren't paying for API calls; you were just using Anthropic's services. That's "unlimited": you pay $200 and, yeah, you have a rate limit, but you can use it as much as you want up to that rate limit, and you can spend like thousands of dollars of API calls. And that's been proven; there's a coder called Shellac who did a study on it. This is where you get the number; you often quote a number about how much it's actually costing per token versus what they're charging. This is where, partially, that number is coming from? Yes, yes. So it works out to somewhere between $8 and $13.50, weird way of saying that, per dollar of subscription. So you're able to burn like $2,700 on the Anthropic subscription. For $200. You're paying $200; it's costing them $2,700. Yes, exactly. Sorry, kind of a kludgy explanation.
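The arithmetic in that exchange is easy to make concrete, using only the figures quoted in the conversation (the $8 to $13.50 range and the $200 subscription):

```python
# Reproduce the subscription-economics figures quoted above: a $200/month
# plan whose heaviest users can burn an API-equivalent cost of $8-$13.50
# per dollar of subscription revenue.

subscription = 200.0          # dollars per month for the Max plan
ratio_low, ratio_high = 8.0, 13.5  # quoted cost per subscription dollar

cost_low = subscription * ratio_low    # 1600.0
cost_high = subscription * ratio_high  # 2700.0

loss_high = cost_high - subscription   # 2500.0 lost per such user per month
print(cost_high, loss_high)
```

At the top of the quoted range, each such user consumes $2,700 of compute against $200 of revenue, which is the "burn $2,700 on the Anthropic subscription" figure in the transcript.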
So Anthropic let this happen. The reason that OpenClaw got so big... Anthropic sued them because they were called, like, Claude-bot at first; Claw, C-L-A-W. But nevertheless, Anthropic allowed this to happen. Then February 12th they raised the $30 billion round, and a couple weeks later OpenClaw was cut off. The aristocrats! It's just that Anthropic is such an unethical company; they should have never let it happen to begin with. But one of the reasons that OpenClaw got so big was both using those cheaper models but also using those Max subscriptions. And so OpenAI buying OpenClaw was so funny. OpenAI is just Meta. It's Meta plus Enron. And it's so funny watching them. Why would you buy this? What possible reason? "Oh, we can build agents with it." What do you mean? No. Why? They have much better frameworks. Well, I have two explanations. Let's get back to... you tell me which one you think is more likely. So this, maybe, is giving too much savviness credit to them. The savvy take is: I think it was a real problem for a lot of enthusiasts to discover, oh, wait a second, if we use really cheap open-weight models, open-source models, or even just three-billion-parameter models we can run on our own machine, we get pretty similar results. Like, actually, we don't need a ten-trillion-parameter super frontier model to read my emails and to add appointments onto my calendar. I think that's really terrifying if you're a company like Anthropic that's just taken on $60 billion in investment, or you're OpenAI; it's like, we need people to think that these are the big brains and nothing else matters.
So the conspiratorial slash business-savvy interpretation would be: OpenAI needs to slow the roll on that, or make that tool much more native to its models, because they really do not want a generation of AI enthusiasts to say, oh, wait a second, Kimi is a fraction of the cost and it does just as well. Right. The other way of thinking about it, it's like them buying that podcast show recently, TBPN. Yeah. That's just, we're buying things left and right because we have money; we're not quite sure what to do. Yeah, I don't know which one it is. It's probably number two, because they're going to keep running OpenClaw. They've said that already; they're going to keep running it, and people are still using open-source models. So it's kind of like, I just think that they were buying stuff because they thought, crap, we don't have an OpenClaw, what if we just bought it? It's rich-kid syndrome. Like, both OpenAI and Anthropic act like rich kids. I went to a private school. I'm not proud to say it; I was the dumbest kid in the private school. I did not do well, bottom of my class every single year, failed multiple languages, like genuinely, legendarily terrible. I barely scraped through. And my parents scraped by to get me there as well; good on them. But I met a lot of these kids, and what they do is, when they don't want to learn something, when they don't want to build knowledge, when they don't want to put something together of their own, they just acquire. It's like, Dad, go and buy me that. Daddy, go and buy me a boat, buy me whatever. And OpenAI doesn't know what they're doing, other than they have a lot of money, so they can spend it. And I think they bought it thinking, wow, this will be a backdoor into Anthropic a little bit; we'll be able to see what Anthropic does more, because lots of people use this, and we can somehow see how Claude is running agentically. Or they bought it to kill it. That's what I think. But the other thing is, Peter Steinberger, or whatever he's called, he's still farting around. That guy, I don't know if you've ever read his posts, but he is constantly working. Yeah. And I don't give him a ton of credit for that, because it feels like a depressed person. But also, I've heard he got hundreds of millions of dollars for it as well. So it's like, if I had that much money, you wouldn't hear from me again. I would disappear. Well, no, I'd keep posting. But it's strange, because it's like, what are you actually working on? And I think he vibe-coded a lot of it as well, which is even more terrifying, and there are massive security issues as a result. It is like a psychosis unto itself. And what I think... I know we talk a lot about the media stuff, but what I think it is, is the media and the AI community is so desperate for a hero. They know, deep down in their soul, that something is wrong, that none of this makes sense. So the moment when anything, even directionally, feels like it proves that they're not wrong, they grab it and they shake it vigorously. They just go, this has to be it. This is going to be the thing. And if we love this enough, it can be a real boy. And it never is. Like, OpenClaw is gone. Just, no one's talking about it anymore. No one cares. Searches on Google have gone down. Yeah, I just looked for it. It's minimal. I checked this morning. It's been minimal coverage. I mean, it's kind of around, but it's become a niche topic. Well, let me tell you my second thing that I think is good about OpenClaw. The second thing is, I think it actually points towards what I think is the healthy, sustainable future of AI, which is smaller, task-specific, and much more modular architectures. So not built around a single AI entity like an LLM: bespoke AI systems that do specific things.
If I want to play poker with AI, there is a great AI system to play poker with. If I want to do certain types of digital VFX work, there are really good AI systems that are made to do that. I think that's the future. But are those all LLMs? No. Well, no, they're not, right? Or they have LLMs in them. That's why I say modular architecture. I think the future is you have multiple different things, most of which are just hand-coded by a person, and maybe you have an LLM in there if there's language involved, because it's pretty good at that: if it needs to speak to someone or interpret something. I point towards the Cicero model as the great example of this: Noam Brown's AI system that plays the board game Diplomacy. It has an LLM in there, a small one, for chatting with the other players and then converting what they say into a sort of more technical language that the rest of the system understands. And then it has a planning engine, and it has a policy network that can evaluate the different boards. It has multiple other systems that all hook together. Classic AI stuff; this is real AI stuff. This actually just reminded me of something, but I want to get to that. Just to bring a close to the point: I think this gave people a taste of that. If you're building, you're like, oh, I want to build my own system to do one thing. I want to build a system to answer my emails, the requests for my show, to answer those emails and to put things into a spreadsheet. And like, oh, I can write a program to do that, and I'll use an LLM to help me, and it can be a small one, because that's kind of not the core of it. And suddenly you're exposing people to this idea. I call this vision distributed AGI, where one day you would look around and be like, there are 10,000 bespoke small systems that each do something well.
And if you add it all up, oh, that's a lot of things now that computers do as well as people. And it's a very different vision than Opus 5.9 or, uh, Grok 7, or whatever it is. Grok 7, yeah, it's embodied in a robot with Predator machine guns and it can just do everything. Anyways, all right, back to your point. So this just reminded me: Jack Clark of Anthropic, fascinating character, one of the co-founders. He used to write at The Register, one of the single most critical tech publications in the world; his blogs were extremely critical. I've seen him twice peddle out this example, which he refers to as an evolution simulator, a predator-prey simulator, and he brings it up all the time, using these highfalutin terms. I went and looked this up. It is like a 50-year-old idea. He's like, yeah, I used Claude Code to build it. Yeah, because there are hundreds of them online, hundreds of them that it was trained on. It's just a simulation program, a little simulation that says, okay, we've got bees, and the bees get killed by the bee-eating bears. I'm just making up animals; this is why I can't make one myself. But it's like, all of the different creatures and how they interact. And it's like, yeah, there is a web version of this; it is 20 years old. Yeah, but the way they frame all of these things is like, ooh, a simulation, like the singularity. It's like, no. I feel like the AI era is a mass exploitation of ignorance. They found something where, maybe they didn't know this in advance, but the media won't check anything. The media would just go, yeah, the local claw, it's got a social network, this is AGI now. Every three-gigawatt announcement, it's like, that three-gigawatt data center, that's like three nuclear power plants big, wow. Even though it's not getting built, which I know we're going to get to. It's just, AI as a
term, as you well know, means so little and so much at the same time that they can basically do anything. And I think, combined with the hysteria, they're in a situation where we could literally have another Sam Bankman-Fried situation that we don't know about yet: an AI company could come out and just go, yeah, we've done this. And the Mythos thing is almost that. I know we're not going to get into that, but maybe there's already a scammer out there. This is the environment, the exact environment. Yeah, you could gather a billion dollars easy. Well, I just saw another one the other day: a company that claims it's doing recursive self-learning, and they raised half a billion dollars. And one of the co-founders runs another company called u.com. And you know what's crazy? That is not mentioned in the Financial Times's piece. It's just, grifters have found their meat. This is so much worse than crypto and NFTs, so, so much worse, because the fuzziness of AI allows them infinite time and infinite money to say, well, we still haven't worked out that recursive self-learning. That company, by the way, of course, is still theoretical, like all of them. Yeah, like world models. But no, the 50% job loss, it's next month now. Yeah, that's... I said the wrong month. It wasn't this one. Eight to twelve months, baby, with a margin of error of maybe a hundred percent. Banner headline, 50%, every time. Just read the top thing. Yeah. That reminds me a little bit of something. My oldest plays, I help coach his Little League baseball team, and the pitchers are getting better. They're at 13U; they play on the full-size fields now. And the pitchers are better now, so what they learn is, if I throw the high fastball and the batter swings, I'm going to throw some more high fastballs. And I kind of feel like this is Dario Amodei saying 50% of jobs. He's
rolled this out three years in a row now. It gets covered every time, so he's going to keep throwing those high fastballs as long as the media is swinging the proverbial bat. Yeah, but high fastballs are proven to be difficult to hit, as opposed to LLMs, which have never been proven to take jobs. Ah, there we go. I like it. Baseball's way more fun than AI. If only we'd put this money into baseball. I agree with you there. Yeah, baseball has fewer constant waves of existential dread being poured upon the entire populace. No, they just reserve it for, like, Cincinnati and Pittsburgh and Mets fans. That's right. You're right. Mets fans are like, you know what? Mets fans look to AI for a little bit of psychic relief. They're like, oh, this is not quite as dark as what we're dealing with. Yeah, this isn't punishing me as much. Only half the jobs are going away? Oh, that's not so bad as an 11-game losing streak. Yeah, no. And most Mets fans are like, yeah, I would fire half of them. Yeah, they should be fired. Maybe we should do all of them. Yeah. I hope Juan Soto's the first one on that list. All right. Story number two. For whatever reason, I didn't cover this one as much. I talked to some sources in the surrounding D.C. tech industry, but I want to get your take on this. This is the Anthropic and Department of War story that picked up in February. I'm just going to read a little bit from Dario Amodei's statement that kind of kicked off this whole thing. So he said: Anthropic understands the Department of War, not private companies, makes military decisions. We have never raised objections to particular military operations nor attempted to limit use of technology in an ad hoc manner. However, in a narrow set of cases, we believe AI can undermine rather than defend democratic values. Some uses are also simply outside of the bounds of what today's technology can safely and reliably do.
Two such cases have never been included in our contracts with the Department of War, and we believe they should not be included. And then he lists mass domestic surveillance and fully autonomous weapons. So can you first bring us up to speed on what unfolded, and has unfolded, there, and then what is actually happening? Because I haven't looked at this story as closely, and I find it kind of confusing. All right. So, just before the war in Iran: I think Dario is a savvy con artist. And it's not you saying it, it's me saying he's a con artist. So, some background. Anthropic has been installed with classified access in the U.S. military since June 2024. That's a very important detail. They were used in the Venezuela incursions, whatever you call that; they were used throughout, and are still used in, the war in Iran. So what happened was, Amodei said, and I forget what the conversation was, maybe he instigated it, it's kind of hard to tell, but some conversation between him and the U.S. military was: we're not going to let you use this for mass surveillance of Americans, nor are we going to let you use it to control autonomous weapons. Now, the second one really pissed me off, because you cannot control anything with LLMs. If you controlled a robot with LLMs, it would barely move, because of the processing time. People say, oh, what about on-device? But shut the fuck up, you don't know how these work. That's not how this works. That threw me off as well. I know enough about it that I was like, why are you talking about LLMs? Why autonomous weapons? I think they mean AI in general. No, they meant autonomous weapons. They 100% meant that. I know, because I read every single article and every single statement about this, and every single time: autonomous weapons. And to be clear, Anthropic, in their own statement, said LLMs are not consistent enough to run autonomous weapons. Correct. Thank you, Dario, for your first fact. It makes no sense to run a model based on language parsing and
generation to steer a missile. So I don't understand that. Okay, so the first one, you say, was happening, though? As far as you can tell, they were using these tools as part of intelligence gathering? Sure, they probably were involved somewhere. But the thing is, I can't confirm it. No one can, because Anthropic was already embedded, and they attempted to basically renegotiate the contract post hoc. And I'm not siding with the U.S. military here, but Anthropic tried to say, we're adding these things, and they did it, mysteriously, somehow, just before the war in Iran. It was a few days beforehand. Just to clarify what you're about to say, because I'm looking at this now: in that February statement they're saying mass domestic surveillance and fully autonomous weapons, quote, have never been included in our contracts. So I had been given the impression that these were specifically called out in the contracts, as in, we will not do this. But actually what I'm seeing here is that it wasn't that; it just wasn't discussed at all in the contracts, and Amodei is saying, hey, we never mentioned these two things you might use it for in the contracts, and we want this in the contract. So, okay, go on. But I'm only seeing this now; it's a little tricky. Anthropic 100% had visibility into what the U.S. military is doing. So I would not be surprised, though I cannot confirm it, that they timed this specifically to the war in Iran. Because suddenly there was this insidious, awful thing, and every single person who spoke like this should be fucking ashamed of themselves, I'm disgusted by it, this insidious thing of people saying Anthropic is the ethical company. I saw the hashtags. I saw Katy goddamn Perry being like, I just bought Claude. And it's like, you just paid a company that was actually part of this war. And people were like, well, OpenAI is now
and then Sam Altman slid in and was like, well, we can do whatever. Then Sam Altman claimed that they had actually negotiated something that didn't allow the things that Anthropic wanted. Then it turned out that Emil Michael, from the U.S. military, said, actually, we've agreed to all legal means. To be clear, I don't believe either of these companies gives a shit about any of this. I don't think they care about it at all. But Anthropic had this swell of good press, because people thought they were opposed to the war in Iran, when in fact they were directly part of it. Claude was used during it. Now, how complex was the use? It's probably like, here's a bunch of images, where should we blow up? And it went, here's a school, and they went, oh, just great. And that actually happened. And then there were weird articles that came out saying, actually, Claude didn't do that. Yeah, you can't prove that, mate. What I can prove is that Claude was used in the war in Iran. So, whatever. But your conjecture is that the reason Amodei brought this up was press? Yes. The size of that contract is worth jeopardizing when you're looking at, like, an IPO six months from now? What size of contract? Their military contract is up to $200 million, and the up to is the important operative phrase. $200 million? They lose that much money on inference in, like, two weeks. Yeah. And they're looking to raise... I mean, their valuation is what, in the hundreds of billions? Three-eighty, three-something-hundred billion. They'll probably IPO at 750, if they even make it. But that's the thing. They did it for a hundred-billion-dollar move, in theory. Yes. Yeah. Well, also, the thing is, then the Department of War said, oh, we're going to put you down as a supply chain risk. Nothing happened. Then they were like, it's a supply chain risk, but we're going to keep using you for six months. Then there was a lawsuit.
Then Anthropic sued the Department of Defense, and said, if we don't have this removed, we might die. And then, and this was my full Joker moment: during that motion that they filed, Krishna Rao, the CFO of Anthropic, filed a sworn affidavit where he said that Anthropic had only made $5 billion in its entire lifetime. Yeah. Now, when you go and add up all of the reports of revenue, such as The Information saying $4.5 billion in revenue in 2025, such as Anthropic's own annualized revenue claims, which would mean they made $1.5 billion in the space of a month in 2026, it adds up to way more than $5 billion. I have tried to talk to pretty much every major reporter that covers Anthropic's revenues, and they will not discuss this. It's the most conspiratorial I've felt this entire time. It is like everyone is trying to ignore a fire in a room. And the crazy thing is, that happened, and nothing changed. And then a judge said, actually, Anthropic's right, we're not going to allow the supply chain risk designation. And now, apparently, the U.S. government is using Claude Mythos. So in the end, nothing happened. Yeah. Anthropic got a bunch of completely spurious press around being ethical, despite the fact that they are already part of the military, and they revealed their actual revenues. I mean, it's all good. That revenue story, that is an amazing one. Outside of you, and I covered it, I learned about it in part from you, I found only one article, maybe a Reuters or an AP piece, that talked about this quote-unquote shaky revenue math that's popular in Silicon Valley. So there's one piece I found where a financial reporter actually covered, like, hey, when you hear these numbers, there's a lot of multiplying by 12 or multiplying by 24 going on. But that was a big story. So, for the listeners to understand it: Anthropic had to, under oath,
in a signed affidavit, under penalty of perjury, or whatever you would say in a corporate setting, release their revenues. And it was $5 billion to date, on billions of investment and debt, I think, to date. Yeah, and they've spent $15 billion on compute so far. Yeah, $15 billion on compute so far. The other part of the story I did cover, that I thought was interesting, was the Under Secretary of Defense, whoever that was. Emil Michael. That was Emil Michael, right. And he went on, and it was funny; it shows something about how the online commentary space works. He went on and said, hey, here's why we don't want to work with this product. If you watch him, he's basically saying: this is a product that will say it has a soul, and their company is saying there's a chance it's alive. And what he was saying was, this is a wonky product, right? This doesn't seem like the type of thing you want in a military setting, where you have the CEO saying there's a chance it's alive and it'll say it has a soul. This doesn't seem like a reliable piece of hardware. And what was the online commentator report? Pentagon convinced that Claude has a soul. So they completely flipped the veil. I'm so sick of this. I'm so sick of the goddamn AI bubble. I'm so tired of this. Yeah.
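The "shaky revenue math" mentioned a moment ago is easy to reproduce. A minimal sketch, using the figures quoted in this conversation as given (the $1.5-billion-in-a-month claim, the $5 billion lifetime affidavit number, the $4.5 billion 2025 figure), none of which are independently verified here:

```python
# "Annualized" revenue takes one month's figure and multiplies it by 12
# (or a week's by 52): a run rate, not money actually earned.

def annualized(monthly_revenue: float, months: int = 12) -> float:
    return monthly_revenue * months

# Figures as quoted in the discussion (treated as given, not verified):
one_month_2026 = 1.5e9                 # "$1.5 billion in the space of a month"
headline = annualized(one_month_2026)
print(f"Annualized headline: ${headline / 1e9:.0f}B")  # prints "Annualized headline: $18B"

# Versus the sworn-affidavit number quoted for *lifetime* revenue:
lifetime_per_affidavit = 5e9
reported_2025 = 4.5e9
# If both numbers were true, only $0.5B would remain for every other year combined:
print(lifetime_per_affidavit - reported_2025)  # prints 500000000.0
```

The point of the sketch is that a run rate and cumulative revenue are different quantities, and headlines routinely conflate them.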
I wish... you've not read One-Punch Man, have you? No. Okay, so this is a complex thing, but one of your listeners is going to hear this and love it. There is a character in One-Punch Man called King. Everyone thinks he's the most powerful man in the world because of the King Engine, his so-called power. It's actually his heart: he is so anxious and scared at all times that his heart is going so fast you can hear it. He has no powers; he's a regular guy. But because Saitama, the main guy, comes along and destroys anything near him, everyone thinks King is amazing. And there are multiple times during the story where a bunch of stuff happens around him and people go, wow, they must have all just died when they saw King; King must have destroyed them with the King Engine. This is Anthropic. Anthropic is just this wasteful crap pile of a company, with services that break half the time, less than two nines of service availability now, and models that degrade at random. They gaslight their users; they rug-pull them on rate limits. But everyone's like, Anthropic is hitting capacity because they're so popular and their models are so good. It's like, I'm going crazy, man. At some point, what I'm saying will feed into the mass consciousness, I guess, and at that point I'm going to be insufferable. But every time I hear a story like this, I feel like I'm going insane. What are the main revenue sources, if we're being realistic about it, for these AI companies? Well, my understanding is OpenAI is ChatGPT subscriptions. Yes. Anthropic is the Claude Code API. API, apparently. It's API, yeah. But here's the thing. I'm not accusing anyone of fraud, but Eric Newcomer had a piece about Anthropic's pitch to Coatue, the venture capital firm, and he shared the deck that Anthropic had shown them, and there was a bit where it was like, yeah, 85% of their revenue is API calls
and 15% is subscriptions. Gonna be honest, I don't believe it. I just don't believe that there are four-odd billion dollars of API calls. And OpenAI, apparently, is the other way around: 85% subscriptions, 15% API. What would an API call be? So, for the listener, who's calling APIs? It would be an AI startup; it would be a business that's running its own systems built on top of the API. But even that question kind of gets at what I'm saying, which is: what the hell are you doing with this? Like, I get AI startups that just sell things with LLMs plugged into them, but they're claiming they have all this enterprise use. And what I think it might be, because The Information reported this recently, though I think it's been going on a lot longer, is that Anthropic has slowly started to push enterprise users onto the API, even when they're using Claude or Claude Code. I think that's fairly recent, in the last few months. Right. But I also just think that these companies are making up what they say in decks, because no one can prove otherwise. I want them to go public so bad. Never in a million years have I wanted a company to file an S-1 more. I want to see inside their laundry. I want to go look around. I don't doubt you'll be the first to read those S-1s. I will be smoking a big cigar. It's going to be delightful. Here, before we get to the third story, let me tell you my new term I coined about AI coverage. I just came up with this on the spot, but something else is going on right now that I want to call out, which is what I call dread laundering. What you do is you launder a sense of despair or dread about one thing related to AI to help amplify a less supported feeling of dread or despair about another.
And here's where I've been seeing this recently: I think the technology business case for LLMs somehow being at the core of automating a bunch of jobs or destroying the economy is very weak. And I think there hasn't been a lot of good support for that, because, again, these are just LLMs that we're building better apps on top of, and it's slow going. But there's a dread quota, so how do we fill it if that story is losing traction? So there's a lot of other coverage going on about the destruction of the arts: writing is going to disappear, movie-making is going to disappear, education is falling apart. And you put that next to Dario Amodei talking about jobs or this or that, and you're laundering the dread from, oh, we have a text generator and people are going to be lazy and try not to write text, which is a real story, and an annoying one that, as a writer, I don't like. And you launder that dread over to all these other bad things. If we're worried about that, it kind of justifies the dread in general. So, also, maybe my job is going away. Maybe the Terminators are coming. And I really wish these were separated, and that you could have one argument about: we have automatic text generators; that brings up a lot of problems for people who produce text for a living; let's talk about it. And then we have, over here, this claim that an LLM is going to take over an executive job, and those claims fall apart under scrutiny. It's really hard to make a compelling case over there. But if you throw enough darts at enough things, you create a miasma of unrest in which it's hard to make out what the actual signals are. So it's just everything, a pox on all the houses. Everything is terrible.
So that's my... No, I fully agree, and I also think that doomer porn clearly gets clicks. It's just that I think, when this is all over and the bubble bursts, every single person who engaged in it should lose their job, across the board. I know it sounds aggressive. Everybody who engaged in the doomer porn, and yeah, there are some people who tried to do it in good faith, but the ones, like the Axioses of the world, who genuinely sat there and fomented dread: they shouldn't be allowed to work in journalism for a minute. They should take a knee. They should step aside for people who actually live in the real world. It is a problem that we have to address, and maybe I talked about this on your show earlier this week, but I'm hearing from listeners and readers who use terms like: I'm stuck in a cage, having wave after wave of despair or dread crash over me, with no hope of escape. There's a responsibility aspect to it, right? It is difficult for the normal person to be hit again and again, from all different angles, with: well, what if this is terrible? What if this is terrible? What if this is terrible? And we're wired for a where-there's-smoke-there's-fire mindset. It's really, I think, been very unsettling. I get unsettled by it, and I actually know the technology and know that 98% of this is really not well supported. But it's just emotionally difficult to have to immerse yourself in wave after wave of everyone putting their full attention on: what angle can I find that makes this seem the worst? That's always the angle things are coming from. It's never from: well, this doesn't make sense. What happened to that? Where's all this revenue? Hey, what about this story from three months ago? Nothing happened of it.
I mean, there was a guy who posted a video that I made fun of, and then he attacked me. When OpenClaw first came out, he said, literally, the singularity is here: in the next few days, this is it, look at this graph, line goes up, next few days, singularity is here. And I kind of made fun of him, and then he recorded a whole video attacking me about how crazy my takes are. And I just want to say: okay, it's been four months. I don't see the robot army that was supposed to be here in a couple of days. You never follow up on where you were. Yeah, where's UltraClaw? Where's the Claude bot that's going to chop my goddamn head off? But that's the thing. I think there is an actual theme above all of this, one that sits outside the AI bubble as well, which is short-term memory versus long-term memory. People say stuff, things happen, and then they forget about them entirely. Like, remember the Claude Code marketing push at the beginning of this year? It was The Atlantic that said this is the ChatGPT moment, and it had all sorts of people building useless apps. There was that whole surge of support. And now Anthropic is actively throttling their services, they are making their models worse, they are cutting off OpenClaw. Nothing. Yeah, no coverage. None of the people... because here's the thing I have with AI boosters: even if they fundamentally disagree with me about the economics, they don't even seem to engage with the problems. I don't mean this in an antagonistic way. I mean, if I was a pro-AI person, if I had a big piece of metal in my head or something, and I saw some dickhead British guy saying, hey, they're losing billions of dollars, I would at the very least be like, I should probably look into this. I should probably make sure. And if I really liked this stuff, and I saw the companies screwing over their customers, I'd be like, wow, doesn't that change the
story a bit? Nope. Mainstream media, and honestly a lot of independent media, just goes, yeah, you know, something will happen. When it comes to the doom, they will extrapolate as far as they need to. When it comes to the capabilities, they'll go, yes, it's going to be this powerful. But when it comes to the things happening in real life, they're like, it's complicated. Yeah, you know, things happen. It'll be all right, though. And when I say all right, I can't really tell you what that means, but when I say all right, I mean everyone's going to make money, but not me; the companies, who I love for some reason. It's so weird. That part confuses me too: the media class, which I'm a part of, hates all billionaires except for, like, these three. Yes, exactly. I don't get that part. All right, story number three. This is in your wheelhouse. It has to do with the reality of the data center boom. I'll read you a quote from a Futurism article that includes you in it, so be prepared. Nice. The data centers powering your favorite AI chatbot are running low on helium, cash, and neighbors who don't hate them, and that's not even the worst of it. According to reporting by Bloomberg, about half of the data centers slated to open in the U.S. in 2026 will either face delays or outright cancellations. The publication interviewed analysts at market intelligence company Sightline Climate, which, in research first flagged by Ed Zitron last week, noted that 12 gigawatts' worth of power-consuming data centers are set to open in the U.S. this year. But here's the catch: they say only a third of those are actually under construction right now, with the rest in a liminal pre-production stage, in which they could, and likely will, be canceled.
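The fractions in that Futurism/Sightline passage are quick to sanity-check. A small sketch of the arithmetic, using the figures quoted in this conversation as given (12 GW slated for 2026 with a third under construction; 115 GW announced through 2028 with 15.2 GW under construction), none of them independently verified here:

```python
# What share of announced data-center capacity is actually being built?
# All inputs are figures quoted in the conversation, not verified numbers.

def under_construction_share(announced_gw: float, building_gw: float) -> float:
    return building_gw / announced_gw

# 2026 openings: 12 GW slated, only about a third actually being built.
slated_2026 = 12.0
building_2026 = slated_2026 / 3
print(f"{under_construction_share(slated_2026, building_2026):.0%}")  # prints "33%"

# Through end of 2028: 115 GW announced, 15.2 GW under construction.
share_2028 = under_construction_share(115.0, 15.2)
print(f"{share_2028:.0%} of announced capacity is being built")  # prints "13% ..."
```

So on the quoted numbers, the longer-horizon pipeline is even thinner than the 2026 one: roughly one gigawatt in eight of the announced capacity has broken ground.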
There's a huge story going on here that's not being covered outside of Bloomberg and the places where people really need to monitor it, like the private credit markets, because it could affect their investment portfolios, but it's not broadly known beyond that. What's going on with this illusory data center boom? So, every time you hear someone say, we're building a two-gigawatt data center, real simple: just say, no, you're not. No, you're not. We don't know how long it takes to build a one-gigawatt data center, because no one has built one. I know that sounds crazy. No one has built one. But once again, and I'm going to name MacKenzie Sigalos at CNBC specifically, she has laundered the reputations of these companies. Because here's what happens. Stargate Abilene, OpenAI's 1.2-gigawatt project: they opened a single data center in September 2025, and then what was published was that Stargate Abilene was operational. Project Rainier, a 2.2-gigawatt data center in Indiana for Amazon: fully operational. That's a quote from Amazon. No, it's not 2.2 gigawatts. They claim to have half a million Trainium 2 chips, at 500 watts apiece; that's about 250 megawatts. They claim they're up to a million now; that's 500 megawatts, a lot less than 2.2 gigawatts. Because data centers take forever to build, we do not have the power. And people are saying, well, the power's getting built, that proves they're going online. The problem isn't that the power doesn't exist at all; it's that the power doesn't exist at the point of need. So, Sightline Climate. I actually caught up with them for a recent newsletter, and they said that of the 115 gigawatts of data centers meant to come online by the end of 2028, only 15.2 gigawatts are actually under construction. Now, this is really weird, because I did the math, and this is napkin math, forgive me. These facilities have a PUE, the efficiency ratio; call it 1.35. When you use that, you
take that 15.2-gigawatt figure and divide it by 1.35, and it's about 10 gigawatts of pure GPUs. That's about $285 billion worth of NVIDIA GPUs. Why am I saying this? Well, NVIDIA claims they have visibility into half a trillion dollars in GPU sales by the end of 2026, and a full trillion by the end of 2027. Where are they going, Jensen? Where are the GPUs going, Jensen? Where are they? Is it just that a billion people are building custom video-gaming rigs at home? Come on. Well, actually, I think I know where they are. I think they're in Taiwan. Because it's very weird: what this means is that NVIDIA has already sold too many GPUs. It has already sold more GPUs than there are data centers being built for them. It's crazy. And I bring this up with journalists, I bring this up with economists, I bring this up with tons of people, and they're like, it's fine, they're being built, what are you talking about? I'm like, look at the data. And it's always a weird wave-off. But this is the largest company in the stock market. I think their total revenue from the last few years is over $300 billion, and they're claiming they'll hit half a trillion by the end of the year; I think that's just for the year. They keep saying these numbers that don't match up. But let's say they're true, and NVIDIA beats and raises, so they beat their earnings estimates from analysts again. I think we need to start asking a real question about what NVIDIA is doing with these GPUs. Because, talking to some hyperscaler accountants I know, there is a way they could be doing this where they're able to book the revenue without shipping anything. It's called a transfer of ownership: you just sign a contract saying, yeah, you own these GPUs, they're sitting in my warehouse, but they're yours. And that counts legally. That's
perfectly legal. It's very strange, and if they're not disclosing it, they should be filing an 8-K. NVIDIA's inventories are growing on their earnings reports as well, a sign that something's being warehoused. But I spoke with a few sources, and here's how it works: when a hyperscaler, say Microsoft, buys GPUs, they don't buy them from NVIDIA directly; they don't go, send me a GPU, I'll put it in a server. What they do is work with an ODM, an original design manufacturer. The ODMs build the servers and put the GPUs in them: Quanta, Foxconn, also known as Hon Hai Precision Industry. Hell yeah, I wish we had more normal names. Anyway, there are all sorts of companies out there. What they do is pass the cost of the GPU through as revenue. The revenues of all of these ODMs are going up crazy-style, because they buy the GPUs from NVIDIA, put them in a server, sell them to a Microsoft or an Oracle or a Meta or an Amazon, and say, yeah, it costs this much, with the cost of the GPU in there. This allows NVIDIA to hide a great deal of GPUs, because they're sitting in Taiwan. Quanta's inventories went up last quarter. I don't know if that's categorically because nobody's buying them and they're not being shipped, but for the most part I think NVIDIA is just pre-selling years of GPUs, and I don't know why this is not scaring people. Michael Burry brought it up, briefly, weeks after I did, just to be clear, and no one seems concerned. When, in fact, if there are only 15 gigawatts of actual capacity being built, and about 10 gigawatts of that is GPUs, NVIDIA can't sell more GPUs unless it wants to put them in a warehouse. But to the larger abstraction of data centers not getting built as well: we're dealing with fraud, then.
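The napkin math from a moment ago can be written out explicitly. The PUE of 1.35, the 15.2 GW under construction, and NVIDIA's claimed half-trillion of visibility are figures quoted in this conversation; the per-GPU wattage and price below are illustrative assumptions of mine, which is why this sketch lands near, but not exactly on, the $285 billion mentioned above:

```python
# Napkin math: how much GPU hardware fits in the capacity actually under
# construction, vs. NVIDIA's claimed sales visibility. Capacity and PUE are
# figures quoted in the conversation; watts/price per GPU are assumptions.

PUE = 1.35                      # total facility power / IT power
under_construction_gw = 15.2    # Sightline figure quoted in the discussion

it_load_gw = under_construction_gw / PUE       # power available for the chips
print(f"IT load: ~{it_load_gw:.1f} GW")        # ~11.3 GW ("about 10" napkin-style)

watts_per_gpu = 1_000           # assumption: ~1 kW per accelerator, all-in
price_per_gpu = 30_000          # assumption: ~$30k average selling price

gpu_count = it_load_gw * 1e9 / watts_per_gpu
gpu_value = gpu_count * price_per_gpu
print(f"~${gpu_value / 1e9:.0f}B of GPUs fit in what's being built")

nvidia_visibility_2026 = 500e9  # "half a trillion by the end of 2026"
gap = nvidia_visibility_2026 - gpu_value
print(f"Claimed visibility exceeds buildable demand by ~${gap / 1e9:.0f}B")
```

Whatever per-GPU assumptions you pick in a plausible range, the claimed sales visibility comes out well above the hardware the under-construction capacity could physically house, which is the speaker's point.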
If we've got a hundred-and-something gigawatts of data centers being announced, but only 15 of those are actually under construction, and "under construction" could mean anything, it can mean a scaffolding yard, which is the case with Nscale's data center in Loughton, England, then that means fraud. That means someone is doing fraud. That means people are not actually building things, that people are likely buying land and speculating that a data center might get built there. Perhaps they'll file some planning paperwork, paying their CEO six figures the whole time. Fermi is a great example, Rick Perry's Fermi, building an 11 gigawatt data center out in, I forget where, it's Project Matador. Don't worry though, they're not building anything. They have a patch of land, the CEO just left, and they apparently didn't pay their contractors. And this is the thing: everyone's talking about the AI boom with all this certainty, but the actual proof that things are happening isn't really there. In fact, I did the maths, and it turned out that over 50 percent of the data centers under construction through the end of 2028 are for OpenAI or Anthropic. And every time Anthropic announces something, they just announced a 3.5 gigawatt deal of Broadcom chips, where are they going? Where are they going? No one asks, no one thinks, no one tries, and the answer is they're not going anywhere. These chips probably will never get bought. Okay, so let's walk through this a little bit. It sounds like NVIDIA is selling them to these ODMs, right? So the ODMs are basically saying, we're getting contracts, so we'll keep buying chips, because there's a lot of money in this market. I just want to be clear, this is how it's always worked; this is not a weird thing, this is how they build at scale. Continue, sorry. Because there's a lot of interest in AI, there's a lot of money that's raisable in AI, so you have a lot of entities saying, I want to raise money for AI projects. This is
leading to a lot of entities going to these ODMs saying, hey, we want to buy X number of chips in servers, but then there's nowhere to put them. Actually, I think I've muddied it up a bit. Okay, so you've got two stories. The ODMs: when a hyperscaler like Microsoft, who did 37.5 billion dollars of capex last quarter, buys servers, they buy from the ODMs. Yeah. The ODMs then put them in a warehouse in Taiwan and say, okay, when you're ready for the data center, let me know. Yeah. And the data centers are taking longer, or are harder to build, than people realized. They've raised the money, they've made the orders, there's nowhere to put them, so the warehouses are piling up. NVIDIA is like, hey, put them wherever you want, we're getting our paychecks; you can put them on a hot air balloon, we don't care. The dodgy thing with NVIDIA, though, is that it's unclear, because we're talking 100 billion dollars plus of GPUs that have been sold but have nowhere to go, which raises the question of whether they're leaving NVIDIA's warehouses at all. Yeah. Because NVIDIA could do an accounting treatment that just goes, yep, this is yours now, and it stays here. But completely separate to that: Microsoft, Amazon, Google, their data centers are being built, though they're taking forever, and even then there's not enough capacity to install these GPUs. Then, completely separate to that, over 100 gigawatts of data centers have been announced that are just not being built. Yeah. And those are more than likely not hyperscaler ones; they are more than likely random fly-by-night operations. They're companies like Nebius, Nscale, IREN, these former crypto companies. Are they spending money? I know they're raising money. Are they spending money with the ODMs, meaning there are chips somewhere in a warehouse, either the ODMs' or NVIDIA's, that they paid for and just have nowhere to put? Or are they just raising money and paying salaries until it fizzles out? A little of column A,
a little of column B. Hard to tell. I wouldn't be surprised if it's both. A CoreWeave, for example, they buy from ODMs, and from Dell and Supermicro, who recently had a co-founder arrested for selling chips to the Chinese, so that's cool. But yeah, I think there's a lot of: yeah, we're building a data center, ah, you know, business is rough, we've just got to find the land, now we've got to find the power, that's going to take another three months, and I'm going to need to make 650,000 dollars a year. In fact, that's probably a fun exercise: go look at the companies in question and see what their executive compensation is. But then there's also just the problem that data centers are hard to build. Yeah. Well, this at least rhymes with the housing crisis, though the magnitude is a little bit smaller. Tell me if I have this right. My understanding of the financial crisis of the early 2000s is: okay, we have these financial products, these mortgage-backed securities. And people want in on those, right? Because they're making a lot of money selling these, they're making a lot of money reselling these. But you kind of ran out of mortgages. Everyone still wanted in, but there were no more mortgages to put into the mortgage-backed securities. So we say, well, we'll make these credit default swaps, and we'll build derivative products on top of these. We just need things we can keep selling, because there was more money that wanted to be spent here than there were things to actually spend it on.
And of course, once you had built out this giant house of cards, built on leverage and bets on bets on bets, when the middle of the house couldn't support it, the whole thing fell down. This feels like a simpler version of that. There's a lot of money out there that's like, we want to get into AI too, because every 16 seconds we're getting an article about how it's the most powerful technology ever and it's about to take over and take all of our jobs. So there's a huge amount of money that wants to go into AI, but there's not actually enough places to put it. That seems like a summary of what's going on. Literally, there's not enough land and buildings that can take the chips. So we have all this money being spent, and NVIDIA seems to be collecting a lot of it, but there's nowhere to put these chips. That seems to be what you're saying. It's just way more money that wants to go into this market than there are actual investable assets to put it into. And so shenanigans follow, and you get a very fragile system.
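The warehouse dynamic described above can be sketched as a toy inventory model: if GPUs are sold faster than finished data centers can absorb them, the backlog grows every quarter and never clears. The rates below are invented purely for illustration; they mirror the claim made later in this conversation that installing one quarter's worth of GPU sales takes roughly six months.

```python
# Toy inventory model of GPUs sold versus GPUs installed.
# Rates are invented for illustration, loosely matching the claim that
# installing a quarter's worth of sales takes about six months
# (i.e. install rate is half the sell rate).

sell_rate = 1.0      # one "quarter's worth" of GPUs sold per quarter
install_rate = 0.5   # half a quarter's worth installed per quarter

backlog = 0.0
for quarter in range(1, 9):  # simulate two years
    backlog += sell_rate - install_rate
    print(f"Q{quarter}: backlog = {backlog:.1f} quarters' worth of GPUs")
```

Whatever the real numbers are, the structure is the point: as long as the sell rate exceeds the install rate, warehoused inventory grows linearly and the gap can only be closed by selling less or building faster.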
And this is why we're worried that the private debt market is beginning to teeter a little, because these investments aren't returning. NVIDIA has so much of this money coming in with nowhere to put it. That feels like the core of the instability. So what happens when some of these contracts fall apart and NVIDIA, which is some meaningful percent of the stock market, takes a fall? Is that the right way of seeing it, that it kind of rhymes with the financial crisis in that sense? So here's the thing: I don't think it will be as bad. It's not as much money at stake, by far, and it's not derivatives, it's not bets on bets on bets. So it's simpler. Not yet. Yeah, and that's the big thing: it's not derivatives. The big scary thing in private credit, something like 30 to 40 percent of it, is related to the software industry and software debt, which is a whole separate subject. You are right that there is a massive amount of speculation happening here. To quote Gordon Gekko, I think from one of the Wall Street movies, someone correct me on whether it's Wall Street 2: Money Never Sleeps: speculation is the root of all evil. But it's weirder than that. This is unlike anything, because it's very centralized around NVIDIA, and NVIDIA's continued value, NVIDIA's load-bearing seven or eight percent of the stock market. It's also very weird that it's one company effectively doing it. But there are hundreds of billions of dollars of data centers that are allegedly getting built, and maybe half of that, maybe 75 percent of that, is funded by debt. The thing that's scary about the private credit industry is that much of private credit is funded by retirement and insurance money. Right now I don't think data centers make up a ton of private debt, at least not a load-bearing part. Yeah. I will say the actual housing crisis comparison I make is venture capital, and it's actually not related to data centers at all. So what it is: AI venture capitalists
get paid, sometimes, as a percentage of the fund's value, the assets under management, like any kind of asset manager. So AI companies right now are great for them, because the number constantly goes up, so far, so big, so huge, because AI valuations are frothy right now, and everyone has these AI companies. And in the subprime mortgage crisis, the way people waved away the fact that your interest rate was going to change in a year or six months was to say, well, I'll just refinance. Yeah. In the case of AI startups, Elad Gil, famous venture capitalist, said yesterday that all AI startups should look to exit in the next 12 to 18 months. And it's like, okay, well, why would anyone buy them? Most AI startups are just wrappers for models, and you can't take them public because they lose a bunch of money. The subprime AI crisis I talk about is partially that companies can't outrun their costs, because the costs keep going up, and partially that you've got, what, 200 or 300 billion dollars worth of venture capital tied up in AI startups that can't be sold. Right, and how does that connect to data centers, exactly? Well, data center customers are predominantly AI startups, predominantly two of them, Anthropic and OpenAI, but others as well; Cursor just signed a deal with xAI to rent GPUs, for example. What happens when all of those die? Who's going to pay your data center bills? Also, all the data centers are deeply unprofitable because of the horrible debt they require. So, like you said, it rhymes, but it's not like-for-like. And again, more people should be thinking about this. Even the AI boosters should be thinking about this, because this is an existential threat. This is not just Ed being a hater. The maths doesn't make sense. There's not enough space for the GPUs to get installed; there aren't even things being built for half of them. If NVIDIA sells half a trillion dollars worth of GPUs in the
next year, they're not going anywhere. Yeah. In fact, I worked out, mathematically, based on their last quarter, that it takes six months to install a single quarter's worth of GPUs, and I actually think it takes longer now. At some point this falls apart, and everyone's going to act as if it was a big surprise, and they shouldn't: the warning signs were there from the beginning. Right, right. They literally cannot keep selling that many GPUs, because there's nowhere to put them and you're building up such a supply. So then, it looks like what's inevitable, financially speaking, is two things. There's going to be a stock market hit, and a retirement fund and insurance hit, when this game of musical chairs stops, which will probably lead to much more financial scrutiny, probably regulation on accounting within these companies. And then when the venture capital firms take the hit of, oh, we couldn't exit these companies, which we were never otherwise going to get an exit out of, because again, it's hard to build a useful, profitable AI product, you're going to get an AI winter. They're going to say, well, forget this, and you're going to have a few years where it's going to be very difficult to get AI investment. So, yeah, I would actually reframe that slightly. I think you're right about the stock market stuff. When it comes to the AI startups, I think what's going to happen is a fire-sale moment. It's going to be a panic. You're going to hear about an AI startup, maybe Perplexity, maybe Lovable, that needs to sell, like, we need to get this out the door. A funding round will fall through, then an acquisition path will fall through, and the moment it becomes obvious that AI startups are trying to sell, everything will start collapsing. VCs will have to start telling their investments: sell, sell right now, get out of
there. Except, when you look historically, AI startups do not get acquired. Windsurf, the AI coding company, acquired by Google? Nope: they paid two billion dollars for three people, the rest of the company got sold to Cognition for a couple hundred million dollars, and most of the staff got laid off. Inflection AI to Microsoft, a billion or so dollars, which mostly went to investors and to Mustafa Suleyman. Character.ai, bought by Google for several billion dollars, except that mostly went to the founders, some of the team, and of course the investors. The actual products are not getting acquired; the actual IP doesn't exist. So when these things come to exit, I don't think it's going to be pretty at all. Right, and in fact it's really easy to clone most of these companies, because they're just wrappers around LLMs. And the top minds, which is what is actually being acquired, have pretty much all been snapped up. There aren't that many truly innovative researchers left in this space who are doing startups; the big companies have snapped them up for the most part. That's the issue. You know, Demis Hassabis got snapped up by Google, the whole company got snapped up. There are only so many of these big academic research minds, and they've all, for the most part, been acquired, and sometimes it was very expensive to do: you had to buy and shut down their company to get them. But I hear your point. The problem is, if you're a VC, you cannot assume that anything you fund will also get bought for a billion dollars because your founders are so brilliant. Actually, the brilliant founders are, for the most part, already under contract with these companies, and there are only so many of them. And if what you really have is the product, well, and this is a point, and then we'll wrap it up after this, but I do think it's an important one: we don't really know how to build very useful,
profitable products. That's the odd thing about this space. Well, there are a couple of popular products. The various coding harnesses, Claude Code et cetera, are popular among programmers, but that's not a particularly profitable product space; they're expensive to run. The chatbots are popular in the sense that they have lots of monthly active users, but I don't imagine those are particularly profitable either, just because of the compute cost of people using them. And that's kind of it, I think. That's the problem. It's very difficult to have your wrapper company actually be a large concern. So that's interesting. Yeah. So that could be the story that underlies all these other stories. And if it's true, I think it'll surprise a lot of people, because it's going to go from "biggest technology ever, about to conquer everything" to "AI winter, stock market collapse, never mind." Yeah. If that happens, that will be an interesting moment. I think there's going to be a lot of frustration among the American populace, of, wait a second: you spent two years trying to scare me. You spent two years of, forget COVID, COVID is cold and flu season in terms of disruption; this is World War II-level impacts on our country. If it does fizzle, and not only fizzles, but the conclusion of it is that everyone's retirement portfolio has taken a hit, that's not going to go well. It would have political ramifications here in the US. I think you're going to see political parties rebuilding around how they think about these technologies. And maybe it won't happen.
But I am confident about my AI startup thing, because every single AI startup is a wrapper around a model owned by someone else. And the core thing, and then we can wrap up, I apologize, is that you cannot control the cost of a user with an LLM. You can't do it. Yeah. And your most excited customers are your most expensive, which is antithetical to how a business works. And also, all of them are unprofitable. Yeah, this is very different from, even though the SaaS model is now falling apart for other reasons, it's a very different situation. What made that tech sector so desirable was this idea that you can just scale up profits almost infinitely. Everyone who pays 20 dollars a month for this is 18 dollars of profit, and we can handle an unlimited number of users. And that, of course, got a lot of private equity eyes bigger than their stomachs: oh, great, we'll just build giant sales teams, and look, if the line goes up like this with 10 salesmen, what if we have 100? But at least the underlying profit mechanics made sense: it costs us negligibly more to have 100,000 users versus 1,000, and it's massively more income. This is very different, you're saying, with LLM-based AI. It's actually very expensive to service the users, and the more they use it, the more expensive it becomes. And that's a hard dynamic. Yes. And more users doesn't make it cheaper. No, more expensive, in fact. Yeah, it's unlike a gym or something, where it's the more the merrier because very few people actually use it; it's the opposite. All right, Ed, a pleasure as always. Thanks for having me. Yeah, you always bring out the radical in me, but I think we've got to balance things out. People are hearing the strongest boosterism all day long, so it's good to check back in on some of these stories and give a less impressionable take.
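The inverted unit economics in this exchange can be made concrete with a toy comparison. All the numbers below are invented for illustration: a classic SaaS product with a small, flat per-user serving cost, versus a hypothetical LLM wrapper whose per-user inference cost exceeds the subscription price.

```python
# Toy unit-economics comparison. All figures are invented for
# illustration and are not real numbers from any company.

def saas_margin(users: int, price: float = 20.0, serve_cost: float = 2.0) -> float:
    # Classic SaaS: serving cost per user is small and flat,
    # so gross profit scales up with the user count.
    return users * (price - serve_cost)

def llm_wrapper_margin(users: int, price: float = 20.0, inference_cost: float = 25.0) -> float:
    # Hypothetical LLM wrapper: per-user inference cost exceeds the
    # subscription price, so losses scale up with the user count.
    return users * (price - inference_cost)

print(saas_margin(1_000))         # 18000.0: more users, more profit
print(llm_wrapper_margin(1_000))  # -5000.0: more users, bigger losses
```

The sign of the per-user margin is the whole story: in the SaaS case growth compounds profit, while in the wrapper case growth, and especially heavy usage by the most engaged customers, compounds the loss.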
So we'll have to do this again soon, because, unfortunately, there will be no shortage of new stories we're going to have to react to. Thanks for having me, man. All right. Talk soon.