This is an iHeart podcast. Guaranteed human. Cool Zone Media. Hello and welcome to this week's Better Offline monologue. I'm your host, etc. Sorry I missed last week's monologue, everyone. I was sick, and in truth I'm still kind of sick, but the work is important. I also want to thank all of you who have sent very kind words via the email inbox. I've read them all, and I really do appreciate you so much.

And actually, that's a good place to start. Everything's rough. As you know, you've seen everything. The horrors are many. Remember to be there for the people close to you. And as isolated as the world might make you feel right now, there are millions of people feeling exactly like you. Our subreddit, r/betteroffline, is a great place to start. I assure you that if you feel that isolation, the castigation, the irritation at the push to put AI in everything, you're anything but alone. We are all feeling it.

I'll admit I've been kind of reeling myself. I've had family stuff to take care of, all while being sick for the best part of what is now three weeks, I think. It's tough to take a break when you have so much to do, especially when the stuff you have to do involves keeping up with current events, and motherfucker, we are in an event-heavy era. I missed the premium newsletter for the first time in months and the monologue for the first time since we started doing them, and I felt guilty, which is ridiculous. You'll tell me off for saying it, but this show is built on sincerity, so I'm going to say it anyway: give yourself a little bit of a break. Everything is fucking rough. Not me, though, not taking one of those. Don't get those. Now, as I wrote in this week's newsletter, which I called The Beginning of History, things appear to be reaching a breaking point. Let's catch up.
A few weeks ago, Anthropic and the Department of Defense got in an argument because Anthropic wouldn't allow them to use Claude for domestic mass surveillance or to control autonomous weapons, the former of which is a stretch of what LLMs can do and the latter of which is totally outside their capabilities. As a result of this flimsy defiance, Anthropic was designated a supply chain risk by the Department of Defense. This immediately led to a depressing amount of people posting bullshit: "we stand with Anthropic," yes-we-Claude commentary, suggesting that this company was in any way ethically opposed to blowing things up.

Let me be blunt about how wrong you are if you think this way. Anthropic's Claude LLM was used in the war in Iran. It isn't clear how, but it's more than likely it was handed a bunch of data, coordinates, images, targets and so on, and asked what to do. This does not mean it is powerful or accurate, or really anything other than a means of escaping the responsibility for choosing who to kill. Dario Amodei, make no mistake about this, loves war, enables war and is a full supporter of the US military, and I quote, "using AI to defend democracy," which can mean literally anything that America wants it to. Just look at the history of fucking America.

Similarly, Clammy Sam Altman immediately swept in to take Anthropic's business once the DoD kicked Anthropic out. I won't go into the entirety of it because it's weeks old, but from what I can tell, ChatGPT will support "all legal uses." I really just fucking despise everyone involved here. I think they're absolutely disgraceful. I think Sam Altman's mewling bullshit online, claiming he didn't actually agree to "all legal uses" and would go to jail, is exactly that: bullshit. You know what, Sammy? If you go to jail, I'll come visit you, you little shit. Yet something interesting happened as a result of all of this bullshit.
Anthropic, who, again, were designated a supply chain risk, filed a lawsuit against the Department of Defense to fight it, and in doing so had to include a signed and sworn affidavit from its chief financial officer, Krishna Rao, which revealed, and this stunned me, that Anthropic has had five billion dollars in lifetime revenue to date, with "to date" referring to March 9th, 2026.

Now, this flies in the face of basically every reported revenue figure, including The Information's story, and I like The Information, I pay for it, but this is important, that Anthropic had made four point five billion dollars in 2025, and Anthropic's own statement that it hit fourteen billion dollars in annualized revenue on February 12th, 2026, which works out to about one point one seven billion dollars a month. Interestingly, Anthropic also revealed it spent ten billion dollars on model training and inference, which is the process of creating an output, in the same period.

Now, the exact phrase was "exceeding five billion dollars to date." And my God, my God, if your argument there, boosters, is "oh, well, that could mean six billion or seven billion," shut the fuck up. I'm sorry. Can you please put down the Sherwin-Williams you've been eating for a fucking second and think? Anthropic is incentivized in this case to say it's making a lot of money, because all of this is in a filing begging the government not to remove its ability to monetize public sector work, which is pretty much war. I really need to be clear, I've been arguing with people about this for days: if you add up all the annualized revenues, you get six point six six billion dollars, spooky, probably more if you include missing months and such. It's very obvious that Anthropic is misleading people, because, I don't know, I trust the CFO in a sworn affidavit far more than the leaks of annualized revenues. And I don't see the same alarm in the tech journalism world or the business world. I just don't see it.
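The arithmetic here is easy to check yourself. A quick sketch, using the figures as stated in the filing and the press reports discussed above (none of them independently verified here):

```python
# Back-of-the-envelope check of the Anthropic revenue figures.
# All dollar amounts are in billions and come from the affidavit and
# press reports described in this episode; they are claims, not audited data.

annualized_feb_2026 = 14.0            # claimed annualized run rate, Feb 12, 2026
implied_monthly = annualized_feb_2026 / 12
print(f"${implied_monthly:.2f}B per month")   # ~1.17

lifetime_revenue = 5.0                # CFO affidavit: revenue "to date" (Mar 9, 2026)
reported_2025 = 4.5                   # The Information's reported 2025 figure
# If 2025 alone really was $4.5B, a ~$5B lifetime total leaves almost
# nothing for 2023, 2024, and early 2026 combined:
leftover = lifetime_revenue - reported_2025
print(f"${leftover:.1f}B for every other period combined")
```

If both numbers were true, every month of the company's existence outside 2025 would have to fit inside roughly half a billion dollars, which is the tension the affidavit exposes.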
I don't see anyone giving much of a fuck about this, despite it very likely meaning everybody has been misled for years. Let's simplify. Anthropic has raised sixty billion dollars, with thirty billion dollars of that arriving on or around February 12th. It has made five billion dollars in revenue all time and spent ten billion dollars on training and inference costs alone. Just think about that for a second. Previously, before that new money, they had thirty billion dollars. They spent ten billion on inference and training and they made five billion dollars of revenue. So, I guess, tens of billions of dollars just fucking annihilated there. This company is a dog. It's very obvious that its leaked revenues have either been inflated or are outright lies. And in the end, Anthropic is just kind of a piece of shit.

You know what? It's time for a little rant. And while I think you'll all enjoy this, this is really targeted at those of you that are yet to be swayed by my arguments. I hear all of this crap about AI changing everything. But where's the proof? Wow, Anthropic managed to turn thirty billion dollars into five billion dollars and start one of the single most annoying debates in history. Where's the money? Who is actually making a profit out of AI? Nvidia? The companies that make RAM? Because it doesn't seem to be the companies buying the GPUs, and it doesn't seem to be the AI companies either. I don't think it's true, but if you believe the boosters, you believe that coding is truly being automated away. To what end? What are the actual documented economic effects we can point to? And what are the actual meaningful changes to the world? Also, are you not a little bit concerned about how much code might be written that people do not read, let alone understand? Because I'm learning a little bit about code right now, very slowly learning to code, and the more I learn, the more I realize that it's important to understand what your code fucking does. But you know what?
AI boosters, if you're listening, and I don't know how many of you do, but please: if you talk about the magic of AI and why we should be excited about AI, I need you to start talking about today and use real data, something from today. Please. You are legally banned from saying the words "soon" or "in the future." All of your stuff has to be in the present. You should be able to point to one thing from today, from today's models, that even remotely justifies burning nearly a trillion dollars, and filling our internet full of slop, and creating the moral distance from an action that might have blown up a fucking school in Iran, and empowering the theft of millions of people's work, and having to hear every fucking day about Sam Altman and Dario Amodei, two terrifyingly boring and annoying oafs with no culture and no whimsy in their wretched little hearts. Oh, wow, so you can code a clone of an open source software project, all set up with an LLM that may or may not get the code right. Do you really need this? Is this really impressing you?

I want to be clear that my position is sincere. I do not see a path out of this for large language models. I have sat and thought about how I might be wrong far more than I've searched for how I might be right.
This industry cannot sustain itself and is not in any way trending towards viability, let alone profitability. This is not a game for me. I am not on a team. I think you are all cresting the wave of the end of the software industry's growth era. I think you think Sam Altman is your friend. I think you think OpenAI is your sports team. I think you think large language models are something you need to align with. I think you think this is fucking fun. It isn't. It's a waste. OpenAI and Anthropic are not selling great software, and even if you somehow think that they are, their software business sucks. Absolute arse. Anthropic spent two dollars for every dollar it made just on training and inference, and that's before sales and marketing, that's before real estate, and that's before the actual people. That's not just crap. It's shit.

How much of what you love about AI is actually rooted in the present? Are you having fun? I'm having fun, because I enjoy writing and recording my podcast. If I worked in the technology industry full time right now, I'd want to cry my fucking eyes out. It's a depressing, ugly time where bosses talk endlessly about stuff that doesn't work and force their workers to push it on their customers, all while losing money. It's a fucking cult built on debt and theft, and I'm sick of hearing about it, and even more sick of hearing that I should be scared of it. I'm not scared of AI. I'm scared of the financial apocalypse to come.

I recently read a terrifying stat from Apollo Global Management's John Zito, who said that between 2018 and 2022, software accounted for 30 to 40% of private equity leveraged buyouts. This was the specific era in which venture capital stopped providing reliable returns and private equity's own growth started to slow. Worse still, the largest software leveraged buyouts of 2021 to 2024 were financed with anywhere from 50% to 90% debt.
In very simple terms, PE firms bought software companies they believed would grow forever, pumping them full of debt, taking on debt to buy them, and did so based on the growth trajectory of software companies from 2005 to 2018, a completely different era, when there were tons more lands to conquer and tons of growth ideas still left to build. (That was a weird noise, but I'm going to keep it.) We're past that. I think we're running out, if we haven't run out already. As I said a few years ago in The Rot-Com Bubble, I believe we're at the end of software's hypergrowth era, and the bill is coming due, with $46.9 billion of software debt now marked as distressed, which means it's unlikely to be paid back.

All of this has also been happening as software growth has slowed across the board, because there are only so many things you can sell and only so many people to sell them to. Generative AI was meant to be the solution here. It was meant to be the panacea that would restart growth in the software sector, allowing software-as-a-service companies to upsell their clients, create new SaaS companies that venture capital and private equity could invest in, and usher in that new era of hypergrowth. Instead, large language models are unprofitable, lack significant or innovative features that make selling new software possible, and outside of OpenAI and Anthropic do not appear to have much revenue potential at all. Most businesses don't even break out their specific AI revenue. IBM literally just stopped doing so, and 80% of their generative AI revenue was consultancy services, hawking it to people that didn't really need AI but needed to be seen doing AI because their shareholders would kill them otherwise. But here's a good example: Salesforce, which makes tens of billions of dollars a year, revealed that they had $800 million of annualized revenue for their Agentforce chatbot.
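To put that Agentforce number in perspective, here's a quick sketch of the math. The $800 million annualized figure is as cited in this episode; the Salesforce company-wide revenue figure is my own rough assumption for scale, not something from this episode:

```python
# Sketch of the Salesforce Agentforce arithmetic.

agentforce_annualized = 800e6        # $800M annualized revenue, as cited above
monthly = agentforce_annualized / 12
print(f"${monthly / 1e6:.1f}M per month")     # ~66.7

# Rough order of Salesforce's total yearly revenue (an assumption for scale,
# roughly "tens of billions of dollars a year" as the episode puts it):
salesforce_annual = 38e9
share = agentforce_annualized / salesforce_annual
print(f"{share:.1%} of total revenue")        # ~2.1%
```

Sixty-odd million dollars a month, a low-single-digit slice of the overall business, is the scale being celebrated as an AI success story.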
That works out to about sixty-something million dollars a month. That's puny. That's dog shit. And if you disagree, you don't know fucking finance and you're living in a dream. But who is the dreamer?

We are at both an end and a beginning: a reckoning for decades of hubris and a punishment for those who believed that all software would grow in perpetuity. I don't know what happens next, but I do know that we're at the beginning of history, and that looking at the past as a way of confirming your biases about everything is a mistake. You need to start looking at the fundamentals. If you don't, if you're a booster, if you're an AI fan listening to this, if you're someone that writes about tech and you're still in the AI camp, you need to start proving your arguments. Because when this collapses, and I'm confident it will, you're going to look like a fucking idiot. Now, some of you might not come out looking so bad, but there are many of you that will. Many of you have taken a kind of middle ground, a centrist position. We all know how those work out. It's time to start looking at the fundamentals. And it's time to stop looking at the dot-com bubble, or looking at Uber, or looking at whatever little myth you have to pretend that all of this is going to work out. If you can prove me wrong, I look forward to reading it or hearing it, but it's been years and nobody appears to have tried. I really don't know what will happen next, but I'll be here to explain what I know every single week. Thank you for listening.