#196: SaaSpocalypse, Claude Super Bowl Ad, SpaceX Acquires xAI & Claude Opus 4.6
Episode 196 covers the 'SaaS apocalypse' - a massive software stock selloff triggered by AI disruption fears, Anthropic's Super Bowl ads attacking OpenAI, and growing 'Move 37 moments' where elite professionals realize AI matches their expertise. The hosts also discuss new AI model releases, enterprise AI platforms, and massive Big Tech capex spending on AI infrastructure.
- The SaaS stock crash reflects uncertainty about future value capture rather than current performance - investors question whether traditional software companies or AI labs will dominate the $6 trillion white-collar labor market
- Human replacement cost pricing may become the dominant AI software model, as CFOs prefer paying $150K for an AI agent that does 10 people's work rather than complex credit-based systems
- Elite professionals including top scientists and coders are experiencing 'Move 37 moments' - realizing AI now matches or exceeds their capabilities in core competencies
- Big Tech's $650 billion combined 2026 capex signals they're targeting the entire knowledge worker market, not just the $300 billion software industry
- Revenue per employee will become a critical metric as companies aim for flat headcount while dramatically increasing productivity through AI
"I just don't think that Wall street even comprehends where this is going. And I very confident that most business leaders don't even have a clue yet of where this all is going in the next five years."
"Human replacement cost is the most obvious answer. Selling AI solutions or agents at a fixed price of one worker that can do the work of 10 is easy."
"This is not Silicon Valley tech bros. These are people who have built their career on analytical thinking, technical work, mathematical development and abstract thought."
"We will never ever write code by hand again and something I was very good at is now free and abundant."
"They're going down not because revenue is falling. They're going down because we're discounting the future uncertainty."
I just don't think that Wall Street even comprehends where this is going. And I'm very confident that most business leaders don't even have a clue yet of where this is all going in the next five years. So the numbers are going to keep getting bigger. Like, there is no end in sight to this. Welcome to the Artificial Intelligence Show, the podcast that helps your business grow smarter by making AI approachable and actionable. My name is Paul Roetzer. I'm the founder and CEO of SmarterX and Marketing AI Institute, and I'm your host. Each week I'm joined by my co-host and SmarterX Chief Content Officer, Mike Kaput, as we break down all the AI news that matters and give you insights and perspectives that you can use to advance your company and your career. Join us as we accelerate AI literacy for all. Welcome to episode 196 of the Artificial Intelligence Show. I'm your host, Paul Roetzer, along with my co-host Mike Kaput. I guess we'll get to the announcement right away. Mike, we've got something fun we're going to do. As I said, this is episode 196, and we have episode 200 of the podcast coming up. A week or two ago I made a note to the team and said, this seems like something we should do, something to celebrate maybe every hundred episodes. So the team has been working on it, and they came up with a bunch of ideas, and the thing we settled on is: we are going to do our regular weekly episode with a live audience. This will be the first time, I think, Mike, that we're doing that. We did a MAICON live podcast, but it was not our regular weekly episode. So episode 200 of the Artificial Intelligence Show is going to be for a live audience. Now, who is that live audience, you may ask? This is going to be for our AI Mastery members only. So if you are an AI Academy Mastery member, whether you have an individual license or are a member of a business account team, you will be able to register to join us. Again, that is going to be on March 3rd.
You can go to academy.smarterx.ai. If you are not a Mastery member, you can join; you have until March 3rd to join and get access to that. This will be live for our members to register for within the AI Academy platform. Hopefully that is ready to go on Tuesday, February 10th, when this comes out. We're recording this on February 9th. If not, just hold tight; it'll be there probably later this week. We'll have that link ready to go. So again, episode 200 on March 3rd, we will have a live audience. Block off a couple hours. We're going to do our regular episode, probably an hour and 15 minutes. We'll log in early, make sure everything's working fine, and then we will do audience Q&A after the regular weekly conversation. So it should be fun. We'll see how it goes. Like I said, we've never really done that, so that should be cool. All right, so that's coming up. And then this episode today is brought to us by the AI for Agencies Summit, happening this Thursday, February 12th. We're going to have over 3,000 agency professionals and leaders joining us for this virtual event. There is a free registration option thanks to our presenting sponsor, Screendragon. That is from noon to 5 Eastern on Thursday, February 12th. The AI for Agencies Summit is designed for marketing agency practitioners and leaders ready to reinvent what's possible in their businesses and embrace smarter technologies to accelerate transformation and value creation. The agenda is live; you can go check it out. We have amazing speakers. It is aiforagencies.com. Again, that's www.aiforagencies.com; you can go and learn more about that. And as I said, there's a free registration option, thanks again to Screendragon for supporting that event. All right, so that is coming up. This is a wild week. I was telling Mike when he asked how my morning was going, like, oh, I don't know.
Creating the launch video for the partner program that everyone's been waiting for us to announce with AI Academy. That's going to get announced on the 12th. I have a keynote tomorrow morning. We have an Intro to AI class on Tuesday. And then Monday is my meeting day, so I have eight meetings today. So if I sound a little frazzled today, all over the place, I've got a little bit going on in my brain at the moment. All right, so let's transition into the AI Pulse, Mike. Again, this is our informal survey. We run this every week to get the pulse of our podcast audience. We ask two questions each week, you can go and take them each week, and then we give a synopsis the following week. So our questions last week were: how do you feel about AI agents interacting autonomously on platforms like Moltbook without human oversight? Again, this is a topic from episode 195, where we talked about Moltbook. Let's see what we got, Mike. The highest response, at 33%, was skeptical: I think the emergent behavior is overhyped. I'd probably put myself roughly in that range. We had about 30% uneasy, and I would also be in that one if it was multiple choice: lack of human oversight is concerning. We had 22% fascinated: this is the future of AI collaboration. And 15% alarmed: this feels like a preview of things going wrong. So if you combine uneasy and alarmed, Mike, that is 45% of the answers, right? Which isn't surprising. I am both of those things. All right, and then our second question was: Anthropic CEO Dario Amodei predicts AI could displace 50% of entry-level white-collar jobs within five years. How realistic do you find this timeline? 60% said about right: this tracks with what I'm seeing. Wow. Okay, that's wild. Now again, this is an informal poll. We are not projecting this as data you should go and run with and make headlines about. It is just the sentiment of our audience.
A small segment of our audience. But that tracks, about right. And then we had 18% say too aggressive: it will take longer than that. 15% too conservative: disruption will happen faster. And then about 7 or 8% skeptical: I don't think AI will displace jobs at that scale. So, all right. Again, what is the URL, Mike? Smarter...
Yeah, smarterx.ai/pulse. We'll have the latest one up when you hear this episode, so jump in and take that. Should be exciting. We'll talk at the end of this episode, once you have more context into the topics, about what questions we'll be asking.
Too cool. All right, and then main topics. Mike, we're going to roll right into the SaaS apocalypse. It's a sexy headline. It was all over the mainstream media. The stock market went nuts last week, and there were lots of articles written about this, lots of podcasts about this, and it was the most obvious topic to lead off with today. So let's talk about the SaaS apocalypse.
All right, Paul. People are freaking out because software stocks experienced their sharpest selloff since the 2008 financial crisis. This past week, more than $300 billion was erased from software, data analytics, and financial data companies in two days, and traders on Wall Street dubbed the event the SaaS apocalypse. Now, the immediate trigger for this was Anthropic's launch of new agentic plugins for legal and sales workflows. That sent legal software stocks into free fall and then spread across the entire SaaS sector. Now, these numbers may have changed since the initial run on these stocks, but I want to give you a sense, just with some numbers and drops, of how much this was kind of a bloodbath. LegalZoom at the time dropped 20% in a single session. Thomson Reuters fell nearly 16%. HubSpot was down roughly 39% year to date. ServiceNow had lost more than 27%. And the S&P North American software index fell 15% in January alone. Now, one equity trader at Jefferies warned that the worst-case view here is that software could become the next print media or department stores due to AI's impact. And a JPMorgan analyst actually said, basically, the sector is now being sentenced before trial. Now, it wasn't all doom and gloom here, because plenty of people pushed back on the hypothesis. Nvidia CEO Jensen Huang called this selloff the most illogical thing in the world. Multiple analysts compared it to the DeepSeek panic of early 2025, which actually turned out to be a buying opportunity. So Paul, I'm curious. You're an observer of the SaaS market and an AI insider in general. Is this another DeepSeek moment where everyone panics and it turns out fine, or are there companies here that are in real trouble?
So this was a really interesting one for me. Somewhat ironically, I was actually in Arizona last week giving a talk on the state of AI and the AI innovation imperative to about a hundred SaaS CEOs. So as everything was transpiring on Monday and Tuesday, I was preparing to give the opening keynote on Wednesday to this group, trying to understand what was happening and then put it in context for a group of people who were directly affected by it, and who obviously have unique insights as well. So I had the opportunity to talk with some of them about what was going on and what they were seeing. I won't get into those conversations specifically, but I can just tell you I have some very personal context as to what was happening. In addition, I've obviously been a SaaS investor since the early days. I was lucky enough to invest in HubSpot's IPO back when it came out at $35 a share or whatever it was. So I've been a long-time investor in a lot of SaaS stocks. I am not at all providing investing advice. I will say that I have personally backed away from a lot of SaaS stocks, starting probably about three to five years ago, because I was just seeing a shift in where the value was going to be created in the future. So I would not say that what happened was a massive surprise to me. The rate at which it happened and the depth of the drops were surprising, but the fact that the multiples usually applied to software companies are being questioned is not surprising at all to me. So I'll try to provide a little bit of context as to what I think is going on. Is this an overreaction from the market? Most likely; it is an extreme reaction. But there is real uncertainty around the stability of software company multiples and whether the traditional software players will be the ones to capture the value moving forward.
And I think that's the most important thing to understand. This isn't necessarily a commentary on their current earnings or current revenue or the projections over the next 12 to 18 months. This is a debate about where the value capture will happen in the next three to five years, basically. A lot of SaaS companies got caught flat-footed when ChatGPT arrived. And I know this from firsthand experience, because I had conversations with CEOs and product teams within these software companies who had no idea what was going on when ChatGPT emerged, who were literally questioning their own product roadmaps. I was on calls where product leaders were visibly shaken by what had happened. So the product teams struggled to understand that moment, and then they had to scramble to figure out their generative AI roadmaps. For the last three-plus years, they've been working to integrate these large language models into their platforms while increasingly having to deal with AI-native startups that are chipping away at use cases and customer bases. And then there are the AI labs themselves, who they're buying the intelligence from through the API. They're integrating large language models from OpenAI, Anthropic, and Google, and those same labs are offering alternatives to their software. So it became a battle over who's going to own the operating system and the interface, and where users will want their AI. At a high level, the question is in part about the $300 billion annual SaaS market, but as I was saying, it's really more about the total addressable market of human labor and which technology companies are going to capture the largest shares of that. I will put the link in the show notes, but Andreessen Horowitz recently came out with a State of Markets report, and it's very good. You don't even have to download it; you can just click through all the slides, and there's a bunch of good data in there.
So they pegged the annual software spend in the US at about $300 to $350 billion. Companies like ours, like yours as a listener, buy software from HubSpot and Workday and ServiceNow and Salesforce and Adobe, and all of that spend combined is about $300 to $350 billion annually. But the bigger opportunity for the software companies moving forward, and for these AI model companies, is the white-collar payroll. In the United States there's about $11 trillion in annual wages, and roughly $6 trillion of that goes to white-collar knowledge workers. So that $300 to $350 billion in software spend is actually likely going to increase. The pie is going to get bigger, but it's also probably going to shift. So the first question investors are asking is: if, let's say, that $300 to $350 billion in software spend goes to half a trillion in three years, who's going to get it? Is it going to be the legacy SaaS companies? Are the model companies going to start chipping away at it? Or is it going to be a bunch of vibe coding, where people just build these apps and start eating away at this software? So this presents pricing pressure. As we discussed in the last episode, where I was sharing my frustration with the credit-based pricing models these software companies are charging, you have these software companies scrambling to figure out, how do we integrate AI when we're not the ones building the models? We're trying to figure out how to fuse it in, but these people can get this from ChatGPT, so why would they come into our platform? So there are these questions of product and how it fits. But the bigger question becomes, how do we price this to the point where people don't get annoyed with us and just start finding alternatives? So I had tweeted, Mike, as a follow-up. I don't know what day this was; it was like Saturday or Friday or something.
And I was thinking about this pricing issue again, and I said: here's what bothers me about possible SaaS pricing models moving forward. Human replacement cost is the most obvious answer. Tokens, credits per agent, et cetera, are all abstract and variable; it's very hard to budget for that. But selling AI solutions or agents at a fixed price of one worker that can do the work of 10 is easy. In the tweet I said, I'm not saying it's right, and most existing software companies could never get away with positioning that way right now. But AI-native startups that have the freedom to be more bold and direct in their messaging will have no problem saying the quiet part out loud. So if you want to sell software, if you want to, say, sell intelligence to a CFO, what is the simplest way to do it? This is a thought experiment for everybody listening. Imagine telling a CFO: we're going to charge you credits, and it's going to be 1.3 credits if it's a chat that uses a reasoning model, 0.7 credits if it's a chat that uses information lookup, and 0.25 credits if it's an email that's sent. Imagine how a CFO hears that. But instead, if you say: listen, we're going to charge you $150,000 a year, basically one FTE at a manager level, and that FTE, that digital coworker, is actually going to do the work of roughly 10 people. Here's the breakdown, here's an analysis of all the capabilities it's going to have, and here's how it's going to shift your workflow internally. You're still going to have humans, but those humans are just going to manage these digital coworkers for $150,000 a year. And we think it'll generate about $3 million a year in revenue. So your revenue per agent. I don't even know if I'm making up a term here, but it's been talked about.
But think about companies the way you think about employee bases: revenue per employee. I can generate $250,000 to half a million in revenue per employee. What's my revenue per agent, or per agent swarm, a group of agents? And again, I'm just thinking out loud, but that is the absolute most logical way to sell AI technology or software in the future: what's the revenue per agent? And then I just pay for that agent. I don't give a shit how many credits you're using or what the utilization rate is, just what's the value it's going to create for me. Now, the argument for SaaS, why maybe this is overblown, is that enterprises rely on these software companies for data management, stability of the technology, security, governance, customer support, product management, the evolution of the products themselves. All of that is provided by the legacy software companies, and the vast majority of enterprises buying the software are not going to vibe code an alternative to HubSpot or Salesforce or ServiceNow. But the concern, and this is where the investment issues come in, is that as AI systems like Claude become capable of doing the work, buyers will want to pay less for the separate SaaS tools and the per-seat licenses. There's going to be an issue with feature unbundling and commoditization, as AI can replicate good-enough versions of the workflows. The example I gave when I was talking about HubSpot and my frustration with our credit model is: okay, we're burning up all these credits paying for the AI to do our chatbot. Why don't I just go get a different chatbot, or do the API ourselves, and just, screw it, I'm not going to pay you guys all this money for it? I know I can get someone to vibe code just that function, or I can just go get an AI-native solution that just does customer service.
Well, now you start to see how you chip away at the overall value. I'm paying for a thousand features with my per-seat license; what if the AI stuff you're offering, that you're trying to charge me for, I don't value that much, or I know I can get cheaper somewhere else? So it breaks the pricing model, and then it starts to eat away at the value proposition. So I'll end with this one, Mike. There was the All-In Podcast, which I've largely stopped listening to because I thought it just got too political, but I do like to bounce around on the timestamps because they still have brilliant insights on key things that you want to hear about. Brad Gerstner, who we've talked about many times, has the BG2 podcast, which is brilliant; he's a venture capitalist. I'm just going to read an excerpt from his commentary on the All-In Podcast, where they were asking him about this. He said: this is a real train wreck. I was on CNBC at the start of the year and I was asked this question: what do you think about all these stocks being down? Referring to software stocks. So again, this is the beginning of the year. And I said, listen, they're all down, and 90% of them deserve to be down. They're going down not because revenue is falling. Look, the revenue is actually stable to increasing for most software companies in terms of revenue growth. They're going down because we're discounting the future uncertainty. And that is the key here. When something as profound as AI comes along all of a sudden, it causes you to question whether or not there's as much certainty and durability in those future free cash flows. That's what the valuations are based on. That's where the market caps come from: future cash flows, not the current ones; they're not rewarded for the current ones. In the case of Salesforce, he gives an example: it's gone from a 30 times free cash flow multiple to 15 times.
That means somebody buying it today says: listen, I think I can count on these free cash flows 15 years into the future. Before, they were willing to pay for 30 years into the future. They're saying, well, hell, with AI, today we don't know what's going to happen seven years into the future. So for people at home to understand why these companies are hitting their numbers but their stocks are going down: they're two totally different things. Again, this is all about the future impact of AI and the uncertainty it creates for these software companies. And when you go look at the answers people are giving, like Jensen's. Jensen is like the most brilliant person I've ever heard talk, and I disagreed with his take. And it wasn't the overblown part; I get the overblown part. But the analogy he tried to use, it was very off. I've never heard Jensen give an answer where I felt like he was caught off guard and didn't have the right words. And I felt like his answer was actually intentionally defensive of a software industry that he knows is ripe to be disrupted. He didn't address the real concern here, which is the uncertainty around the future impact and who aggregates the value of human labor. And I don't know that it's the software companies. Again, I would have a hard time right now making major bets on any publicly traded software company, even at the discounts they're at today, because I have no idea. And I just look at companies like, who are the ones? Was it Mercor? No, Mechanize. The ones who are literally like, yeah, we are here to take humans' jobs. That is our pure mission in life, and they have no problem saying it. So I don't know. I think that's the key: the AI is only going to get smarter. It's only going to get more generally capable. It's going to start to increasingly do the things that we all do. And where is the interface for that?
Do I want to go into HubSpot or Salesforce or Workday, or do I just want to live in Claude and let Claude go get the data, and I just talk to it? I don't know. But that's why it's uncertainty that causes stocks to plummet.
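The pricing thought experiment above can be laid out as simple arithmetic. This is a minimal sketch; every number is the hypothetical figure from the discussion (1.3/0.7/0.25 credits per action, a $150K-per-year agent doing the work of 10 people and generating roughly $3 million), not real vendor pricing:

```python
# Hypothetical comparison of the two AI pricing models discussed:
# abstract credit-based metering vs. flat "human replacement cost" pricing.
# All numbers are illustrative, taken from the thought experiment above.

def credit_based_annual_cost(reasoning_chats, lookup_chats, emails,
                             price_per_credit=1.00):
    """Usage-metered pricing: abstract, variable, hard for a CFO to budget."""
    credits = reasoning_chats * 1.3 + lookup_chats * 0.7 + emails * 0.25
    return credits * price_per_credit

def replacement_cost_pricing(agent_price=150_000, workers_replaced=10,
                             revenue_generated=3_000_000):
    """Flat 'one FTE that does the work of 10' pricing: one line item."""
    return {
        "annual_cost": agent_price,
        "revenue_per_agent": revenue_generated,
        "return_multiple": revenue_generated / agent_price,
    }

deal = replacement_cost_pricing()
print(deal["return_multiple"])  # → 20.0, i.e. $20 of revenue per $1 of agent cost
```

The point of the sketch is the pitch's simplicity: the second model collapses to one budgetable number (revenue per agent), while the first requires forecasting three usage variables before the cost is even known.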
Yeah. I couldn't help coming back to this phrase. I don't know if it even means anything, it's just something I wrote down, but it's like: if I'm thinking about the role of SaaS companies right now, a lot of them are defensible but not premium. In the sense that, look, HubSpot's not going away anytime soon. But on the other side, it's like, well, unless this is getting dramatically more intelligent with a sensible pricing model, what am I paying a premium for? And the answer is nothing. Not that that won't change; we'll talk in a later segment about CRM systems, and they're making some really interesting strides forward. But these tools have to become more intelligent to even begin to justify how much you're paying for some of the capabilities.
Yeah. And again, remember the SaaS playbook: don't even worry about being profitable for years. It was all about growth. And at some point you're like, oh wait, what if that future growth isn't there? What if OpenAI or Anthropic eats into that future growth? So again, the pie keeps getting bigger, but maybe AI natives come along and start taking away chunks of that market share, and the AI labs themselves decide that to keep funding their growth, they've got to take that stuff on. And I don't know anyone who can predict how that plays out. When you have uncertainty about future cash flows and future growth, you have to start factoring that into the value of the stock, and it just plummets. I think it'll bounce, but I don't know that it's ever going to go back to the multiples that the SaaS industry was built on for the last 25 years. And I think that's the concern of Wall Street: we might have to have a reset of the SaaS market. The companies don't go away, but they never trade at the multiples they traded at before, except for the select few who figure out how to be the system of record, have the agent layer, and be the user interface. There will be some of those. I am not smart enough to bet on which ones those are going to be. So that is why, personally, I am not aggressively looking at SaaS stocks for the future in terms of where I'm going to invest, because I'm not smart enough to know.
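Gerstner's multiple-compression point from the All-In excerpt can be expressed in a few lines. A rough sketch: the 30x and 15x multiples are from his Salesforce example, while the free cash flow figure is purely hypothetical:

```python
# Market cap is roughly free cash flow times a multiple, where the
# multiple reflects how many years of future cash flows buyers trust.
# AI uncertainty compresses the multiple, not the cash flows themselves.

def market_cap(free_cash_flow: float, fcf_multiple: float) -> float:
    return free_cash_flow * fcf_multiple

fcf = 10e9  # hypothetical $10B in annual free cash flow, unchanged by AI fears

before = market_cap(fcf, 30)  # buyers pay for ~30 years of future cash flows
after = market_cap(fcf, 15)   # now confident only ~15 years out

# Same fundamentals, half the valuation:
print(after / before)  # → 0.5
```

This is why a company can hit its numbers and still see its stock halve: the cash flow term in the equation is stable, but the confidence term is what repriced.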
All right, our next big topic this week: Anthropic has run its first Super Bowl ad campaign, during Super Bowl 60. We're recording this on Monday, the day after the Super Bowl. The campaign, titled A Time and a Place, included a 60-second pregame spot and a 30-second in-game ad during the first quarter. These ads, which we'll link to in the show notes, show actors playing ad-supported chatbots derailing genuine user questions with absurd sponsored pitches. Now, interestingly, the tagline Anthropic ran in the ads released before the game on YouTube, which you can still see and which we'll link to in the show notes, was the following: quote, ads are coming to AI, but not to Claude. So basically taking shots at ChatGPT for having ads. This appears to have changed to a less aggressive tagline, which we'll talk about, in the actual TV spots during the game yesterday. Regardless, this campaign is a direct response to OpenAI's recent announcement that it would begin testing ads in ChatGPT. And almost as headline-worthy was the freakout from OpenAI about these ads. OpenAI CEO Sam Altman responded on X with a big, long post saying that the ads were clearly dishonest. OpenAI CMO Kate Rouch also posted a response attacking Anthropic for thinking, quote, powerful AI should be tightly controlled in small rooms in San Francisco and Davos, rather than have free, ad-supported versions. So Paul, I'd love to get your thoughts on what you thought of the ads. There's lots of entertainment here, lots of drama, grab your popcorn, it's really fun. But also, what's really worth paying attention to here beyond the messaging and the narrative?
Yeah. So they dropped these on February 4th, and they made the playlist available on the Anthropic YouTube page, so you can watch all four of them. As you were saying, Mike, I put this on LinkedIn that day, and I would say the reaction was roughly split. There were definitely people who thought they were just funny, that they were just sort of poking fun and doing what a challenger brand does: trying to come up on the top player, get some market share. It's just marketing and advertising. I didn't personally get the offense that some people took to it, but I understand it; everybody has a different view of these things. And if you go through all the comments on my LinkedIn post, you can definitely see a mixed reaction to them. Most people thought they were pretty funny, but some people thought they were over the line. So I put it up, and then within like an hour, Sam Altman tweets this novel about the whole thing. I was like, damn, that touched a nerve. It was a very, very unusual response from the dominant player. When Apple was running its ads against Windows and PCs, I don't remember exactly, but I can't imagine the CEO of Microsoft putting out a statement about those ads. Usually when you're the leader, it's just like, yeah, that was funny. You take the L. It was humorous, way to go, nice job, you get them next time. It was just a very bizarre response. And the part that was really weird about OpenAI's responses is that it was obviously coordinated, because multiple people were using this same line: we have more free users in Texas than Anthropic has total.
And I was like, oh, that just looked bad. So anyway, just for context, if you haven't seen Sam's tweet, I'll read a portion of it; this is not the whole thing at all. He said: first, the good part of Anthropic's ads: they are funny, and I laughed. But I wonder why Anthropic would go for something so clearly dishonest. Our most important principle for ads says that we won't do exactly this. We would obviously never run ads in the way Anthropic depicts them. We are not stupid and we know our users would reject that. I guess it's on brand for Anthropic doublespeak to use a deceptive ad to critique theoretically deceptive ads that aren't real, but a Super Bowl ad is not where I would expect it. More importantly, we believe everyone deserves to use AI and are committed to free access because we believe access creates agency. Here we go. More Texans use ChatGPT for free than total people use Claude in the US, so we have a differently shaped problem than they do. If you want to pay for ChatGPT Plus or Pro, we don't show you ads. Anthropic serves an expensive product to rich people. That was a pretty cringey line. We are glad they do, because OpenAI also serves an expensive product option to rich people. So that was just kind of: we are glad they do that, and we are doing that too. But we also feel strongly that we need to bring AI to billions of people who can't pay for subscriptions. And then later on he said: we are enjoying watching so many people switch to Codex. There have now been 500,000 app downloads since launch on Monday, and we think builders are really going to love what's coming in the next few weeks. I believe Codex is going to win. Now, Codex, that's the ad, at least the one I saw; OpenAI ran an ad for Codex. And I know what Codex is and what OpenAI is, and I was still like, wait, was that an OpenAI ad? Like, at the end?
Because I didn't know what was happening throughout the whole thing. And when I saw "you can just build things," I was like, oh, this is an OpenAI ad. I think the one that really set off Sam and the others was this: OpenAI has been running these ChatGPT ads, Mike, I don't know if you've seen them, for months now, where it was like a workout thing and they were doing pull-ups and...
25:32
Yeah.
29:18
And then they scroll the text super fast, like you can't even read it. And every time, I turn to my wife and say, I don't get that ad at all. It doesn't convey the value, and it's hard to read. So one of the Anthropic ads was a parody of that: a guy doing pull-ups. They were very directly attacking OpenAI. And then it got kind of worse, because the CMO of OpenAI also tweeted with similar messaging. It was like: Betrayal, deception, treachery, which were sort of the openings to the Anthropic ads. Those ads are funny, and the former Meta ads executive who made them is good at his job; he had a lot of practice. That felt like a bit of a shot. Here's what's not funny: calling ads a betrayal when your business model is selling paid subscriptions to companies. ChatGPT has more free users in Texas than Claude has globally. And then they put a trademark symbol after Claude, which I'm assuming was a shot at them for going after Claude Bot, maybe; I don't know what that was. Real betrayal isn't ads, it's control. Anthropic thinks powerful AI should be tightly controlled in small rooms in San Francisco and Davos, that it's too dangerous for you, in all caps, that the future should be built somewhere else by someone who is smarter. We don't believe that. And then it just goes on to talk about free access and agency. I just thought it was really weird. OpenAI responding in such a coordinated, defensive way made them feel kind of weak and scared. It's very, very unusual for the number one brand in an ad situation to respond in such an irritated way to ads. So I don't know, something's going on; obviously they're not liking each other at the moment. It is what it is. You can go watch the ads, but they definitely hit a nerve with OpenAI.
And then I think the story of the rest of the Super Bowl was the amount of AI ads: off the charts. We had Genspark, and there was a cool story about how that came together real fast. Google Gemini Nano Banana Pro went more for the sentimental, which was kind of cool. There was AI.com, which was hilarious: somebody paid $70 million for AI.com, it got announced like a week ago, they debuted, and then apparently the website crashed as soon as the ad ran. I don't even know what the hell it is, but anyway, that was there. Salesforce: Benioff and MrBeast did a deal in the weeks leading into it, and it all played out on X. He's like, Jimmy, make us an ad for the Super Bowl. So they did a Slack thing. I don't know, that was hard to follow, though I think it was on brand with whatever the MrBeast game show is right now.
29:18
I think so.
32:00
Yeah. Yeah, like, shit was just blowing up everywhere. I had no idea what was happening; I didn't even know it was Slack. So I'm not sure exactly what the point of that ad was, other than it was MrBeast. Meta glasses: they spent a ton of money advertising the sports glasses. Amazon Alexa Plus had one. You had Ring cameras, Ramp, Rippling, Wix building websites. A whole bunch. And then there were numerous ads where AI was used to create the ad itself. Last year was sort of the start of the tipping point; I feel like we just went all AI on the ads this year. They were everywhere, and I was just dipping in and out; I caught those kind of in between doing the other things I was doing.
32:00
Well, like you said, too, all is fair in kind of love and advertising and whatever, and kudos to them for trying to steal market share. But the narratives around this stuff are also just a bit confusing to me. It feels like almost inside baseball to your average person, totally, right? Your average person might not take away Claude versus OpenAI; they might just come away with a negative impression of AI overall, you know?
32:39
Yeah. And the Genspark one, I was like, I actually don't know what Genspark is. We may have covered it before, but they all start to run together. I was like, isn't that just Claude or ChatGPT in a user interface? It seems like they were running the same ads Microsoft was doing for Excel; everyone's featuring the same use cases. And this goes back to that discussion about SaaS: you're all featuring the same stuff, so which interface am I going to use to do the thing becomes this massive debate. And I agree. All of us, you and I doing this podcast, the people listening to this podcast, we're kind of on the inside here. We get it; we know what these brands are. But I definitely got a text message last night, like, what was that? And then my wife and I were talking about it, and she goes, yeah, I didn't really get the Claude thing. What was that for? And I did see some tweets about that too; for a lot of people, it kind of went over their heads what it was all about.
33:03
All right, our third big topic this week: we're starting to see more of what we would call Move 37 moments for a lot of people. We'll talk about what exactly that means as a throwback to some topics we've covered in the past, but basically, several developments this week pointed to a growing unease among technologists and business leaders about how quickly AI is advancing into core professional work. In one example, Sam Altman posted on X that after he built an app with Codex, the tool suggested features better than what he had come up with himself. He wrote that it made him feel, quote, a little useless, and that it was, quote, sad. Former Dropbox CTO and early Facebook engineer Aditya Agarwal posted a similar reflection after coding over the weekend with Claude, writing, quote, we will never ever write code by hand again, and, quote, something I was very good at is now free and abundant. He felt a bit melancholy about that. On the enterprise side, Goldman Sachs announced it is deploying Anthropic's Claude to automate trade accounting and client onboarding after six months of embedded work with Anthropic engineers, basically creating digital coworkers. And separately, KPMG International pressured its auditor, Grant Thornton, to cut fees by 14%, arguing that AI efficiencies should reduce the cost of audit work as people get automated, augmented, and in some cases replaced. So, Paul, the reason we're pulling these threads together is it really does feel like we're hearing more and more of these Move 37 moments, especially from elite coders and scientists. Not just your average person, which is important, but people who are at the top of their fields are increasingly saying they're having these wake-up calls of, like, holy, this thing is now doing what I do at a very high level of competence. I'm curious if you're seeing that become more and more of a thing.
33:56
Yeah. So the Move 37 moment, just for context: it goes back again to the AlphaGo documentary we've talked about, and the game of Go, when AlphaGo defeated Lee Sedol. I also did my MAICON keynote in 2025 on the Move 37 moment for knowledge workers, and we'll put the link in; that keynote is actually available for free on our YouTube channel, so you can go get the full context of what's going on. But I define it in there, and I was focused more on the human aspect, not the technological aspect, of the Move 37 moment. I defined it as the moment when you realize AI is better than you at what you do. For me, this started in 2022, when we first got image generation technology early that year, and then by fall of '22 we had the ChatGPT moment; you could just start to see this was going to affect a lot of people. I had written a blog post early in 2022 where I said: what if the work that defines you and brings you fulfillment changes? What if AI moves further into the strategic and creative realms much sooner than expected? And that, to me, is the Move 37 moment: the moment when you realize, wow, it is on par with me or better than me at something I considered myself to be an expert in. My point in my opening keynote was, we're just not ready. Society isn't even considering this more broadly, and people won't know how to cope with it and how to move on from that realization. So I just thought it was interesting, because you highlighted Sam and Aditya, and these were just two people who happened to surface in my feeds this week. It wasn't like I was seeking this information out. You start to feel it when more and more people are talking about it. Now, Sam has talked about it before, so every few months he has these tweets like that.

But the one that really got me was David Kipping. I had not been a follower of David's prior to this; I'm now a subscriber to his podcast. He's an associate professor of astronomy at Columbia University and a founding director of the Cool Worlds Laboratory, where he leads groundbreaking research on exoplanets, exomoons, and the search for extraterrestrial life. So this guy is a legitimate expert in his field. He told a story on his podcast that I'll read an excerpt from, because I think it encapsulates the moment and, again, how this is beyond just technology now. He said: Two days ago, I attended what felt to me like one of the most impactful scientific meetings I have ever been to, and I felt compelled to share this with you today. So again, this is him doing a monologue on his podcast. He said: I do have some notes; I have quite a few bullets that I want to get to, so bear with me. And then he actually goes through a bunch of them. He said: I think this will be worth it, because this meeting is transforming the way I think about the future of science. I was well aware of the impact of AI in my field. I had my own thoughts about how it was benefiting me and the risks it might have for the future. But what was so shocking about this meeting is that all of this was said out loud. This wasn't just the voices in my head. Everybody was saying the same thing. Everyone was voicing their concerns about AI; we were all in the same chorus. And what really struck him was that these were people he holds in high esteem, coming to similar conclusions to his own. This was a meeting at the Institute for Advanced Study, one of the most elite intellectual institutes on planet Earth. He said, this is where Einstein worked, this is where Oppenheimer worked; if you've seen the movie Oppenheimer, the scenes were shot there.
It's the home of many generations of brilliant physicists and many other academics as well. He went on to say: This is not Silicon Valley tech bros. These are not billionaires. This is not hype. These are people who have built their careers on analytical thinking, technical work, mathematical development, and abstract thought. These are truly thought leaders. Then he refers to the lead instructor, who was making a presentation to this group of the top minds in the world. He said he showed many examples of research projects being achieved with just a few prompts, and how these systems were so powerful that they were delivering very impressive results at the cutting edge of what we would expect for a scientific paper. The first big shocker, he said, well, I don't know that it was a shocker; I think I'd internalized this already, but to hear it said out loud was jarring, was that AI models had now achieved complete coding supremacy over humans. In software development, there was widespread concession that AI was not just better than humans; it had complete supremacy, and even the phrase "order of magnitude superior" was being used in the room. And it wasn't purely isolated to coding supremacy. There was also a concession that analytical reasoning, problem solving, and mathematics, those skills, were also at least comparable to the abilities of the current AI systems, and perhaps, maybe not quite supremacy, but a superior ability at this point to the people in that room. Which, remember, and this is him speaking, I challenge you to find a room with a higher average IQ. The lead faculty member who was leading this discussion said this, and I had to write it down: these models, in a very broad sense of the word, intellectually, can already do something like 90% of the things that he can do. Again, he's referring to the lead person. 90%. Now, he wasn't sure about that number.
He said it could be 60%, it could be 99%. But it was clear in his mind that it was a majority, and it was only going to grow with future versions of these models. And it had broad scientific ability with just a few prompts. So that was startling: this is not just coding supremacy, this is much more than that. I'll fast-forward to the conclusion; the whole thing is like an hour and 14 minutes and probably worth a watch or a listen. He said: This is a conversation that the most elite institutes in the world are having emergency internal meetings about. The smartest people in the world you can think of are worried about this, see it as a threat to their intellectual supremacy, and have even conceded much of that ground already. That is what makes me think that this is really happening. This is really happening. This is not just coming down the pipe someday; we're in it, we're swimming in it. So again, Mike, I think maybe it starts at those highest levels, where people have hard problems they can push on these models with in ways the average user doesn't. Because, hey, GPT-4 was pretty good at writing emails; how much different is 5.3? But these are people working on the hardest problems in humanity. And if they're seeing advancements like this, where in an internal, private meeting at Princeton, with no tech bros, no Silicon Valley VCs in the room, they're admitting to each other, this thing is on par with me, people with 160-plus IQs, that's hard to comprehend. But I think it shows we're heading into a phase where we have to start having these bigger conversations around the impact this is going to have on people's work and their sense of meaning and all those things we bring up on this podcast.
35:47
Yeah, and I won't spend too much time on this, but, Paul, as we've been doing planning for the year, I started off the year putting together an overall strategy, strategic direction, and vision for our content studio internally, and found myself, just by happenstance, starting off this strategy document, which I think I actually wrapped up on January 1st, with a short essay called My Move 37 Moment for Writing. I'm a writer by trade. I have been a professional writer in one way or another for a very long time. I would argue I'm maybe just shy of being a world-class writer; it is like my superpower, and I don't mean to be arrogant about that, but I have some receipts to show it. And I said that this year, the end of 2025, was very, very different. It was the time where I had to just say: I can safely say AI is a better writer than me in every way that counts. Now, I won't go into all the details; there's a lot of nuance to that. It doesn't mean writing or writers are totally obsolete. It just means that when it comes to taking my ideas and putting them into words, really good words, really logical and emotive constructions, AI is just as good as me, it's way faster at doing it, and it probably will be better soon. And it's not like I didn't know this was coming; we've followed this for a while. But my gosh, I get his quote of "this is really happening." Three years ago, I knew this was going to happen. But when you're in it, you're like, whoa, okay, we're living it. It is here. It's not coming down the line. It is here. And I felt that very viscerally during holiday break.
43:02
Yeah. And I've had those moments myself plenty of times. I think the key is just to talk about it, because even after I did the keynote at MAICON, so many people came up to me and wanted to talk about that concept, because they were feeling it themselves and didn't know how to vocalize it. And again, you almost need meetings like this, where you get these physicists together and they just admit the thing they haven't really told anybody about: hey, I'm kind of feeling threatened here. I don't know what this means for the future of my job and what I'm going to be asked to do. So yeah, I think it's just an ongoing conversation. We're not trying to present some prescriptive answer here about what to do about it. But you're not alone if you're feeling it, whether you're an ad copywriter or a top salesperson or an HR professional, or you're a professor and you write recommendation letters. Ethan Mollick wrote about this like two years ago: isn't the pain, and the time it takes to do the thing, what makes it worthwhile? And what if AI just does the thing for you? So I think it's something we're all going to be faced with. I just think so many people are unaware it's happening, and sometimes the people who are aware it's happening don't want to accept it and prefer to go about their lives and not face it head-on. Whereas we're kind of like: let's just get it out, let's start figuring out how to cope, let's go through the different emotions, and let's figure out the path forward.
44:36
All right, before we dive into rapid fire, Paul, this week's episode is also sponsored by our 2026 State of AI for Business report. In the past we have done a State of Marketing AI report through Marketing AI Institute, but this year we are going beyond marketing-specific research to uncover how AI is being adopted and used across organizations. And we would love, love, love your help creating our most comprehensive report yet. What we have is a survey that takes just about five to seven minutes to fill out, and in return for filling it out, you will get the full report when it drops. Plus, there is a chance to win or extend a 12-month SmarterX AI Mastery membership. So if you go to SmarterX.ai/survey, it'll drop you right into a Typeform survey. Again, it takes five to seven minutes, with a bunch of questions about how you're using AI in your business. And just to be very clear, this is totally different from our AI Pulse surveys. This is a bigger research project we're undertaking, and we're very excited about it.
46:05
This is the sixth year, Mike, that we're doing this?
47:10
It is, it would be the sixth year we're doing this.
47:12
Yeah. Last year we had over 1,800 people respond, so we would love your response. You'll see the data is extremely valuable; you can go download the 2025 report to get a sense. And again, that one was specifically targeting marketers and business leaders, but this one, as Mike was saying, is much broader, trying to get a much greater sense of where everyone's at in terms of their adoption and their sentiment around AI. So really looking forward to that data.
47:14
All right, let's dive into rapid fire this week. SpaceX has acquired artificial intelligence company xAI, both companies owned by Elon Musk. This was an all-stock deal that values the combined entity at $1.5 trillion; SpaceX was valued at $1 trillion and xAI at $250 billion in this transaction. Under the share exchange, one share of xAI converts to roughly 0.14 shares of SpaceX stock. Now, the stated rationale for this tie-up is building orbital data centers. SpaceX has requested FCC authorization to launch up to 1 million satellites for space-based AI compute, and Musk has estimated that space-based AI infrastructure will be the lowest-cost option for AI infrastructure within two to three years. The deal gives the cash-burning xAI access to SpaceX's financial backing, which is important. And the combined company is targeting an IPO as early as summer or fall 2026, with SpaceX reportedly looking to raise up to $50 billion. So, Paul, what do you make of the rationale here? We talked about this being a possibility last week. Are orbital data centers a real play? Some commentators have suggested this might just be about giving xAI a financial lifeline by combining the entities. Is it a little bit of both? What's going on?
47:39
I don't know. I mean, data centers in space are above my pay grade. It's like anything else: you can definitely find a lot of people who think it's a fool's errand and unnecessary for where this is all going, and then Elon is obviously making a major bet that it's the future. So who knows? To be determined. Interestingly, in this announcement, Elon made a big shift. From day one, everything he talked about with SpaceX, and then into Tesla, was always about getting to Mars and being a multiplanetary species. Now the Moon is the mission. In 2018 he said we would have a human base on Mars by 2028, and now the plan is to get an Optimus base on the Moon, maybe by 2028. So I don't know, things shift, goals shift. He said he still thinks Mars is in play, but it might be another five to seven years, probably more like another 20, before that becomes something they actually pursue. But we shall see. What I will say is, anyone who's flown United with Starlink knows the WiFi is life-changing. If you're like me, WiFi on planes is just one of the most horrific experiences: you pay your $8 or whatever it is, and then it doesn't even work, it's slow, and it drives you crazy, man. I flew a United flight that had Starlink, and it was unbelievable. It was like 200 Mbps, lightning fast, reliable from takeoff to landing. So they've got some awesome technology. Whether or not we need data centers in space, I don't know. But yeah, it'll be a massive IPO when SpaceX and xAI as one go public, so stay tuned. I'm sure it's coming later this year.
48:58
All right, next up: Anthropic has released Claude Opus 4.6, its new flagship AI model designed for expert-level reasoning, coding, and sustained multi-step work. The model tops a bunch of different benchmarks, including one on agentic coding, and it also tops Humanity's Last Exam, which measures reasoning ability. A pretty notable improvement is what Anthropic describes as fixing something called, quote, context rot: the tendency of AI models to lose track of details as conversations get longer. Opus 4.6 scored dramatically higher than Sonnet 4.5 on this type of task. The model also features a 1 million token context window in beta, meaning it can process the equivalent of several large books, or an entire codebase, in a single session. Anthropic also introduced agent teams in Claude Code, which allow multiple AI agents to work in parallel. And Opus 4.6 is now available in Claude, in the API, on AWS Bedrock, and on Google Cloud Vertex AI. So, Paul, from what I'm seeing and experiencing using this, the agent teams thing seems like a big deal. The ability to not lose track of details as conversations get longer is also getting a lot of attention. Overall, it seems like a pretty interesting improvement.
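If it helps to picture the agent-teams idea, here is a minimal sketch of the general pattern: subtasks fan out to agents running in parallel, and the results are merged back in the original order. To be clear, this is a generic illustration, not Claude Code's actual interface; `run_agent` is a hypothetical stub standing in for a real model call.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    # Hypothetical stub standing in for one sub-agent working a
    # subtask (a real system would call a model here).
    return f"done: {task}"

def run_agent_team(tasks: list[str], max_agents: int = 3) -> list[str]:
    # Fan the subtasks out to parallel workers and collect the
    # results back in the original task order.
    with ThreadPoolExecutor(max_workers=max_agents) as pool:
        return list(pool.map(run_agent, tasks))

print(run_agent_team(["write tests", "fix lint", "update docs"]))
```

The appealing property of this setup is that each agent works in its own context, so one long-running subtask doesn't crowd out the others.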
50:55
Yeah, a couple things jumped out to me. One, I don't know the exact timing, I didn't go timestamp everything, but the day OpenAI announced the Codex model we're going to talk about, Anthropic launched this. It felt almost simultaneous, and I just started laughing, because it felt like Anthropic was doing to OpenAI what OpenAI historically has done to Google: hold releases until the other company does something, and then drop the thing. That's what OpenAI has done to Google for three years. So I feel like Anthropic is just having fun right now. Their marketing and product teams are basically sitting there laughing and enjoying the fact that they're getting under OpenAI's skin. And this, on top of the ads, all happened at the beginning of last week, which was interesting. Also: Descript is software we use for audio production and editing, and Lovable, I've mentioned, I use for app development, I guess I could say vibe coding. And I got emails from both of them last week saying, oh, we're now on Claude 4.6. So it's immediately infused into these platforms. The other thing I'll mention is Apollo Research, which analyzes the safety of these models. They had a report come out saying they couldn't actually complete their testing of 4.6 because of the situational awareness of the model being tested. They couldn't finish their alignment testing because they kept finding that the model was aware it was being tested, and I'm using "aware" in air quotes here. But that is a major problem moving forward. Again, the models absorb information from the internet, including discussions of how models get tested, and so from their training data they learn that they are tested to find out how aligned they are.
52:12
Yeah.
54:03
And so when you go to measure alignment, you don't know if the model is actually aware it's being tested, or if it just learned from its training data that it gets tested. It's so bizarre, this circular thing in your brain. But I just thought it was kind of crazy that they basically admitted: we can't fully test the alignment because it knows we're doing it.
54:03
That's wild. One last thing here: I did mention this context rot thing as an example for people, because I see a lot of people get almost frozen in time about the limitations of AI. They keep citing all these limitations from whenever they last dived into this, which makes perfect sense; there are plenty of limitations of AI models. But I keep hearing people throw stuff around like hallucinations, or, oh, after a long enough chat it gets bad. Sure, that's happened. But the percentage improvements on this model, which is just a 0.1 version bump from the last one, are dramatic; it's like five times more accurate at this context rot, deep-retrieval stuff. You really have to update your priors if you're leaning on these old ideas of what the limitations are. I think it's critical not to get caught out saying, oh, of course it can't do that. Well, no, in a lot of cases it now can.
54:26
Yeah. And we always say you've got to plan on the frontier of where the AI is going. If you're planning your career, your business strategy, your product strategy, you have to assume the stuff that's solvable will be solved. You have to assume we're going to largely get rid of hallucinations and reduce the error rate of these things. Yes, they exist today, and you should be aware of that, and you should be vetting everything, checking your work, and all that stuff. But it might not be that way forever, based on the current trend lines of how fast these models are improving.
55:24
All right, so next up, like you mentioned, Paul, OpenAI has also launched GPT-5.3 Codex, calling it the company's most capable agentic coding model. The big thing here is that not only is it more capable, it runs 25% faster. It's designed for what OpenAI calls long-horizon tasks, which we've talked about before: multi-step jobs requiring sustained context, research, tool usage, and complex execution. It has set new state-of-the-art scores on multiple benchmarks. And interestingly, it is OpenAI's first model that they say helped build itself, with the Codex team using early versions to debug its own training. It is also the first OpenAI model to hit "high" on the company's internal cybersecurity preparedness framework; there's a reason, I think, OpenAI was posting about that in the last few weeks. The model is now available in the ChatGPT app, the command-line interface, and the IDE extension. So, Paul, that last part, about the model helping build itself: this is what people have been talking about, right, with recursive self-improvement? A very limited example, perhaps.
55:59
Yeah. I mean, Dario at Anthropic was on record last fall saying that 90% of Anthropic's code was going to be AI-generated, and people thought he was crazy. Now you have people at Anthropic saying Claude Code basically wrote itself, that almost 100% of coding is being done by the AI. Now, in theory there are still humans in the loop verifying everything, and over time its reliability and autonomy keep improving. So it's not about just getting rid of the developers and the engineers and the researchers. But increasingly, the thing you have to solve for is how to verify all the output. We've talked about that verification gap: as writers, we can do research reports and all these things, and it can output tens of thousands of words in minutes, but we still have to verify. We can't just publish that stuff; we would never just hit the button and go, okay, here's your code, here's your research report. That's the problem I see in a lot of businesses: they don't know how to deal with the onslaught of stuff you can create, whether it's code or content or videos or images, whatever. Humans still have to figure out how to manage all of it. And I do think there's a race right now, obviously, on the agentic side and the coding side, over who's going to own this. It goes back to the conversation we started with: what is the interface? Where are you going to build things? Where are you going to live as a worker, as a coder, as a researcher? This is a highly competitive space, and I feel like we're probably days or weeks away from the latest thing from Google Gemini, and they'll probably level up.
And it's just this crazy three-legged race right now, where these companies are constantly iterating on the same things and keep coming out with better models. I get why people are kind of overwhelmed by it; it's moving really fast, and it's hard to keep track of. I was talking to my dad last night, and he was asking me something about the ads, and I was like, wait, did we talk about that on the podcast last week? He goes, I think so. And I'm like, really? When did that happen? I'm losing track of it in my own head. It's like, oh no, that was the newsletter; I wrote that this weekend. I hadn't talked about it yet.
57:04
All right, next up, OpenAI has announced Frontier, a new enterprise platform for building, deploying, and managing AI agents that function as what the company calls, quote, AI coworkers inside business workflows. The platform is designed around four pillars: connecting to enterprise systems like CRMs and data warehouses so agents have business context, enabling agents to execute tasks in parallel across real workflows, built-in evaluation so agents improve over time, and enterprise security and governance controls. It's being used by early adopters including Uber, State Farm, Oracle, Cisco, and more. Agents built on Frontier can follow users into ChatGPT, browse the web, and operate applications like Excel and PowerPoint. And analysts actually see this as putting OpenAI on a direct collision course with Microsoft Copilot, despite the fact that Microsoft is a major investor in OpenAI. So I'm curious, Paul, Frontier seems to have some big implications for this trend of AI coworkers we've talked about. Is this about to become a reality? Is this going to ruffle feathers at Microsoft?
59:10
I think this is a pretty important launch. This is one of those where we'll probably look back in six months and think, okay, they were telling us exactly what they were going to do. There's a chart on the launch post from OpenAI that I think ends up being a pretty important chart about how they see the future. So, a few notes. For AI coworkers to actually work, a few things matter. They need to understand how the work actually gets done across systems. They need access to computers and tools to plan, act, and solve real-world problems, just like we do; we need those things to do our jobs. They need to understand what good looks like, so quality improves as the work changes, and for right now that means humans have to be overseeing, approving, and iterating on the work. They need an identity, permissions, and boundaries that teams can trust. If you want to put these agents to work, you need to know they're going to be reliable, that they're not going to break the guardrails or do things they shouldn't do, access things they shouldn't access. And then all of this has to work across many systems that are often spread across multiple clouds, software providers, or sources of truth. So Frontier works with the systems teams already have without forcing them to replatform, is how they explain it. You can bring existing data and AI together where it lives, as well as integrate the applications you already use using open standards. So, a couple of notes. This again goes back to the SaaS apocalypse conversation we opened with. OpenAI is explicitly framing this as an operating system for AI coworkers on a single enterprise platform. It addresses the agent sprawl problem: as more teams deploy agents in isolation, complexity increases. You've got agents all over the place. We've talked about this. How are you going to manage that?
How are you going to manage permissions and data access and quality? So Frontier's pitch is centralized context plus execution plus evaluation plus governance. Each new agent, whether it's an OpenAI agent, your agent, or a third-party agent, faces less friction being integrated into a team of agents, in essence. And it keeps the current stack. Again, it doesn't force replatforming, and they emphasize integrating across existing systems with those open standards, so you're not abandoning current apps or agents. So again, I'm not going to get rid of my software license, but I need everything to work together. So I actually went into ChatGPT and asked, all right, what would be the implications of something like this for HubSpot, Salesforce, Workday, ServiceNow? What are the implications of this kind of approach from OpenAI? And it actually had some really good answers. A few things I'll highlight. One, the user interface moat erodes. If work happens through agents in ChatGPT, users spend less time in the native SaaS user interface. If I'm talking to my CRM data while I'm sitting in a ChatGPT Team license, I don't need to go into HubSpot. The data is just getting pulled in, and I'm less and less dependent on going there. And now imagine I just create an agent, and that agent goes and does everything in HubSpot with one seat. So maybe I've got to buy a seat for my agent, but that agent can go do the work of 25 people and just pull all the data in here. And now you realize why seat licenses collapse. Basically, the system of record becomes the core moat. The strongest position is the app that owns the authoritative data, which is good news for HubSpot and Salesforce and Workday and ServiceNow; they are that system of record.
But if a third-party layer like OpenAI becomes the primary work surface, the SaaS becomes more of a database and processing engine under the hood, not a place I'm logging into every day. And that all leads to pricing pressure and repackaging. As agents automate tasks, buyers are going to ask, why am I paying per seat if humans aren't doing the work? That leads to pressure for more usage- or outcome-based value, not per-seat or per-credit licenses. So the winners will be those who credibly price on outcomes and prove value with instrumentation. Which goes back to my argument: isn't human replacement cost the answer? As much as that sounds like a horrible thing, and I'm not saying it should be this way, it's the most logical thing. I know the value I get out of employing a human for a year, so what is the value of employing a collection of agents for the year? Okay, so I'm going to hire this agent instead of five people. As a CFO or CEO, I get that. Not the thing people want to hear, but trust me, that is how they're thinking. So it's really interesting. Again, like I said, I would go look at that chart where they broke down the different layers, and I feel like you're going to hear a lot more about that chart. Sometimes you just get an instinct of, I think that's an important visual. And I'm pretty sure that's an important visual.
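The replacement-cost math Paul is describing here is simple to sketch; all dollar figures below are assumptions for illustration, not numbers from the episode:

```python
# Illustrative human-replacement-cost pricing (all figures are assumptions).
worker_cost = 150_000        # assumed fully loaded annual cost of one worker
workers_replaced = 5         # "hire this agent instead of five people"
agent_price = worker_cost    # agent priced at the cost of one worker

human_cost = workers_replaced * worker_cost   # $750,000 per year
savings = human_cost - agent_price            # $600,000 per year

print(f"Human cost: ${human_cost:,}; agent price: ${agent_price:,}; "
      f"savings: ${savings:,}")
```

On these assumed numbers the buyer saves $600,000 a year, which is why a CFO can evaluate the pitch instantly, with no credit- or usage-based accounting required.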
1:00:21
All right, next up, we're talking AI capex. Big Tech's combined capital expenditure plans for 2026 now total roughly $650 billion. That's up about 60% from $410 billion in 2025. Amazon has led the announcements here with $200 billion in planned AI spending, more than $50 billion above Wall Street expectations and up from $131.8 billion last year. CEO Andy Jassy said spending is driven by, quote, very high demand for AI compute, though Amazon stock dropped roughly 11% on the news. Alphabet said its 2026 capex could reach $175 to $185 billion, roughly double its 2025 level. CEO Sundar Pichai said even that amount, quote, still won't be enough given supply constraints. Meta's planned capex sits at $115 billion to $135 billion, also nearly doubling year over year. This all comes as labs like OpenAI have been seeking alternatives to Nvidia chips for inference workloads; they have had discussions with startups like Cerebras and Groq. Now, Paul, I guess the reason we're talking capex, the reason we're talking about how hard it is to find compute, is that, like it or not, these companies are putting their money where their mouth is. These are enormous expenditures. Their stocks are getting hit a bit because investors are looking at how much they're spending on this, but it certainly doesn't seem like capex is slowing down.
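The growth rates quoted above follow directly from the dollar figures; a quick sketch of the year-over-year math, using only the prior-year/plan pairs stated in the episode:

```python
def yoy_growth_pct(prev_year, plan):
    """Percent change from prior-year capex to the announced plan."""
    return (plan - prev_year) / prev_year * 100

# Figures from the episode, in billions of dollars.
combined = yoy_growth_pct(410, 650)    # Big Tech combined, 2025 -> 2026
amazon = yoy_growth_pct(131.8, 200)    # Amazon

print(f"Combined: {combined:.0f}%")    # ~59%, the "up about 60%" cited
print(f"Amazon: {amazon:.0f}%")        # ~52%
```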
1:04:48
Do you know how absurd these numbers are?
1:06:16
I mean, they're insane.
1:06:18
They throw around $185 billion like it's an expensive dinner. These numbers are absurd. And to go from like $80 billion in a year to $185 billion in a year, the average person has no concept of how big these companies anticipate the future market to be, to be willing to spend that kind of money. So again, if you hear Alphabet and OpenAI and Amazon and all these companies are spending a hundred billion or more, do you really think it's to go after a $300 billion a year software industry? If that's what you think they're after, you need to rethink the future. Because the only way you spend $185 billion in a year is if you think you're going after the $6 trillion of wages. There's nothing else that makes sense. The only other thing I'll say is I've had conversations lately on a couple of things, and I can't get into the specifics, but we've alluded to this before. All the money, all the opex, the capex, the value, has accrued so far on the training of these models. The reason Cerebras and Groq, with a Q, are in the conversation is because the future is in inference, which has not historically been Nvidia's strong suit. Nvidia's GPUs are used to train the models; that's why Nvidia has been so valuable. The future use is when all of us are using this intelligence everywhere we are, in every product we use, every piece of software and hardware. Intelligence is going to be everywhere. Think about the Meta glasses and whatever OpenAI is building with Jony Ive. It's going to be the inference, the use of the technology. That's where the value of Nvidia and Cerebras and Groq comes from in the future: how they deliver the inference, who's getting paid, and what data center on Earth or in space is being called upon to provide that inference in that moment.
So I guess I'll just say we're just getting started, right? And most of the stuff happening behind the scenes isn't public knowledge yet, but there are reasons why, and I'm trying to figure out how to say this without getting too specific, there are reasons why these AI labs and cloud companies have confidence that they can spend at this level and that there will be a return at the end of it. And I just don't think that Wall Street even comprehends yet where this is going. And I'm very confident that most business leaders don't even have a clue yet where this is all going in the next five years. So the numbers are going to keep getting bigger. There is no end in sight to this.
1:06:19
Yeah, I was going to ask you about Wall Street, because the scale of this, and where it could be going, is of such magnitude. I'm not sure they're thinking longer term, outside of these quarterly investments and earnings, and I wonder if they're missing something.
1:09:20
Totally. They're definitely missing something, but that's what they're paid to do. They missed AI for seven years. They didn't realize the significance of what was happening until ChatGPT hit. So all those years when we were looking out, when we built the institute and were talking about all this stuff, it's just not what they're paid to do. They're not making 10- or 20-year bets. They're making the next three-month bet, and everything moves based on performance and earnings reports and projections for the next 12 months. So I would say Wall Street is reality in that it affects your retirement portfolio and your investments today. It is not necessarily reality about where the future is going and where the money is going to move to. And that has been proven time and time again, or they all would have bought Nvidia in 2015 or 2016. So yeah, just keep watching. Things are going to change, and they're going to move fast. And at some point, hundreds of billions is going to sound like not a lot of money again. OpenAI is actively trying to raise like $1.4 trillion. These numbers are silly, and we lose sight of how much money this really is and why they would spend this kind of money. I think that's the thing people forget: it's not just numbers. There are reasons behind these numbers, and they're massive numbers.
1:09:33
All right, next up, we have some new data from the firm Challenger, Gray & Christmas, who we've talked about before. They publish a lot of data on hiring among US employers, and they reported that US employers announced over 108,000 job cuts in January. That's up 118% year over year and the highest January total since 2009. The firm also reported that hiring plans collapsed: they measured just over 5,300 planned hires by firms, the lowest January figure on record since Challenger began tracking hiring plans in 2009. The transportation sector led with over 31,000 cuts, driven largely by UPS announcing roughly 30,000 layoffs; Amazon accounted for approximately 16,000 additional cuts. They also measure where AI was cited as the reason for cuts, and it was cited for just over 7,600 of January's job cuts, about 7% of the total. And you know, Paul, this is just interesting. 7% seems significant, and obviously 7,600 is not a huge amount in the grand scheme of things, but I bet that number might be higher, since this is only measuring who's admitting it.
1:11:01
Yeah.
1:12:13
And also, I think we've talked about in the past that the real impact here might be hiring freezes, not just layoffs due to AI. Are we seeing any proof of that in here, with these hiring plans being so low?
1:12:14
I mean, jobs are a challenging space when you look at the impact of tariffs and the economy overall, and everything else that's going on. And so I think everybody wants to see hard data that says, okay, AI is directly impacting job cuts, or they want to be proven right that it's not and it's not going to. In the spirit of this being a rapid fire, I'll just say a couple of things. The reality of the future cannot be found by looking at the past. Let's be really clear that whatever the current data says about AI's impact is not representative of what is going to happen. One, the agents aren't yet reliable enough to replace workers at a one-to-one ratio. Two, most companies aren't far enough along their adoption curve to even know how to do that yet. I think the revenue per employee number is going to become more important. If you are a publicly traded company, a venture-backed company, or a private-equity-owned company, and your revenue per employee number isn't going up by double-digit percentages at least, you're going to have a problem; they're going to expect you to get more out of your employees. And I have talked with plenty of companies, and I've said this on the podcast, where flat headcount is the goal. It's not that they want to reduce headcount; no CEO who appreciates their people wants to reduce headcount. And so in many cases, the best-case scenario moving forward is flat headcount. So natural attrition, maybe they're still going to hire some people to get back to that level, but they want revenue per employee to skyrocket. If you're at $200,000 revenue per employee, let's get that to $500,000 in the next 18 months. And then let's not hire; let's stay at this threshold of employees, or increase slightly. We want revenue up 100% over the next three years, but we want headcount flat or up 10%.
They might not be saying that publicly, but trust me, they are saying it privately. And so I think that's the hard reality. I think it's going to create some very difficult problems in the economy, and it's going to get very messy politically leading into the midterms. The lack of headcount growth is going to be a major item in the midterms.
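The flat-headcount scenario Paul describes comes down to simple arithmetic; in the sketch below, the starting headcount of 100 is an assumption for illustration, while the $200K and $500K per-employee figures and the 100% revenue goal are the ones quoted above:

```python
# Flat-headcount scenario (starting headcount of 100 is assumed).
employees = 100
rev_per_emp = 200_000                 # $200K revenue per employee today
revenue = employees * rev_per_emp     # $20M today

# Goal quoted above: revenue up 100% over three years, headcount flat.
revenue_goal = revenue * 2            # $40M
rev_per_emp_after = revenue_goal / employees

print(f"Revenue per employee: ${rev_per_emp:,} -> ${rev_per_emp_after:,.0f}")
# Doubling revenue on flat headcount doubles revenue per employee to $400K;
# hitting the $500K target on flat headcount would require revenue up 150%.
```

Note that the result is independent of the assumed headcount: doubling revenue with flat headcount always doubles revenue per employee.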
1:12:27
All right, next up, two announcements this past week point to a shift in how CRM software might work moving forward. First up, Day AI, a startup founded by two HubSpot veterans, raised a $20 million Series A led by Sequoia Capital. The company has built what it calls an AI-native CRM that replaces traditional fields and records with a context graph and natural language interaction, letting long-running AI agents handle customer data autonomously. Day AI has roughly 120 customers after more than a year of private testing. Separately, HubSpot is positioning itself as an agentic customer platform. The company has launched AI Teammates that research accounts, enrich data, qualify leads, and complete tasks without human prompting, and a companion feature in the platform called Breeze Assistant acts as an AI go-to-market expert that provides role-specific insights and can write to and update the CRM on behalf of users. So, Paul, a very relevant subject given our first topic, and generally some trends we've been hitting on with HubSpot's agentic pricing and features. What was your reaction to seeing these two in parallel, the incumbent and the startup?
1:14:53
It does demonstrate what we were talking about. And I don't know that Day AI is necessarily going to come right after HubSpot directly and start impacting market share, but it's an example of how these AI-native companies can just be built from first principles: what is a smarter CRM, and let's go build that from day one. Now, Christopher O'Donnell is the former Chief Product Officer at HubSpot, as you mentioned. He's a friend of mine. I've known Christopher since 2012, I think, when his company got acquired by HubSpot back in the day. Christopher and I have had conversations about AI going back to 2015 or 2016. I know he's thought about these things, and he lived in a major CRM product for all those years. That is the prototype of an AI-native founder: someone who's been in it. They've seen the pain points, they know where the product breaks, they know the value prop it offers, they devised those product roadmaps for all those years. And then you step back and say, what's just a smarter version of this? Let me get rid of all the crap, all the barriers to adoption and value creation, and go build something different. Those are the kinds of companies these legacy SaaS companies need to be watching for. Who knows how it plays out? But Sequoia doesn't invest in companies that don't have promise; they're a renowned VC firm for a reason. So yeah, definitely a company to watch. And at a higher level, it plays to this trend of what's going to happen with these legacy SaaS companies.
1:16:06
All right, Paul, I'm going to wrap up here with some quick AI product and funding updates, and then we'll close out this week's episode. First up, ElevenLabs, the voice AI platform known for text-to-speech and voice cloning, has raised $500 million in a Series D round led by Sequoia Capital at an $11 billion valuation. That's more than triple the company's valuation from a year ago. Second, there's a new AI startup on the scene called Adaption Labs, founded by former Cohere leaders, rock stars, I would imagine: former VP of Research Sara Hooker and former Director of Inference Computing Sudip Roy. They have raised a $50 million seed round led by Emergence Capital Partners. The company is building AI models designed to use less computing power and learn continuously without expensive retraining or fine-tuning. That's where the name comes from: Adaption models are designed to be adaptive, meaning they update and improve on the fly without the extensive prompt engineering and context setup you currently need to get reliable results from large language models. And last but not least, Waymo, Alphabet's autonomous vehicle subsidiary, raised $16 billion in an investment round at a $126 billion post-money valuation, the largest autonomous vehicle funding round to date. Waymo tripled its annual ride volume in 2025 to 15 million rides and has surpassed 20 million lifetime rides. They also report more than 127 million miles of fully autonomous operation. All right, Paul, one final announcement. As we mentioned at the top of the episode, our AI Pulse survey: go to smarterx.ai/pulse to take it. This week we're asking two specific questions. First, how concerned are you that AI will disrupt your company's core software tools in the next 12 months?
Second, has a recent experience with AI made you rethink the value of a skill you've built over your career? So I'm very interested to see our audience's responses to those. But until then, Paul, really appreciate you breaking everything down for us this week.
1:17:41
Yeah, thanks, everyone. And again, we have an Intro to AI class this week that's free, part of our AI Literacy project. We try to do as many things for free as we can. We've got Intro to AI on, I guess that's Tuesday the 10th, and then we have our AI for Agencies Summit on Thursday. Both of those are great learning experiences at no cost other than your time, so we'd love to have you join us. And then we will be back next week, I think. Oh, we might not have an episode. I'm out of town.
1:19:54
Oh, all right.
1:20:19
We might not have a regular weekly. We've got to figure that out now that I'm thinking about it, because I'm gone Friday to Monday, so that would be kind of tough to record. Or we might drop it a day later; we'll do it on Tuesday. All right, well, stay tuned. We may be doing our weekly next week, but it might be a day later than normal. All right, thanks, everyone. Have a great week. Thanks for listening to the Artificial Intelligence Show. Visit SmarterX.ai to continue on your AI learning journey and join more than 100,000 professionals and business leaders who have subscribed to our weekly newsletters, downloaded AI blueprints, attended virtual and in-person events, taken online AI courses, earned professional certificates from our AI Academy, and engaged in the SmarterX Slack community. Until next time, stay curious and explore AI.
1:20:20